{"input": "What may happen if the VR headset lenses are exposed to sunlight or strong light?", "context": "'用户指南 * User Guide 02 CN 11 EN * 包装内含 使用前注意事项 快速引导 产品部件详情说明 操作说明 02 02 03 06 08 01 \n•本产品支持在系统设置中进行瞳距调节 , 调节时请务必注意,最小瞳距可能会碰触鼻梁。当您佩戴头盔后,您 “显示”中进行手动调节,请注意设置使用不合适的瞳距,可能会引起视觉重影或者眼睛疲劳。 可在“设置” ► •本产品“护眼模式”经德国 TÜV Rheinland 低蓝光认证,通过软件算法降低三色通道中的蓝光量,来达到保护 “护眼” “色彩调节” 眼睛的作用,该模式下画面颜色偏黄,您可根据个人喜好在“设置” 中激活或关闭此功能。 ““显示” ► ► ► 包装内含: VR 头盔 / 手柄 × 2 / 1.5V AA 碱性干电池 × 4/ 眼镜支架 / 遮光鼻托 / 手柄挂绳 × 2 / USB-C 电源适配器 / USB-Cto C 2.0 数据线 / 快速指南 / 用户指南 / 安全与质保指南使用前注意事项 •本产品在开阔的室內环境使用体验最佳,建议至少预留 2×2 米 的空间。使用前请确认身体没有不适且周围环 境安全,特别是佩戴头盔在室内行走移动时,要尽量避免发生意外。 •不建议 12 岁及以下儿童使用本产品,建议将头盔、手柄和配件置于儿童够不到的位置,13 岁以上青少年须在 成人监护下使用,以免发生意外。 •本产品无近视调节功能,近视用户请佩戴眼镜使用并尽量避免近视眼镜被头盔的光学镜片磨伤或刮伤。建议在 使用和收纳时注意防护光学镜片,避免尖锐物体划伤镜片,擦拭清洁时请使用柔软的眼镜布,否则可能划伤镜片, 影响视觉效果。 •长时间使用可能引发轻微的昡晕或者眼疲劳,建议使用 30 分钟后适当休息,可通过眼保健操或观看远处物体缓 解眼疲劳。如果您的身体感到任何不适,请立即停止使用。如果不适持续,请咨询医生。 •当头盔镜片被阳光或紫外线照射时(尤其在户外、阳台、窗台及汽车内存放时),可能导致屏幕出现永久性黄斑。 请尽量避免该情况发生,此种屏幕损坏不在产品的质保范围内。 *本产品最终外观及功能以实物为准,部分地区包装内含物品有所差异,本说明仅供参考。 02 CN\n六自由度 VR 体验 本产品可以追踪头盔和手柄前、后、左、右、上、下和旋转的运动状态,您在现实中的肢体运动会实时反映在虚 拟世界中。 由于没有任何线缆的束缚,您在虚拟世界自由探索时请确保游玩区域的安全。 1. 建议准备一个整洁安全的体验空间:至少 2×2 米;保持房间明亮,避免在只有单色的墙或大面积玻璃、镜子类 反射物以及许多移动画面和物体的空间中使用。 2. 撕下 VR 头盔前端摄像头上的保护膜,并佩戴手柄挂绳。 3. 根据开机后的画面提示进行游玩区域的设定。 ❶ 安装电池 按箭头方向拔出电池盖侧边的绝缘纸 快速引导 提示:本产品虚拟的安全区提醒功能,不能完全保证您在设定好的游戏区域中的安全,请时刻注意周围的安全情况。 提示:建议使用 1.5V AA 碱性电池。 按照图示拨动电池盖拨钮打开电池盖更换电池。 03 CN\n❷ 手柄开机 ❸ 头盔开机 ❹ 佩戴头盔,调节至清晰舒适的位置 首次开机:拔出绝缘纸,手柄自动开机(蓝灯闪烁) 非首次开机:短按手柄 Home 键开机(蓝灯闪烁) 长按头盔电源键 2 秒(蓝灯常亮) 调节旋钮转动绑带,使后脑垫套在头上,微调绑带长度及佩戴位置至视野清晰 04 提示:近视用户请佩戴眼镜或镜片插件使用,本产品不具备近视调节功能。 CN\n❺ 微调顶绑带 微调顶绑带使其受力以减少额头压力 ❻ 瞳 距 调 节 在系统设置:“设置” ► “显示”界面中进行瞳距调节,点击“+”或“-”按钮可微调瞳距直至画面清晰 64mm 请勿 强行 掰动镜 筒,以 免造 成损坏 ! 请注 意设 置使用 不合适 的瞳 距,可 能 会引起 视 觉重影 或 者眼睛 疲 劳。准 确 的瞳距 设 置有助 于 获得清 晰 的图像 并 减少眼睛 疲劳。 05 CN\n产品部件详情说明 头盔状态指示灯 蓝灯常亮:开机进行中或工作状态 黄灯常亮:充电中,电量低于 98% 红灯常亮:充电中,电量低于 20% 绿灯常亮:充电完毕,电量大于 98% 或 充满 蓝灯闪烁:关机进行中 红灯闪烁:电量低于 20% 指示灯熄灭:休眠或关机 06 ① 电源键 开机:长按 2 秒 关机:长按 5 秒 复位:长按 10 秒 开机时,短按休眠 ② ③ ④ ⑤ 状态指示灯 贴脸泡棉 音量键 彩色透视摄像头 使用时请勿遮挡 ⑥ ⑦ ⑧ 顶部绑带 可拆卸 绑带旋钮 环境追踪摄像头 使用时请勿遮挡 ⑨ ⑩ ⑪ USB-C 接口 左 / 右喇叭 距离传感器 佩戴头盔后,系统自动唤醒 摘下头盔后,系统自动休眠 ⑫ ⑬ 眼球追踪摄像头 此功能仅 Pro 版支持 使用时请勿遮挡 面部追踪摄像头 此功能仅 Pro 版支持 使用时请勿遮挡 CN\n手柄状态指示灯 熄灭:已连接或者关机 蓝灯常亮:固件升级模式 蓝灯闪烁:连接中 红蓝灯交替慢速闪烁:等待配对 ① ② 摇杆 菜单键 ③ Home 键 开机 : 短按关机 : 长按 6 秒退出应用 : 短按屏幕中心校正 : 长按 1 秒④ ⑤ ⑥ ⑦ 状态指示灯 抓握键 截屏键 扳机键 ⑧ ⑨ 电池盒 打开:拨动拨钮,电池盒弹出 安装:按压直至自动锁紧 追踪光环 使用时请勿遮挡 注:手柄挂绳可按图示将粗绳穿过细绳并锁紧在手柄尾端 07 CN\n手柄硬件复位 如果手柄出现按 Home 键和任何按键均无反应或者头盔中虚拟手柄卡死不动的问题可拆装电池重新启动手柄。 近视用户配戴 本设备不具备近视调节功能,头盔可支持佩戴镜框宽度小于 150mm 的大多数标准眼镜。 操作说明 头控模式 未连接手柄的情况下,您可通过转动头部光标及点击头盔音量加减按键进行操作。 切换主控手柄射线 在主控菜单下,短按对应手柄的扳机键可以切换主控手柄的射线。 屏幕中心校正 戴着头盔直视前方,按住手柄 Home 键(或头控模式下头盔上的音量减键)1 秒以上,进行屏幕中心的校正将菜 单拉到当前视野朝向位置。 断开手柄 长按手柄 Home 键直至手柄状态指示灯红灯亮起并伴随振动产生时即可松手,此时手柄关机并断开与头盔的连接。 您无需刻意进行手柄关机操作,在以下状态下手柄会自动关机省电: •头盔进入深度休眠时(摘下头盔后一段时间) •头盔手柄管理界面解绑手柄时 •头盔关机时 添加新手柄 如需添加新手柄(头盔最多可同时连接一对手柄,即左右手柄各一只),或解绑手柄后再次连接 , 可进入“设置” “手 柄”,点击“配对”,同时按住手柄 Home 键和扳机键直至手柄状态指示灯红蓝交替闪烁时即可松开,然后根据 头盔画面提示操作。 ► 休眠 / 唤醒 方式一:摘下头盔一段时间后,系统自动休眠;戴上头盔时,系统自动唤醒。 方式二:短按头盔电源键也可以进行休眠或唤醒操作。 硬件复位 头盔硬件复位 如果头盔出现短按头盔电源键没有反应或头盔的画面卡死等问题,可以长按头盔电源键 10 秒以上重新启动头盔。 08 CN\n安装眼镜支架 安装遮光鼻托 如您存在眼镜摩擦光学镜片或者压迫鼻梁的问题,请按照图示安装眼镜支架以增加间隔空间。 您可根据佩戴的舒适度选择是否安装。 如您感觉鼻子处漏光影响体验,请按照图示安装遮光鼻托配件。 由于眼睛空间密闭可能加剧起雾及出汗问题,您可根据喜好选择是否安装。 ❶ 摘下贴脸泡棉 ❷ 将眼镜支架按照图示安装在产品上 ❸ 将贴脸泡棉按照图示安装眼镜支架上 ❶ 摘下贴脸泡棉 ❸ 安装贴脸泡棉❷ 将遮光鼻托按照图示方式安装在贴脸泡棉上 注:按照图示拆卸眼镜支架 09 CN\n更换贴脸泡棉 贴脸泡棉多次清洁和长时间使用后会变色和质地变软,您可酌情更换新泡棉。 更换顶绑带 摘下贴脸泡棉 ❸ 安装贴脸泡棉 按照图示捏住顶绑带金属扣,往下压到底然后抽出 ❷ •购买优质热门应用 •畅 聊 社 区, 与 众 多 PICO 玩 家 一起探索 VR 世界 •管理设备更便捷 •参与丰富互动活动 •更多精彩内容等你来发现 ❶ 微 信公 众 号:PICO VR抖音:PICO官 方 旗 舰 店哔 哩 哔 哩:PICO-VR官 方微 博:PICO-VR ❶ ❷ 10 CN\nIn The Box: VR Headset / 
2 Controllers / 4 1.5V AA Alkaline Batteries / Glasses Spacer / Nose Pad / 2 Controller Lanyards / USB-C Power Adapter / USB-C to C 2.0 Data Cable / Quick Guide / User Guide / Safety and Warranty Guide Important Health & Safety Notes • This product is designed and intended to be used in an open and safe indoor area, free of any tripping or slipping hazards. To avoid accidents, remain conscious of the potential confines of your physical area and respect the boundary of your virtual area whenever you see it. Be sure to wear the lanyards when using the Controllers. Make sure that there is enough space around your head and body (at least 2 meters by 2 meters) to stretch your arms to avoid damage or injury to yourself, others, and your surroundings. • This product is not recommended for children aged 12 and under. It is recommended to keep headsets, controllers and accessories out of the reach of children. Teenagers aged 13 and over must use it under adult supervision to avoid accidents. • This product is designed to accommodate most prescription glasses. Make sure to wear the VR Headset in a manner in which the VR Headset lenses do not rub or impair your prescription lenses. • Prolonged use may cause dizziness or eye fatigue. It is recommended to take a break every 30 minutes. Try relieving your eyestrain by looking at distant objects. If you feel any discomfort, stop using the product immediately. If the discomfort persists, seek medical advice. • Do not expose the optical lenses to direct sunlight or other strong light sources. Exposure to direct sunlight may cause permanent yellow spot damage on the screen. Screen damage caused by sunlight exposure or other strong sources of light is not covered by the warranty. • This product supports interpupillary distance (IPD) adjustment in system settings. When adjusting, please be aware that with the minimum IPD, it may touch the bridge of the nose. You can adjust the IPD according to your actual interpupillary distance in \"Settings\" ► \"Display\". Please note that using inappropriate IPD may increase the risk of discomfort. • This product has an “Eye Protection Mode”, certified by TÜV Rheinland (Germany), which can protect your eyes by reducing blue light in the three color channels using software algorithms. The screen appears yellowish in this mode and you can turn this feature on/off in \"Settings\" ► \"Display\" ► \"Color\" ► \"Eye Protection\". • Protect optical lenses during use and storage to prevent damage, such as scratches or exposure to strong light or direct sunlight. * Product and packaging are updated regularly, and the functions and contents of the standalone headset may be upgraded in the future. Therefore, the content, appearance and functionality listed in this manual and product packaging are subject to change and may not reflect the final product. These instructions are for reference only. * Carefully read this user guide before using the product and share this information with any other users, as it contains important safety information. Keep the user guide as a reference for the future. 11 EN\n6 Degrees of Freedom VR The device can track your translational and rotational movements in all directions (up/down, left/right, forward/backward, pitch, roll, and yaw). Your movements in the real world will be captured and translated to what you see in the virtual world when using the appropriate content. Ensure a safe environment before you start your VR experience. 1. Clear a safe indoor area of at least 2 meters by 2 meters. 
Keep the room bright, avoid spaces with mainly single-colored walls, glass, mirrors, moving pictures or other similar objects. 2. Remove the protective film that covers the headset front cameras. Wear the lanyards connected to the Controllers. 3. Set up your environment by following instructions on the VR Headset screen. Install Batteries ❶ Pull the tab to remove the insulating paper. Quick Guide 2 m 2m This product can not guarantee your safety with guardian system, you will need to always pay attention to the surrounding safety. * Note: 1.5V AA alkaline batteries should be used. Slide the toggle according to arrow direction to open the battery case. 12 EN\nPower on the Controller ❷ First Start: The Controller will start automatically after removing the insulating paper. Others: Short press the Home button for 1 second until the status indicator flashes blue. Power on the VR Headset ❸ Long press the Power button for 2 seconds until the status indicator turns blue. Wear Your Headset for a Comfortable Fit and View ❹ Adjust the strap dial to turn the strap so that the back of your head rests on the padding. Fine-tune the length and position of the strap to give a clear view. * Note: You can use this product with prescription glasses or lenses insert. 13 EN\nFine-tune the Top Strap ❺ Fine-tune the head strap to reduce pressure on the forehead. Interpupillary Distance (IPD) Adjustment ❻ In System Setting, go to “Setting” ► “Display” to adjust IPD, tap “+” or “-” button to slightly adjust IPD until the picture is clear. 14 64mm Please note that inappropriate IPD setting may cause ghosting or eyestrain. Accurate IPD setting helps you get a clear imaging and ease eyestrain. EN\nProduct Details VR Headset Status Indicator Legend Blue: Powered on with battery over 20% Yellow: Charging: Battery is less than 98% Red: Charging: Battery is less than 20% Green: Charging: Battery is more than 98% or charge complete Blue flashing: Shutting down Red flashing: Battery is less than 20% Off: Sleeping or Powered off Power Power on: Long press for 2 seconds Power off: Long press for 5 seconds Hardware reset: Long press for 10 seconds Short press to enter sleep or wake up Status Indicator Face Cushion Volume ① ② ③ ④ ⑤ ⑥ ⑦ ⑧ RGB See Through Camera Do not block during use. Top Strap Removable Strap Dial Tracking Cameras Do not block during use. ⑨ ⑩ ⑪ USB-C Interface Left/Right Speaker Proximity Sensor The system wakes up when the VR headset is put on, sleeps when VR headset is taken off. ⑫ ⑬ Eye Tracking Cameras Pro version only. Do not block during use. Face Tracking Camera Pro version only. Do not block during use. 15 EN\nController Status Indicator Legend Off: Connected or Powered off Blue: Firmware updating in progress Blue flashing: Searching for connection Red and blue flashing alternately: Pairing in progress 16 Joystick Menu ③ ① ② Home Power on: Short press Power off: Long press for 6 seconds Return home screen: Short press Screen recentering: Press for 1 second Status Indicator Grip Capture Trigger ④ ⑤ ⑥ ⑦ ⑧ ⑨ Battery Case Open: Slide down the toggle and pop up the battery case. Lock: Push the battery case to lock. Tracking Ring Do not block during use. * Note: Pass the Controller Lanyard through the string as shown and lock at the end of the Controller EN\nOperating Instructions Headset Control Mode If the Controller is not connected, you can interact with the home screen by moving your head to direct the crosshairs over your intended selection and clicking the Volume Up/Down button on the VR Headset. 
Switch the pointer of the master Controller In the home screen, short press the Trigger of the corresponding Controller to switch the pointer of the master Controller. Screen re-centering Wear the VR Headset and look straight ahead, press and hold the Home button of the Controller or VR Headset (or the Volume Down button of the VR Headset in head control mode) for more than 1 second to re-center the screen. Disconnect the Controller Press and hold the Home button until the status indicator turns red and the Controller vibrates. Controllers will automatically shut down to save power in the following cases: when the VR Headset enters deep sleep (a while after the VR Headset is taken off); when the Controller is unpaired; when the VR Headset is powered off. Add a new Controller If you need to add a new Controller (the VR Headset can only connect one left Controller and one right Controller) or reconnect with an unpaired Controller, go to “Settings” ► “Controller” and click on “Pair”. Press and hold the Home button and the Trigger of the Controller at the same time until the red and blue lights of the Controller flash alternately, and then follow the instructions on the VR Headset screen. Sleep / Wake up Option 1 (Proximity Sensor) Take off the VR Headset for automatic sleeping; wear the VR Headset for automatic waking up. Option 2 (POWER Button) Press the Power button of the VR Headset for manual sleeping or waking up. Hardware reset VR Headset reset If the visual in the VR Headset freezes, or the VR Headset does not respond after a short press of the Power button, you can press the Power button of the VR Headset for more than 10 seconds to reboot the VR Headset. Controller reset If the virtual Controller, the Home button or any buttons of the Controller don't respond, remove and reinstall the battery case to restart the Controller. The VR Headset Adjustment This device has no myopia adjustment function. The VR Headset allows wearing most standard glasses with a frame width of less than 150mm. 17 EN\nInstall Glasses Spacer Install Nose Pad If your glasses rub against the headset lens or press on the bridge of your nose, please follow the picture to install the Glasses Spacer to increase the space. You can install it or not according to your situation. If you feel light leaking from your nose, please follow the picture to install the Nose Pad to block the light. You can consider having it installed at your own discretion. Disassemble the Face Cushion. Install the Glasses Spacer on the Headset. ❸ ❶ ❷ Install the Face Cushion on the Glasses Spacer. Disassemble the Face Cushion. Install the Nose Pad on the Face Cushion. ❶ ❷ Install the Face Cushion on the Headset. ❸ * Note: Disassemble the Glasses Spacer 18 EN\nReplace Face Cushion The Face Cushion may show color change, surface fluff and a softer texture after long-term use and repeated cleaning. You can replace it with a new Face Cushion as needed. Replace Top Strap ❶ ❷ Disassemble the Face Cushion. Pinch the metal buckle of the top strap as shown, press it down and pull it out. Install the Face Cushion back on. 
❸ ❷ ❶ • Purchase high-quality and trending apps • Join PICO Community and explore the VR worldwith other PICO players• Manage your device with ease • Engage in diverse and interactive activities • More exciting features waiting for you 19 EN\n'", "answers": ["Exposure to sunlight or strong light may cause permanent yellow spot damage on the screen."], "length": 2188, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "45e06726b160cfb851e2179017f90e37f4f4dec62f346e33"} {"input": "What was Hugh H. Goodwin's rank in the United States Navy?", "context": "Hugh Hilton Goodwin (December 21, 1900 – February 25, 1980) was a decorated officer in the United States Navy with the rank of Vice Admiral. A veteran of both World Wars, he commanded escort carrier during the Mariana Islands campaign. Goodwin then served consecutively as Chief of Staff, Carrier Strike Group 6 and as Air Officer, Philippine Sea Frontier and participated in the Philippines campaign in the later part of the War.\n\nFollowing the War, he remained in the Navy and rose to the flag rank and held several important commands including Vice Commander, Military Air Transport Service, Commander, Carrier Division Two and Commander, Naval Air Forces, Continental Air Defense Command.\n\nEarly life and career\n\nHugh H. Goodwin was born on December 21, 1900, in Monroe, Louisiana and attended Monroe High School there (now Neville High School). Following the United States' entry into World War I in April 1917, Goodwin left the school without receiving the diploma in order to see some combat and enlisted the United States Navy on May 7, 1917. He completed basic training and was assigned to the battleship . Goodwin participated in the training of armed guard crews and engine room personnel as the Atlantic Fleet prepared to go to war and in November 1917, he sailed with the rest of Battleship Division 9, bound for Britain to reinforce the Grand Fleet in the North Sea.\n\nAlthough he did not complete the last year of high school, Goodwin was able to earn an appointment to the United States Naval Academy at Annapolis, Maryland in June 1918. While at the academy, he earned a nickname \"Huge\" and among his classmates were several future admirals and generals including: Hyman G. Rickover, Milton E. Miles, Robert E. Blick Jr., Herbert S. Duckworth, Clayton C. Jerome, James P. Riseley, James A. Stuart, Frank Peak Akers, Sherman Clark, Raymond P. Coffman, Delbert S. Cornwell, Frederick J. Eckhoff, Ralph B. DeWitt, John Higgins, Vernon Huber, Albert K. Morehouse, Harold F. Pullen, Michael J. Malanaphy, William S. Parsons, Harold R. Stevens, John P. Whitney, Lyman G. Miller and George J. O'Shea.\n\nGoodwin graduated with Bachelor of Science degree on June 3, 1922, and was commissioned Ensign in the United States Navy. He was subsequently assigned to the battleship and took part in the voyage to Rio de Janeiro, Brazil, before he was ordered to the Naval Torpedo Station at Newport, Rhode Island for submarine instruction in June 1923. Goodwin completed the training several weeks later and was attached to the submarine . He then continued his further training aboard submarine and following his promotion to Lieutenant (junior grade) on June 3, 1925, he qualified as submariner.\n\nHe then served aboard submarine off the coast of California, before he was ordered for the recruiting duty to San Francisco in September 1927. 
While in this capacity, Goodwin applied for naval aviation training which was ultimately approved and he was ordered to the Naval Air Station Pensacola, Florida in August 1928. Toward the end of the training, he was promoted to lieutenant on December 11, 1928, and upon the completion of the training in January 1929, he was designated Naval aviator.\n\nGoodwin was subsequently attached to the Observation Squadron aboard the aircraft carrier and participated in the Fleet exercises in the Caribbean. He was transferred to the Bureau of Aeronautics in Washington, D.C. in August 1931 and served consecutively under the architect of naval aviation William A. Moffett and future Chief of Naval Operations Ernest J. King.\n\nIn June 1933, Goodwin was ordered to the Naval War College at Newport, Rhode Island, where he completed junior course in May of the following year. He subsequently joined the crew of aircraft carrier and served under Captain Arthur B. Cook and took part in the Fleet exercises in the Caribbean and off the East Coast of the United States.\n\nHe was ordered back to the Naval Air Station Pensacola, Florida in June 1936 and was attached to the staff of the Base Commandant, then-Captain Charles A. Blakely. When Blakely was succeeded by William F. Halsey in June 1937, Goodwin remained in Halsey's staff and was promoted to Lieutenant Commander on December 1, 1937. He also completed correspondence course in International law at the Naval War College.\n\nGoodwin was appointed Commanding officer of the Observation Squadron 1 in June 1938 and attached to the battleship he took part in the patrolling of the Pacific and \nWest Coast of the United States until September 1938, when he assumed command of the Observation Squadron 2 attached to the battleship .\n\nWhen his old superior from Lexington, now Rear Admiral Arthur B. Cook, was appointed Commander Aircraft, Scouting Force in June 1939, he requested Goodwin as his Aide and Flag Secretary. He became Admiral Cook's protégé and after year and half of service in the Pacific, he continued as his Aide and Flag Secretary, when Cook was appointed Commander Aircraft, Atlantic Fleet in November 1940.\n\nWorld War II\n\nFollowing the United States' entry into World War II, Goodwin was promoted to the temporary rank of Commander on January 1, 1942, and assumed duty as advisor to the Argentine Navy. His promotion was made permanent two months later and he returned to the United States in early 1943 for duty as assistant director of Planning in the Bureau of Aeronautics under Rear admiral John S. McCain. While still in Argentina, Goodwin was promoted to the temporary rank of Captain on June 21, 1942.\n\nBy the end of December 1943, Goodwin was ordered to Astoria, Oregon, where he assumed command of newly commissioned escort carrier USS Gambier Bay. He was responsible for the initial training of the crew and was known as a strict disciplinarian, but the crew appreciated the skills he taught them that prepared them for combat. Goodwin insisted that everyone aboard has to do every job right every time and made us fight our ship at her best.\n\nDuring the first half of 1944, Gambier Bay was tasked with ferrying aircraft for repairs and qualified carrier pilots from San Diego to Pearl Harbor, Hawaii, before departed on May 1, 1944, to join Rear admiral Harold B. 
Sallada's Carrier Support Group 2, staging in the Marshalls for the invasion of the Marianas.\n\nThe air unit, VC-10 Squadron, under Goodwin's command gave close air support to the initial landings of Marines on Saipan on June 15, 1944, destroying enemy gun emplacements, troops, tanks, and trucks. On the 17th, her combat air patrol (CAP) shot down or turned back all but a handful of 47 enemy planes headed for her task group, and her gunners shot down two of the three planes that did break through to attack her.\n\nGoodwin's carrier continued providing close ground support operations at Tinian through the end of July 1944, then turned her attention to Guam, where she gave identical aid to invading troops until mid-August that year. For his service during the Mariana Islands campaign, Goodwin was decorated with the Bronze Star Medal with Combat \"V\".\n\nHe was succeeded by Captain Walter V. R. Vieweg on August 18, 1944, and appointed Chief of Staff, Carrier Division Six under Rear admiral Arthur W. Radford. The Gambier Bay was sunk in the Battle off Samar on October 25, 1944, during the Battle of Leyte Gulf, after helping turn back a much larger attacking Japanese surface force.\n\nGoodwin served with Carrier Division Six during the Bonin Islands raids and the naval operations at Palau, and took part in the Battle of Leyte Gulf and operations supporting the Leyte landings in late 1944. He was later appointed Air Officer of the Philippine Sea Frontier under Rear admiral James L. Kauffman and remained with that command until the end of hostilities. For his service in the later part of World War II, Goodwin was decorated with the Legion of Merit with Combat \"V\". He was also entitled to wear two Navy Presidential Unit Citations and the Navy Unit Commendation.\n\nPostwar service\n\nFollowing the surrender of Japan, Goodwin assumed command of the light aircraft carrier on August 24, 1945. The ship's air missions over Japan became mercy flights over Allied prisoner-of-war camps, dropping food and medicine until the men could be rescued. She was also present at Tokyo Bay for the Japanese surrender on September 2, 1945.\n\nGoodwin returned with San Jacinto to the United States in mid-September 1945 and was detached in January 1946. He subsequently served in the office of the Chief of Naval Operations until May that year, when he entered instruction at the National War College. Goodwin graduated in June 1947 and served on the Secretary's committee for Research on Reorganization. Upon promotion to Rear admiral on April 1, 1949, Goodwin was appointed Chief of Staff and Aide to Commander-in-Chief, Atlantic Fleet under Admiral William H. P. Blandy.\n\nRevolt of the Admirals\n\nIn April 1949, budget cuts and the proposed reorganization of the United States Armed Forces by the Secretary of Defense Louis A. Johnson launched a wave of discontent among senior commanders in the United States Navy. Johnson proposed merging the Marine Corps into the Army and reducing the Navy to a convoy-escort force.\n\nGoodwin's superior officer, Admiral Blandy, was called to testify before the House Committee on Armed Services, and his harsh statements in defense of the Navy cost him his career. Goodwin shared his views and openly criticized Secretary Johnson for having power concentrated in a single civilian executive, who is an appointee of the Government and not an elected representative of the people. 
He also criticized aspects of defense unification which permitted the Joint Chiefs of Staff to vote on arms policies of individual services and thus \"rob\" the branches of autonomy.\n\nThe outbreak of the Korean War in summer 1950 proved Secretary Johnson's proposal incorrect, and he resigned in September that year. Secretary of the Navy Francis P. Matthews had resigned one month earlier.\n\nLater service\n\nDue to the Revolt of the Admirals, Blandy was forced to retire in February 1950, and Goodwin was ordered to Newport, Rhode Island for temporary duty as Chief of Staff and Aide to the President of the Naval War College under Vice admiral Donald B. Beary in April 1950. Goodwin was detached from that assignment two months later and appointed a member of the General Board of the Navy. He was shortly thereafter appointed acting Navy Chief of Public Information, as the substitute for Rear Admiral Russell S. Berkey, who was relieved due to illness, but returned to the General Board of the Navy in July that year. Goodwin served in that capacity until February 1951, when he relieved his Academy classmate, Rear admiral John P. Whitney, as Vice Commander, Military Air Transport Service (MATS).\n\nWhile in this capacity, Goodwin served under Lieutenant general Laurence S. Kuter and was co-responsible for the logistical support of United Nations troops fighting in Korea. The MATS operated from the United States to Japan, and Goodwin served in this capacity until August 1953, when he was appointed Commander, Carrier Division Two. While in this assignment, he took part in Operation Mariner, a joint Anglo-American exercise which encountered very heavy seas over a two-week period in fall 1953.\n\nGoodwin was ordered to the Philippines in May 1954 and assumed duty as Commander, U.S. Naval Forces in the Philippines, with headquarters at Naval Station Sangley Point near Cavite. He held that command during the period of tensions between Taiwan and China and publicly declared shortly after his arrival that any attack on Taiwan by the Chinese Communists on the mainland would result in US participation in the conflict. The naval fighter planes under his command also provided escort for passing commercial planes. Goodwin worked together with retired Admiral Raymond A. Spruance, then-Ambassador to the Philippines, and accompanied him during visits to Singapore, Bangkok and Saigon in January 1955.\n\nOn December 18, 1955, Goodwin's classmate Rear admiral Albert K. Morehouse, then serving as Commander, Naval Air Forces, Continental Air Defense Command (CONAD), died of a heart attack, and Goodwin was ordered to CONAD headquarters in Colorado Springs, Colorado to assume Morehouse's position. While in this capacity, he was subordinated to Army General Earle E. Partridge and was responsible for the Naval and Marine Forces allocated to the command designated for the defense of the Continental United States.\n\nRetirement\n\nGoodwin retired on June 1, 1957, after 40 years of active service, and was advanced to the rank of Vice admiral on the retired list for having been specially commended in combat. A week later, he was invited back to his Monroe High School (now Neville High School) and handed a diploma showing that he had been graduated with the class of 1918. He then settled in Monterey, California, where he taught American history at Stevenson school and was a member of the Naval Order of the United States.\n\nVice admiral Hugh H. Goodwin died at his home on February 25, 1980, aged 79. 
He was survived by his wife, Eleanor with whom he had two children, a daughter Sidney and a son Hugh Jr., who graduated from the Naval Academy in June 1948, but died one year later, when the Hellcat fighter he was piloting collided with another over the Gulf of Mexico during training.\n\nDecorations\n\nHere is the ribbon bar of Vice admiral Hugh H. Goodwin:\n\nReferences\n\n1900 births\n1980 deaths\nPeople from Monroe, Louisiana\nMilitary personnel from Louisiana\nUnited States Naval Academy alumni\nNaval War College alumni\nUnited States Naval Aviators\nUnited States Navy personnel of World War I\nUnited States Navy World War II admirals\nUnited States Navy vice admirals\nUnited States submarine commanders\nRecipients of the Legion of Merit", "answers": ["Vice Admiral."], "length": 2292, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "e02f6a69d7b2a96a3aa6cd84a9189c2d552f6fb089f216e1"} {"input": "Who was Ralph Rokebye's brother?", "context": "Rokebye, Ralph of Yorks, arm. Gloucester Hall, matric. 9 Nov., 1582, aged 15; student of Lincoln's Inn 1585. See Foster's Inns of Court Register.\nRokebye, Ralph of Herts (? Yorks), gent. Broadgates Hall, matric. entry 28 Feb., 1589-90, aged 14. See pedigree in Foster's Yorkshire Collection.\nRokeby, William (brother of Sir Richard, treasurer of Ireland, son of John, of Thundercliffe Grange, Yorks), fellow of King's Hall, Cambridge, D.Can.L.; rector of Sandal 1487, and of Halifax, Yorks, 1502, rector of Fakenham, Norfolk, 1496, chancellor of Ireland 1498, and 1515, bishop of Meath, and privy councillor 1507, archbishop of Dublin 1512, archdeacon of Surrey 1520, until his death 29 Nov., 1521. See Ath. ii. 717; Cotton, i. 25; & Lansdowne MS. 979, ff. 4, 6.\nRolfe, Augustine (Rolfus) M.A. from Queen's Coll., Cambridge, 1595; incorporated 10 July, 1599.\nRolf, Richard B.A. from Emanuel Coll., Cambridge, 1584-5 (incorporated 11 July, 1585); M.A. 1588. See Foster's Graduati Cantab.\nRolfe, William cler. fil. New Coll., matric. 10 March, 1656-7, B.A. 1660, fellow, M.A. 14 Jan., 1663-4; rector of Brampton 1668, and of Stoke Bruern, Northants, 1676, until his death, buried (at Stoke Bruern) 6 Sept., 1693. See Baker's Northants, i. 86.\nRolfe, William s. William, of Stoke-Bruern, Northants, cler. Brasenose Coll. 7 July, 1688, aged 16; student of Inner Temple 1692, buried in the Temple church 1 March, 1692-3. See Foster's Inns of Court Reg.\nRolle, Denis youngest son John, of Steventon, Devon, equitis. Exeter Coll., matric. 15 Feb., 1666-7, aged 17; brother of John same date.\nRolle, Denis s. D., of Heanton, Devon, arm. Exeter Coll. matric. 24 Oct., 1687, aged 17; B.A. 1691, M.A. 1694 (as Denys), rector of Merton, Devon, 1696. See Samuel 1687, & Foster's Index Ecclesiasticus.\nRolle, (Sir) Henry of Devon, arm. fil. Broadgates Hall, matric. 14 June, 1594, aged 18; student of Middle Temple 1597 (as son and heir of Henry, of Steventon, Devon, esq.), knighted 23 July, 1603, died in 1617. See Foster's Inns of Court Reg.\nRolle, Henry of Devon, arm. Exeter Coll., matric. 20 March, 1606-7, aged 17; bar.-at-law, Inner Temple, 1618, bencher 1633 (2s. Robert, of Heanton, Devon), M.P. Callington 1621-2, 1624-5, Truro 1625-1626, 1628-9, serjeant-at-law 1640, recorder of Dorchester 1636, a judge of king's bench 1645, chief justice of upper bench 1648-55, died 30 July, 1656, buried in Shapwick church, Somerset; brother of Samuel 1605. See Ath. iii. 416; & Foster's Judges and Barristers.\nRolle, Henry s. Alex., of Tavistock, Devon, gent. 
Christ Church, matric. 23 March, 1696-7, aged 17.\nRolle, John of Devon, arm. Exeter Coll., matric. 30 May, 1589, aged 15; B.A. 8 Feb., 1592-3, M.A. 25 May, 1596.\nRolle, John 1s. John, of Steventon, Devon, equitis. Exeter Coll., matric. 15 Feb., 1666-7, aged 18; of Bicton, Devon; died in his father's lifetime, buried at Bicton 22 April, 1689; brother of Denis 1667, and father of Denis 1698.\nRolle, Richard s. Richard, of Cookeburye, Devon, gent. New Inn Hall, matric. 26 Sept., 1634, aged 18; B.A. from Jesus Coll., Cambridge, 1638, incorporated from Gloucester Hall 17 Dec., 1639, M.A. 2 July, 1642, rector of Sheviocke, Cornwall, 1656; father of the next-named. See Foster's Index Eccl.\nRolle, Richard s. R., of Sheviock, Cornwall, cler. St. Alban Hall, matric. 3 July, 1674, aged 17; B.A. 1678.\nRolle, Robert (Rooles or Roales) fellow New Coll. 1551-60 from Mark Lane, city of London, B.A. 26 June, 1555, M.A. 26 July, 1560, B.D. 22 Jan., 1572-3, D.D. June, 1585, a teacher in Westminster school; perhaps canon of Combe (4) in Wells, 1574, and rector of Stoke Climsland, Devon, 1574. See O.H.S. i. 345; & Foster's Index Eccl.\nRolle, Samuel s. Denis, of Great Torrington, Devon, arm. Exeter Coll., matric. 16 July, 1687, aged 18, B.A. 1691; bar.-at-law Middle Temple 1697; M.P. Barnstaple 1705, died 1747; see Denis 1687. See Foster's Judges and Barristers.\nRolle, William B.C.L. 14 July, 1528; perhaps vicar of Yarncombe, Devon, 1536. See Foster's Index Ecclesiasticus.\nRolles, Gabriel (Rooles) B.A. from St. John's Coll., Cambridge, 1610-11, M.A. 1614; incorporated 13 July, 1619, rector of East Locking, Berks, 1620, as Rolle. See Foster's Graduati Cantab.\nRolles, Richard gent. Jesus Coll., matric. 1 March, 1632-3, B.A. next day, M.A. 15 Oct., 1635; perhaps created B.D. 20 Dec. 1642, \"ex regis gratia,\" rector of Wavendon, Bucks, and of Witham, Essex, 1646, by the Westminster assembly. See Add. MS. 15,670, p. 70.\nRolles, William s. Richard, of Lewknor, Oxon, gent. St. John's Coll., matrie. 12 March, 1637-8, aged 17, B.A. 9 Nov., 1641, M.A. 6 July, 1644; B.D. from Jesus Coll. 12 Sept., 1661, rector of Wheatfield, Oxon, 1660, and of Chalfont St. Giles, Bucks, 1662. See Foster's Index Eccl.\nRolles, William created M.A. from Exeter Coll. 14 April, 1648.\nRolleston, Simon created M.A. 31 Aug., 1636.\nRolleston, Thomas of Devon, gent. Wadham Coll., matric. 12 May, 1620, aged 16.\nRollinson, Francis 1584. See Rallinson.\nRollinson, William s. \"Jose,\" of London, gent. St. John's Coll., matric. 7 March, 1694-5, aged 15; perhaps brother of John Rawlinson, of New Coll. 1692. See page 1236.\nRolt, Edward youngest son of Tho., of London, equitis. Merton Coll., matric. 7 Nov., 1701, aged 15; of Sacomb, Herts, and Chippenham, Wilts, student of Lincoln's Inn, 1702, M.P. St. Mawes 1713, Grantham 1715-22, Chippenham 1722; died 22 Dec., 1722; his father knighted 1 Oct., 1682, and died 9 Sept., 1710. See Foster's Parliamentary Dictionary.\nRolte, George s. Thomas, of St. Margarets par. Darenth, Kent, pleb. St. Alban Hall, matric. 17 June, 1631, aged 18; B.A. 20 June, 1631, M.A. 29 April, 1634, incorporated at Cambridge 1639.\nRomane, Edmund pleb. Balliol Coll., matric. 20 Feb., 1627-8, aged 18; B.A. next day, M.A. 3 June, 1630.\nRomaine, Matthew pleb. Balliol Coll., matric. 10 June, 1630, B.A. same day, M.A. 14 May, 1633, vicar of Stoke Gaylard, Dorset, 1639; father of the next. See Foster's Index Eccl.\nRomayne, Thomas s. Matth., of Stoke Gaylard, Dorset, minister. Wadham Coll., matric. 17 July, 1669, aged 17; B.A. 
from Hart Hall 1673, \"the intruded\" rector of Stoke Gaylard 1675. See Foster's Index Eccl.\nRomayne, William (Ronayne) gent. Trinity Coll., matric. 31 July, 1671, aged 16.\nRome, Harcourt s. William, of London, p.p. Brasenose Coll., matric. 13 Dec., 1672, aged 17.\nRome, William s. G. (? \"Gul.\"), of Northampton (city), pleb. Brasenose Coll., matric. 11 Dec., 1684, aged 16.\nRomney, Joseph B.A. from Emanuel Coll., Cambridge, 1610-11, M.A. 1614; incorporated 8 July, 1614, student of Inner Temple 1610, as of London, gent. See Foster's Inns of Court Reg.\nRone, John s. Randolph, of Hanmer, Flints, pleb. Brasenose Coll., matric. 10 Oct., 1634, aged 18; D.D. Trinity Coll., Dublin, 25 Jan., 1666 (as Roane), vicar of Hanmer, Flints, 1644, ejected same year, dean of Clogher 1667, bishop of Killaloe 1675, until his death 5 Sept., 1692. See Cotton's Fasti Ecc. Hib. i. 467.\nRone, William of New Coll. 1661. See Roane.\nRoode, Edward (or Rode) B.A. 21 July, 1522, M.A. 26 Nov., 1534; perhaps canon of Southwell 1561-73.\nRoode, Edward cler. fil. Merton Coll., matric. 22 Nov., 1650; Eton postmaster 1649, fellow 1651, B.A. 2 March, 1651-2, M.A. 14 Dec., 1655; incorporated at Cambridge 1657, and LL.D. 1671; vicar of Gamlingay, co. Cambridge, rector of one moiety 1661, and of the other 1677; died at Cambridge 1689. See Burrows, 525; & O.H.S. iv. 292.\nRoode, Onesiphorus s. Edward, of Thame, Oxon, sacerd. New Inn Hall, matric. 27 Oct., 1637, aged 16, B.A. 1 July, 1641; incorporated at Cambridge 1645; chaplain to the house of lords after the expulsion of the bishops; minister of New chapel, Tuttle-Fields, Westminster, 1648, until ejected in 1660. See Calamy, i. 195.\nRood, Richard M.A. from Pembroke Coll. 5 Dec., 1634.\nRooke, John s. Tho., of Broadwell, co. Gloucester, pleb. Pembroke Coll., matric. 1 March, 1683-4, aged 17; brother of Thomas 1693.\nRooke, John s. Tho., of Whitchurch, Wilts, gent. Balliol Coll., matric. 14 Jan., 1713-14, aged 17.\nRooke, Nicholas s. Arthur, of Totnes, Devon, gent. Exeter Coll., matric. 10 March, 1670-1, aged 16; B.A. 1674, M.A. 1677, rector of Dartington, Devon, 1679. See Foster's Index Eccl.\nRooke, Robert \"ser.\" Oriel Coll., matric. 1 April, 1656, B.A. 1659.\nRooke, Robert s. R., p.p. St. Alban Hall, matric. 30 March, 1677, aged 17.\nRooke, Thomas pleb. Christ Church, matric. 3 May, 1659.\nRooke, William (Roock) of Dorset, pleb. Brasenose Coll., matric. entry under date 20 March, 1578-9, aged 19; B.A. from St. Alban Hall 30 Jan., 1582-3, M.A. 9 May, 1586.\nRooke, William of Dorset, gent. New Coll., matric. 12 July, 1605, aged 18; B.A. 21 Feb., 1608-9, chaplain, M.A. 16 Dec., 1611, rector of North Cheriton, Somerset, 1618. See Foster's Index Eccl.\nRooke, William s. J., of Workington, Cumberland, p.p. Queen's Coll., matric. 22 Oct., 1669, aged 17; B.A. 1674, M.A. 1677, B.D. 1690, vicar of Plumstead, Kent, 1691, and rector of Hadley, Hants, 1695. See Foster's Index Eccl.\nRookes, Christopher (Rokys or Rokkis) B.A. 8 July, 1522, M.A. 1 July, 1527, B.D. supd. Oct., 1540; principal of Magdalen Hall 1529-32, vicar of Stanstead Abbots, Herts, 1534. See Foster's Index Eccl.\nRookes, Jonas B.A. from Magdalen Hall 24 April, 1599, M.A. 11 Feb., 1601-2 (2s. William, of Roydes Hall); vicar of Penistone, Yorks, 1619, see Foster's Index Eccl.; styled fellow and bursar of University Coll. in Foster's Yorkshire Collection, possibly brother of the next-named.\nRookes, Robert of Yorks, pleb. Magdalen Hall, matric. 
14 May, 1602, aged 19; possibly brother of the last-named.\nRo(o)kes, William demy Magdalen Coll. 1544, B.A. supd. 1551, fellow 1552-71, M.A. 27 April, 1556, B.Med. supd. 24 April, 1561. See Bloxam, iv. 99.\nRookes, William s. William, of Rhodes Hall, Yorks, gent. University Coll., matric. 30 June, 1665, aged 16; died at Oxford in 1667.\nRoope, Ambrose s. A., of Dartmouth Parva, Devon, arm. Exeter Coll., matric. 15 March, 1671-2, aged 16.\nRoope, George s. Ant., of Bradford, Wilts, gent. Hart Hall, matric. 10 Oct., 1702, aged 15.\nRoope, John s. Nicholas, of Dartmouth, Devon, gent. Exeter Coll., matric. 17 Nov., 1637, aged 15; student of Lincoln's Inn 1638. See Foster's Inns of Court Reg.\nRoope, Nicholas of Devon, gent. Broadgates Hall, matric. 6 Feb., 1606-7, aged 18; B.A. 6 Nov., 1610; probably father of the last-named.\nRooper, Thomas s. T., of London, gent. Trinity Coll., matric. 9 July, 1699, aged 16; B.A. 1703, M.A. 19 Feb., 1705-6, as Roper.\nRooper, William of St. Alban Hall 1667. See Roper.\nRoos, Brian D.Can.L. or doctor of decrees of the university of Valentia; incorporated 3 Feb., 1510-11; died 1529, buried in the church of Chelray. See Fasti, i. 31.\nRoot, Isaac pleb. St. John's Coll., matric. 2 July, 1658, admitted to Merchant Taylors' school 1649 (only son of Isaac, merchant taylor); born in Trinity parish 20 Aug., 1641. See Robinson, i. 193.\nRoots, Richard s. Tho., of Tunbridge, Kent, gent. St. John's Coll., matric. 26 Dec., 1689, aged 15; demy Magdalen Coll. 1690-1702, B.A. 1693, M.A. 1696, rector of Chilmarck, Wilts, 1702-27, canon of Sarum 1722, rector and vicar of Bishopstone, Wilts, 1728; brother of William 1699. See Rawl. iii. 447, and xix. 90; Bloxam, vi. 111; & Foster's Index Eccl.\nRoots, Thomas of Sussex, pleb. Magdalen Hall, matric. entry 17 Nov., 1581, aged 13; B.A. supd. 1 July, 1584, bar.-at-law, Lincoln's Inn, 1594. See Foster's Judges and Barristers.\nRootes, Thomas s. William, of Tunbridge, Kent, pleb. St. John's Coll., matric. 31 Jan., 1628-9, aged 23; B.A. 12 Feb., 1628-9, vicar of Long Stanton All Saints, co. Cambridge, 1630. See Add. MSS. 15,669-70; & Foster's Index Eccl.\nRootes, Thomas pleb. St. John's Coll., matric. 2 July, 1658; B.A. 1661, M.A. 1666; possibly father of Richard 1689, and William 1699.\nRoots, William s. Tho., of Tunbridge, Kent, gent. Christ Church, matric. 16 March, 1698-9, aged 18; B.A. 1704; clerk Magdalen Coll. 1705-11, M.A. 1707, rector of Little Berkhampstead, Herts, 1714; brother of Richard 1689. See Bloxam, ii. 85; & Foster's Index Eccl.\nRoper, Francis s. Robert, of Trimdon, co. Durham, gent. Corpus Christi Coll., matric. 16 Dec., 1661, aged 18; probably identical with Francis, son of Robert, of Kelloe, co. Durham, farmer, was admitted sizar of St. John's Coll., Cambridge, 21 Sept., 1658, aged 16; fellow, B.A. 1662-3, M.A. 1666, B.D. 1673, vicar of Waterbeach, co. Cambridge, 1678, canon of Ely 1686-90, rector of Northwold, Norfolk, 1687, died 13 April, 1719. See Mayor, 138; Surtees' Durham, i. 107; & Foster's Index Eccl.\nRoper, John (or Rooper) demy Magdalen Coll., from Berks, M.A. fellow, 1483, D.D. disp. 27 June, 1506, (first) Margaret professor of divinity, 1500, vice-chancellor of the university 1505, and 1511, principal of Salesurry and George Hall, rector of Witney, Oxon, 1493, vicar of St. Mary's church, Oxford, canon of Cardinal Coll. 1532; died May, 1534. See Ath. i. 76; & Landsowne MS. 979, f. 118.\nRoper, John B.A. disp. 4 July, 1512.\nRoper, Thomas of Trinity Coll. 1699. 
See Rooper.\nRoper, Philip of Kent, arm. Gloucester Hall, matric. 7 Sept., 1588, aged 15 (subscribes Rooper).\nRoper, William (subscribes Rooper) of co. Hereford, militis fil. St. Alban Hall, matric. entry dated 5 June, 1607, aged 13; probably of Malmains, Kent, 2nd son of Sir Christopher Roper, afterwards 2nd baron Teynham. See Foster's Peerage.\nRoscarrock, Henry of Cornwall, arm. Hart Hall, matric. entry under date 17 Dec., 1576, aged 21; probably son of Thomas, of Roscarrock, and brother of the next, and of Richard 1581.\nRoscarrock, John B.A. 11 Feb., 1576-7; perhaps from Exeter Coll. (and 1s. Thomas, of Roscarrock, Cornwall); died 24 Nov., 1608; brother of Henry and Richard. See O.H.S. xii. 65.\nRoscarrock, Nicolas (Roiscariot) B.A. supd. 3 May, 1568, student Inner Temple 1571, as of Roscarrock, Cornwall. See Foster's Inns of Court Reg.\nRoscarrock, Richard of Cornwall, arm. Broadgates Hall, matric. entry under date circa 1581, aged 19; student of Middle Temple 1583 (as 3s. Thomas, of Roscarrock, Cornwall, esq.), brother of Henry and John. See Foster's Inns of Court Reg.\nRosdell, Christopher of Yorks, pleb. St. Edmund Hall, matric. entry under date 22 Dec., 1576, aged 22, B.A. 4 July, 1576; rector of St. Bennet Sherehog, London, 1579, and vicar of Somerton, Somerset, 1582. See Foster's Index Eccl.\nRose, Christopher s. John, of Marlow, Bucks, gent. Christ Church, matric. 13 Feb., 1622-3, aged 21, B.A. same day; rector of Hutton, Essex, 1642. See Foster's Index Ecclesiasticus.\nRose, Christopher s. Giles, of Lynn Regis, Norfolk, gent. Lincoln Coll., matric. 8 July, 1670, aged 15; student of Gray's Inn, 1673. See Foster's Gray's Inn Register.\nRose, Gilbert Augustinian Canon, B.D. supd. 22 May, 1512, and supd. 12 Dec., 1519, for incorporation as D.D.\nRose, Henry \"ser.\" Lincoln Coll., matric. 22 July, 1658, B.A. 16 Jan., 1660-1, fellow 1662 from Pirton, Oxon, M.A. 1663 (incorporated at Cambridge 1688), B.D. 1672; minister of All Saints, Oxford, but running much into debt, and marrying beneath himself, left his fellowship and church about 1674, retired to London, and at length to Ireland. See Ath. iv. 561.\nRose, Hugh s. \"Dav. Ni.\" (Nigg 4to.), of Ross, Scotland, p.p. (subs. pleb.). Balliol Coll., matric. 3 April, 1707, aged 20; B.A. 1709.\nRose, John B.A. 8 June, 1519, fellow Merton Coll. 1523, M.A. 31 March, 1525; one of these names vicar of Shoreham, Kent, 1536. See Foster's Index Ecclesiasticus.\nRose, John of co. Leicester, pleb. Merton Coll., matric. 24 Nov., 1581, aged 21.\nRose, John s. Jeremy, of Swell, co. Gloucester, pleb. Corpus Christi Coll., matric. 12 Dec., 1623, aged 15; B.A. 4 July, 1626.\nRose, John s. Rich., of Halberton, Devon, gent. Exeter Coll., matric. 14 May, 1688, aged 17.\nRose, John s. J., of West Derby, co. Lancaster, pleb. University Coll., matric. 7 March, 1712-13, aged 18, B.A. 1716; rector of Bilborough, Notts, 1722. See Foster's Index Eccl.\nRose, Jonathan s. Th., of Mickleton, co. Gloucester, pleb. St. Alban Hall, matric. 16 May, 1677, aged 18; B.A. 9 Feb., 1680-1.\nRose, Joseph s. Thomas, of Sturminster Newton, Dorset, pleb. Oriel Coll., matric. 12 Dec., 1623, aged 19.\nRose, Richard B.A. from Exeter Coll. 14 June, 1621; perhaps student of Middle Temple 1622 (as son and heir of John, of Lyme, Dorset, gent.), and M.P. Lyme Regis April-May, 1640, 1640 (l.p.), till his death after 1648. See Foster's Inns of Court Reg. & Foster's Parliamentary Dictionary.\nRose, Richard arm. Exeter Coll., matric. 29 March, 1656; student of Lincoln's Inn 1659, as 4s. 
Richard, of Wootton Fitzwarren, Dorset, esq. See Foster's Inns of Court Reg.\nRose, Richard s. Richard, of Monks Kirby, co. Warwick, pleb. Magdalen Coll., matric. 3 May, 1672, aged 16 (as Rosse); chorister 1670-6. See Bloxam, i. 95.\nRose, Richard s. R(ichard), of Wyng, Bucks, gent. Trinity Coll., matric. 7 May, 1680, aged 16; bar.-at-law, Inner Temple, 1699. See Foster's Judges and Barristers.\nRose, Stephen of co. Gloucester, pleb. Corpus Christi Coll., matric. 21 Jan., 1619-20, aged 16; B.A. 13 Nov., 1621, M.A. 2 July, 1625, vicar of Aldermaston 1627, and rector of Barkham 1633, and of Arborfield, Berks, 1640, and perhaps of Hartley Mawditt, Hants, 1652. See Foster's Index Ecclesiasticus.\nRose, Stephen \"ser.\" Lincoln Coll., matric. 19 Nov., 1650.\nRose, Stephen \"servi. fil.\" Magdalen Coll., matric. 19 Nov., 1650 (subscribes \"serv.\").\nRose, Stephen \"ser.\" Magdalen Coll., subscribed 23 Nov., 1655; B.A. from Wadham Coll. 1659, vicar of Cold Overton, co. Leicester, 1662-3, and rector of Woolhampton, Berks, 1667-95, father of Temple. See Foster's Index Eccl.\nRose, Temple s. Step., of Woolhampton, Berks, cler. Trinity Coll., matric. 29 March, 1693, aged 17, B.A. 1696.\nRose, Thomas Minorite, B.D. 22 June, 1509.\nRose, Thomas of Herts, pleb. Magdalen Hall, matric. 10 Oct., 1589, aged 15.\nRose, Thomas s. Seth, of Telscombe, Sussex, sacerd. Oriel Coll., matric. 5 June, 1640, aged 18; his father rector of Telscombe 1604, etc. See Foster's Index Eccl.\nRose, Thomas s. Edw., ", "answers": ["Sir Richard."], "length": 2952, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "dee63735510f6958f4c5bf318421c9b9fac0bc8b3e341a4a"} {"input": "What is the name of the most active fan club?", "context": "Football Club Urartu (, translated Futbolayin Akumb Urartu), commonly known as Urartu, is an Armenian professional football team based in the capital Yerevan that currently plays in the Armenian Premier League. The club won the Armenian Cup three times, in 1992, 2007 and 2016. In 2013–2014, they won the Armenian Premier League for the first time in their history.\n\nIn early 2016, the Russia-based Armenian businessman Dzhevan Cheloyants became a co-owner of the club after purchasing the major part of the club shares. The club was known as FC Banants until 1 August 2019, when it was officially renamed FC Urartu.\n\nHistory\n\nKotayk\nUrartu FC were founded as FC Banants by Sarkis Israelyan on 21 January 1992 in the village of Kotayk, representing the Kotayk Province. He named the club after his native village of Banants (currently known as Bayan). Between 1992 and 1995, the club was commonly referred to as Banants Kotayk. During the 1992 season, the club won the first Armenian Cup. At the end of the 1995 transitional season, Banants suffered a financial crisis. The club owners decided that it was better to merge the club with FC Kotayk of Abovyan, rather than disband it. In 2001, Banants demerged from FC Kotayk, and was moved from Abovyan to the capital Yerevan.\n\nYerevan\n\nFC Banants was relocated to Yerevan in 2001. At the beginning of 2003, Banants merged with FC Spartak Yerevan, but was able to limit the name of the new merger to FC Banants. Spartak became Banants's youth academy and later changed the name to Banants-2. Because of the merger, Banants acquired many players from Spartak Yerevan, including Samvel Melkonyan. After the merger, Banants took a more serious approach and have finished highly in the league table ever since. 
The club managed to lift the Armenian Cup in 2007.\nExperience is making way for youth for the 2008 and 2009 seasons. The departures of most of the experienced players have left the club's future to the youth. Along with two Ukrainian players, Ugandan international, Noah Kasule, has been signed.\n\nThe club headquarters are located on Jivani Street 2 of the Malatia-Sebastia District, Yerevan.\n\nDomestic\n\nEuropean\n\nStadium\n\nThe construction of the Banants Stadium was launched in 2006 in the Malatia-Sebastia District of Yerevan, with the assistance of the FIFA goal programme. It was officially opened in 2008 with a capacity of 3,600 seats. Further developments were implemented later in 2011, when the playing pitch was modernized and the capacity of the stadium was increased up to 4,860 seats (2,760 at the northern stand, 1,500 at the southern stand and 600 at the western stand).\n\nTraining centre/academy\nBanants Training Centre is the club's academy base located in the Malatia-Sebastia District of Yerevan. In addition to the main stadium, the centre houses 3 full-size training pitches, mini football pitches as well as an indoor facility. The current technical director of the academy is the former Russian footballer Ilshat Faizulin.\n\nFans\nThe most active group of fans is the South West Ultras fan club, mainly composed of residents from several neighbourhoods within the Malatia-Sebastia District of Yerevan, since the club is a de facto representer of the district. Members of the fan club benefit from events organized by the club and many facilities of the Banants training centre, such as the mini football pitch, the club store and other entertainments.\n\nAchievements\n Armenian Premier League\n Winner (1): 2013–14.\n Runner-up (5): 2003, 2006, 2007, 2010, 2018.\n\n Armenian Cup\n Winner (3): 1992, 2007, 2016.\n Runner-up (6): 2003, 2004, 2008, 2009, 2010, 2021–22\n\n Armenian Supercup\n Winner (1): 2014.\n Runner-up (5): 2004, 2007, 2009, 2010, 2016.\n\nCurrent squad\n\nOut on loan\n\nPersonnel\n\nTechnical staff\n\nManagement\n\nUrartu-2\n\nFC Banants' reserve squad play as FC Banants-2 in the Armenian First League. 
They play their home games at the training field with artificial turf of the Urartu Training Centre.\n\nManagerial history\n Varuzhan Sukiasyan (1992–94)\n Poghos Galstyan (July 1, 1996 – June 30, 1998)\n Oganes Zanazanyan (2001–05)\n Ashot Barseghyan (2005–06)\n Nikolay Kiselyov (2006–07)\n Jan Poštulka (2007)\n Nikolay Kostov (July 1, 2007 – April 8, 2008)\n Nedelcho Matushev (April 8, 2008 – June 30, 2008)\n Kim Splidsboel (2008)\n Armen Gyulbudaghyants (Jan 1, 2009 – Dec 1, 2009)\n Ashot Barseghyan (interim) (2009)\n Stevica Kuzmanovski (Jan 1, 2010 – Dec 31, 2010)\n Rafael Nazaryan (Jan 1, 2011 – Jan 15, 2012)\n Volodymyr Pyatenko (Jan 17, 2013 – June 30, 2013)\n Zsolt Hornyák (July 1, 2013 – May 30, 2015)\n Aram Voskanyan (July 1, 2015 – Oct 11, 2015)\n Tito Ramallo (Oct 12, 2015 – Oct 3, 2016)\n Artur Voskanyan (Oct 3, 2016 – Aug 11, 2018)\n Ilshat Faizulin (Aug 12, 2018 –Nov 24, 2019)\n Aleksandr Grigoryan (Nov 25, 2019 –Mar 10, 2021)\n Robert Arzumanyan (10 March 2021–24 June 2022)\n Dmitri Gunko (27 June 2022–)\n\nReferences\n\nExternal links\n Official website \n Banants at Weltfussball.de \n\n \nUrartu\nUrartu\nUrartu\nUrartu", "answers": ["South West Ultras fan club."], "length": 819, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "c29e95ab6195406aceecf3874186150cb1b8b26db5bcd0e4"} {"input": "What is the recommended space for using the VR headset?", "context": "'用户指南 * User Guide 02 CN 11 EN * 包装内含 使用前注意事项 快速引导 产品部件详情说明 操作说明 02 02 03 06 08 01 \n•本产品支持在系统设置中进行瞳距调节 , 调节时请务必注意,最小瞳距可能会碰触鼻梁。当您佩戴头盔后,您 “显示”中进行手动调节,请注意设置使用不合适的瞳距,可能会引起视觉重影或者眼睛疲劳。 可在“设置” ► •本产品“护眼模式”经德国 TÜV Rheinland 低蓝光认证,通过软件算法降低三色通道中的蓝光量,来达到保护 “护眼” “色彩调节” 眼睛的作用,该模式下画面颜色偏黄,您可根据个人喜好在“设置” 中激活或关闭此功能。 ““显示” ► ► ► 包装内含: VR 头盔 / 手柄 × 2 / 1.5V AA 碱性干电池 × 4/ 眼镜支架 / 遮光鼻托 / 手柄挂绳 × 2 / USB-C 电源适配器 / USB-Cto C 2.0 数据线 / 快速指南 / 用户指南 / 安全与质保指南使用前注意事项 •本产品在开阔的室內环境使用体验最佳,建议至少预留 2×2 米 的空间。使用前请确认身体没有不适且周围环 境安全,特别是佩戴头盔在室内行走移动时,要尽量避免发生意外。 •不建议 12 岁及以下儿童使用本产品,建议将头盔、手柄和配件置于儿童够不到的位置,13 岁以上青少年须在 成人监护下使用,以免发生意外。 •本产品无近视调节功能,近视用户请佩戴眼镜使用并尽量避免近视眼镜被头盔的光学镜片磨伤或刮伤。建议在 使用和收纳时注意防护光学镜片,避免尖锐物体划伤镜片,擦拭清洁时请使用柔软的眼镜布,否则可能划伤镜片, 影响视觉效果。 •长时间使用可能引发轻微的昡晕或者眼疲劳,建议使用 30 分钟后适当休息,可通过眼保健操或观看远处物体缓 解眼疲劳。如果您的身体感到任何不适,请立即停止使用。如果不适持续,请咨询医生。 •当头盔镜片被阳光或紫外线照射时(尤其在户外、阳台、窗台及汽车内存放时),可能导致屏幕出现永久性黄斑。 请尽量避免该情况发生,此种屏幕损坏不在产品的质保范围内。 *本产品最终外观及功能以实物为准,部分地区包装内含物品有所差异,本说明仅供参考。 02 CN\n六自由度 VR 体验 本产品可以追踪头盔和手柄前、后、左、右、上、下和旋转的运动状态,您在现实中的肢体运动会实时反映在虚 拟世界中。 由于没有任何线缆的束缚,您在虚拟世界自由探索时请确保游玩区域的安全。 1. 建议准备一个整洁安全的体验空间:至少 2×2 米;保持房间明亮,避免在只有单色的墙或大面积玻璃、镜子类 反射物以及许多移动画面和物体的空间中使用。 2. 撕下 VR 头盔前端摄像头上的保护膜,并佩戴手柄挂绳。 3. 根据开机后的画面提示进行游玩区域的设定。 ❶ 安装电池 按箭头方向拔出电池盖侧边的绝缘纸 快速引导 提示:本产品虚拟的安全区提醒功能,不能完全保证您在设定好的游戏区域中的安全,请时刻注意周围的安全情况。 提示:建议使用 1.5V AA 碱性电池。 按照图示拨动电池盖拨钮打开电池盖更换电池。 03 CN\n❷ 手柄开机 ❸ 头盔开机 ❹ 佩戴头盔,调节至清晰舒适的位置 首次开机:拔出绝缘纸,手柄自动开机(蓝灯闪烁) 非首次开机:短按手柄 Home 键开机(蓝灯闪烁) 长按头盔电源键 2 秒(蓝灯常亮) 调节旋钮转动绑带,使后脑垫套在头上,微调绑带长度及佩戴位置至视野清晰 04 提示:近视用户请佩戴眼镜或镜片插件使用,本产品不具备近视调节功能。 CN\n❺ 微调顶绑带 微调顶绑带使其受力以减少额头压力 ❻ 瞳 距 调 节 在系统设置:“设置” ► “显示”界面中进行瞳距调节,点击“+”或“-”按钮可微调瞳距直至画面清晰 64mm 请勿 强行 掰动镜 筒,以 免造 成损坏 ! 
请注 意设 置使用 不合适 的瞳 距,可 能 会引起 视 觉重影 或 者眼睛 疲 劳。准 确 的瞳距 设 置有助 于 获得清 晰 的图像 并 减少眼睛 疲劳。 05 CN\n产品部件详情说明 头盔状态指示灯 蓝灯常亮:开机进行中或工作状态 黄灯常亮:充电中,电量低于 98% 红灯常亮:充电中,电量低于 20% 绿灯常亮:充电完毕,电量大于 98% 或 充满 蓝灯闪烁:关机进行中 红灯闪烁:电量低于 20% 指示灯熄灭:休眠或关机 06 ① 电源键 开机:长按 2 秒 关机:长按 5 秒 复位:长按 10 秒 开机时,短按休眠 ② ③ ④ ⑤ 状态指示灯 贴脸泡棉 音量键 彩色透视摄像头 使用时请勿遮挡 ⑥ ⑦ ⑧ 顶部绑带 可拆卸 绑带旋钮 环境追踪摄像头 使用时请勿遮挡 ⑨ ⑩ ⑪ USB-C 接口 左 / 右喇叭 距离传感器 佩戴头盔后,系统自动唤醒 摘下头盔后,系统自动休眠 ⑫ ⑬ 眼球追踪摄像头 此功能仅 Pro 版支持 使用时请勿遮挡 面部追踪摄像头 此功能仅 Pro 版支持 使用时请勿遮挡 CN\n手柄状态指示灯 熄灭:已连接或者关机 蓝灯常亮:固件升级模式 蓝灯闪烁:连接中 红蓝灯交替慢速闪烁:等待配对 ① ② 摇杆 菜单键 ③ Home 键 开机 : 短按关机 : 长按 6 秒退出应用 : 短按屏幕中心校正 : 长按 1 秒④ ⑤ ⑥ ⑦ 状态指示灯 抓握键 截屏键 扳机键 ⑧ ⑨ 电池盒 打开:拨动拨钮,电池盒弹出 安装:按压直至自动锁紧 追踪光环 使用时请勿遮挡 注:手柄挂绳可按图示将粗绳穿过细绳并锁紧在手柄尾端 07 CN\n手柄硬件复位 如果手柄出现按 Home 键和任何按键均无反应或者头盔中虚拟手柄卡死不动的问题可拆装电池重新启动手柄。 近视用户配戴 本设备不具备近视调节功能,头盔可支持佩戴镜框宽度小于 150mm 的大多数标准眼镜。 操作说明 头控模式 未连接手柄的情况下,您可通过转动头部光标及点击头盔音量加减按键进行操作。 切换主控手柄射线 在主控菜单下,短按对应手柄的扳机键可以切换主控手柄的射线。 屏幕中心校正 戴着头盔直视前方,按住手柄 Home 键(或头控模式下头盔上的音量减键)1 秒以上,进行屏幕中心的校正将菜 单拉到当前视野朝向位置。 断开手柄 长按手柄 Home 键直至手柄状态指示灯红灯亮起并伴随振动产生时即可松手,此时手柄关机并断开与头盔的连接。 您无需刻意进行手柄关机操作,在以下状态下手柄会自动关机省电: •头盔进入深度休眠时(摘下头盔后一段时间) •头盔手柄管理界面解绑手柄时 •头盔关机时 添加新手柄 如需添加新手柄(头盔最多可同时连接一对手柄,即左右手柄各一只),或解绑手柄后再次连接 , 可进入“设置” “手 柄”,点击“配对”,同时按住手柄 Home 键和扳机键直至手柄状态指示灯红蓝交替闪烁时即可松开,然后根据 头盔画面提示操作。 ► 休眠 / 唤醒 方式一:摘下头盔一段时间后,系统自动休眠;戴上头盔时,系统自动唤醒。 方式二:短按头盔电源键也可以进行休眠或唤醒操作。 硬件复位 头盔硬件复位 如果头盔出现短按头盔电源键没有反应或头盔的画面卡死等问题,可以长按头盔电源键 10 秒以上重新启动头盔。 08 CN\n安装眼镜支架 安装遮光鼻托 如您存在眼镜摩擦光学镜片或者压迫鼻梁的问题,请按照图示安装眼镜支架以增加间隔空间。 您可根据佩戴的舒适度选择是否安装。 如您感觉鼻子处漏光影响体验,请按照图示安装遮光鼻托配件。 由于眼睛空间密闭可能加剧起雾及出汗问题,您可根据喜好选择是否安装。 ❶ 摘下贴脸泡棉 ❷ 将眼镜支架按照图示安装在产品上 ❸ 将贴脸泡棉按照图示安装眼镜支架上 ❶ 摘下贴脸泡棉 ❸ 安装贴脸泡棉❷ 将遮光鼻托按照图示方式安装在贴脸泡棉上 注:按照图示拆卸眼镜支架 09 CN\n更换贴脸泡棉 贴脸泡棉多次清洁和长时间使用后会变色和质地变软,您可酌情更换新泡棉。 更换顶绑带 摘下贴脸泡棉 ❸ 安装贴脸泡棉 按照图示捏住顶绑带金属扣,往下压到底然后抽出 ❷ •购买优质热门应用 •畅 聊 社 区, 与 众 多 PICO 玩 家 一起探索 VR 世界 •管理设备更便捷 •参与丰富互动活动 •更多精彩内容等你来发现 ❶ 微 信公 众 号:PICO VR抖音:PICO官 方 旗 舰 店哔 哩 哔 哩:PICO-VR官 方微 博:PICO-VR ❶ ❷ 10 CN\nIn The Box: VR Headset / 2 Controllers / 4 1.5V AA Alkaline Batteries / Glasses Spacer / Nose Pad / 2 Controller Lan- yards / USB-C Power Adapter / USB-C to C 2.0 Data Cable / Quick Guide / User Guide / Safety and WarrantyGuide Important Health & Safety Notes • This product is designed and intended to be used in an open and safe indoor area, free of anytripping or slipping hazards. To avoid accidents, remain conscious to the potential confines ofyour physical area and respect the boundary of your virtual area whenever you see it. Be sure towear the lanyards when using the Controllers. Make sure that there is enough space around yourhead and body (at least 2 meters by 2 meters) to stretch your arms to avoid damage or injury toyourself, others, and your surroundings. • This product is not recommended for children aged 12 and under. It is recommended to keep headsets,controllers and accessories out of the reach of children. Teenagers aged 13 and over must use it underadult supervision to avoid accidents. • This product is designed to accommodate most prescription glasses. Make sure to wear the VR Headsetin a manner in which the VR Headset lenses do not rub or impair your prescription lenses. • Prolonged use may cause dizziness or eye fatigue. It is recommended to take a break every 30 minutes.Try relieving your eyestrain by looking at distant objects. If you feel any discomfort, stop using the prod- uct immediately. If the discomfort persists, seek medical advice.• Do not expose the optical lenses to direct sunlight or other strong light sources. Exposure to directsunlight may cause permanent yellow spot damage on the screen. 
Screen damage caused by sunlightexposure or other strong sources of light is not covered by the warranty. • This product supports interpupillary distance (IPD) adjustment in system settings. When adjusting,please be aware that with the minimum IPD, it may touch the bridge of the nose. You can adjust the IPDaccording to your actual interpupillary distance in \"Settings\"►\"Display\". Please note that using inap- propriate IPD may increase the risk of discomfort. • This product has an “Eye Protection Mode”, certified by TÜV Rheinland (Germany), which can protectyour eyes by reducing blue light in the three color channels using software algorithms. The screen ap- pears yellowish in this mode and you can turn this feature on/off in \"Settings\"►\"Display\"►\"Color\"►“- Eye Protection”. • Protect optical lenses during use and storage to prevent damage, such as scratches or exposure tostrong light or direct sunlight. * Product and packaging are updated regularly, and the functions and contents of the standalone headset may be upgraded in the future.Therefore, the content, appearance and functionality listed in this manual and product packaging are subject to change and may notreflect the final product. These instructions are for reference only. * Carefully read this user guide before using the product and share this information with any other users, as it contains important safetyinformation. Keep the user guide as reference for the future. 11 EN\n6 Degrees of Freedom VR The device can track your translational and rotational movements in all directions (up/down, left/right,forward/backward, pitch, roll, and yaw). Your movements in the real world will be captured and translatedto what you see in the virtual world when using the appropriate content. Ensure a safe environment before you start your VR experience. 1. Clear a safe indoor area of at least 2 meters by 2 meters. Keep the room bright, avoid spaces with main- ly single-colored walls, glass, mirrors, moving pictures or other similar objects. 2. Remove the protective film that covers the headset front cameras. Wear the lanyards connected to theControllers. 3. Set up your environment by following instructions on the VR Headset screen. Install Batteries ❶Pull the tab to remove the insulating paper. Quick Guide 2 m 2m This product can not guarantee your safety with guardian system, you will need to always pay attention to the surrounding safety. * Note: 1.5V AA alkaline batteries should be used.Slide the toggle according to arrow direction toopen the battery case. 12 EN\nPower on the Controller ❷ First Start: The Controller will start automaticallyafter removing the insulating paper. Others: Short press the Home button for 1second until the status indicator flashes blue.Power on the VR Headset ❸ Long press the Power button for 2 seconds untilthe status indicator turns blue.Wear Your Headset for a Comfortable Fit and View ❹ Adjust the strap dial to turn the strap so that the back of your head rests on the padding. Fine-tune thelength and position of the strap to give a clear view. * Note: You can use this product with prescription glasses or lenses insert. 13 EN\nFine-tune the Top Strap ❺ Fine-tune the head strap to reduce pressure on the forehead. Interpupillary Distance (IPD) Adjustment ❻ In System Setting, go to “Setting” ► “Display” to adjust IPD, tap “+” or “-” button to slightly adjust IPDuntil the picture is clear. 
14 64mm Please note that inappropriate IPD setting may cause ghosting or eyestrain.Accurate IPD setting helps you get a clear imaging and ease eyestrain. EN\nProduct Details VR Headset Status Indicator Legend Blue: Powered on with battery over 20% Yellow: Charging: Battery is less than 98% Red: Charging: Battery is less than 20% Green: Charging: Battery is more than 98% or charge complete Blue flashing: Shutting down Red flashing: Battery is less than 20% Off: Sleeping or Powered off Power Power on: Long press for 2 seconds Power off: Long press for 5 seconds Hardware reset: Long press for 10 seconds Short press to enter sleep or wake up Status Indicator Face Cushion Volume ① ② ③ ④ ⑤ ⑥ ⑦ ⑧ RGB See Through Camera Do not block during use. Top Strap Removable Strap Dial Tracking Cameras Do not block during use. ⑨ ⑩ ⑪ USB-C Interface Left/Right Speaker Proximity Sensor The system wakes upwhen the VR headset isput on, sleeps when VRheadset is taken off. ⑫ ⑬ Eye Tracking Cameras Pro version only. Do not block during use. Face Tracking Camera Pro version only. Do not block during use. 15 EN\nController Status Indicator Legend Off: Connected or Powered off Blue: Firmware updating in progress Blue flashing: Searching for connection Red and blue flashing alternately: Pairing in progress 16 Joystick Menu ③ ① ② Home Power on: Short pressPower off: Long press for 6 secondsReturn home screen: Short pressScreen recentering: Press for 1 secondStatus Indicator Grip Capture Trigger ④ ⑤ ⑥ ⑦ ⑧ ⑨ Battery Case Open: Slide down the toggle andpop up the battery case. Lock: Push the battery case to lock. Tracking Ring Do not block during use. * Note: Pass the Controller Lanyardthrough the string as shown andlock at the end of the Controller EN\nOperating Instructions Headset Control Mode If the Controller is not connected, you can interact with the home screen by moving your head to directthe crosshairs over your intended selection and clicking the Volume Up/Down button on the VR Headset. Switch the pointer of the master Controller In the home screen, short press the Trigger of the corresponding Controller to switch the pointer of themaster Controller. Screen re-centering Wear the VR Headset and look straight ahead, press and hold the Home button of the Controller or VRHeadset ( or the Volume Down button of the VR Headset in head control mode) for more than 1 second tore-center the screen. Disconnect the Controller Press and hold the Home button until the status indicator turns red and the Controller vibrates.Controllers will automatically shut down to save power in the following cases:When the VR Headset enters deep sleep (a while after the VR Headset is taken off)When the Controller is unpairedWhen the VR Headset is powered off Add a new Controller If you need to add a new Controller (the VR Headset can only connect one left Controller and one rightController) or reconnect with an unpaired Controller. Go to “Settings” ► “Controller”, click on “Pair”.Press and hold the Home button and the Trigger of the Controller at the same time until the red and bluelights of the Controller flashing alternately, and then follow the instructions on the VR Headset screen. Sleep / Wake up Option 1 (Proximity Sensor) Take off VR Headset for automatic sleeping: wear the VR Headset for automat- ic waking up. Option 2 (POWER Button) Press the Power button of the VR Headset for manual sleeping or waking up. 
Hardware reset VR Headset reset If the visual in the VR Headset freezes, or the VR Headset does not respond after short press the Powerbutton, you can press the Power button of the VR Headset for more than 10 seconds to reboot the VRHeadset. Controller reset If the virtual Controller, the Home button or any buttons of the Controller doesn\\'t respond, remove andreinstall the battery case to restart the Controller. The VR Headset Adjustment This device has no myopia adjustment function. The VR Headset allows wearing most standard glasseswith a frame width of less than 150mm. to install Glasses Spacer to increase the space. You can install or not according to your situation. 17 EN\nInstall Glasses Spacer Install Nose Pad If you have glasses collision with headset lens or pressure on the bridge of nose, please follow the pictureto install Glasses Spacer to increase the space. You can install or not according to your situation. If you feel light leaking from your nose, please follow the picture to install Nose Pad to block the light.You can consider having it installed at your own discretion. Disassemble the Face Cushion. Install the Glasses Spacer on the Headset. ❸ ❶ ❷ Install the Face Cushion on the Glasses Spacer. Disassemble the Face Cushion. Install the Nose Pad on the Face Cushion. ❶ ❷ Install the Face Cushion on the Headset. ❸ * Note: Disassemble the Glasses Spacer 18 EN\nReplace Face Cushion The Face Cushion will have the following phenomena such as color change, surface fluff, soft texture afterlong-term use and repeated cleaning. You can replace a new Face Cushion as needed. Replace Top Strap ❶ ❷ Disassemble the Face Cushion. Pinch the metal buckle of the top strap asshown, press it down and pull it out.Install the Face Cushion on. ❸ ❷ ❶ • Purchase high-quality and trending apps • Join PICO Community and explore the VR worldwith other PICO players• Manage your device with ease • Engage in diverse and interactive activities • More exciting features waiting for you 19 EN\n'", "answers": ["It is recommended to have at least a 2x2 meter space for using the VR headset."], "length": 2184, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "ee53f01eb9a44de54723ab03918b7361f2eb630a35ce7b81"} {"input": "What is the correct expression for the derivative of the function?", "context": "\\section{Introduction}\n\nDerivate is one of the most important topics not only in mathematics, but also in physics, chemistry, economics and engineering. Every standard Calculus course provides a variety of exercises for the students to learn how to apply the concept of derivative. The types of problems range from finding an equation of the tangent line to the application of differentials and advanced curve sketching. Usually, these exercises heavily rely on such differentiation techniques as Product, Quotient and Chain Rules, Implicit and Logarithmic Differentiation \\cite{Stewart2012}. The definition of the derivative is hardly ever applied after the first few classes and its use is not much motivated.\n\nLike many other topics in undergraduate mathematics, derivative gave rise to many misconceptions \\cite{Muzangwa2012}, \\cite{Gur2007}, \\cite{Li2006}. Just when the students seem to learn how to use the differentiation rules for most essential functions, the application of the derivative brings new issues. 
A common students' error of determining the domain of the derivative from its formula is discussed in \\cite{Rivera2013} and some interesting examples of the derivatives, defined at the points where the functions themselves are undefined, are provided. However, the hunt for misconceptions takes another twist for the derivatives undefined at the points where the functions are in fact defined.\n\nThe expression of the derivative of the function obtained using differentiation techniques does not necessarily contain the information about the existence or the value of the derivative at the points, where the expression for the derivative is undefined. In this article we discuss a type of continuous functions that have the expression for the derivative undefined at a certain point, while the derivative itself at that point exists. We show, how relying on the formula for the derivative for finding the horizontal tangent line of a function, leads to a false conclusion and consequently to missing a solution. We also provide a simple methodological treatment of similar functions suitable for the classroom.\n\n\\section{Calculating the Derivative}\n\nIn order to illustrate how deceitful the expression of the derivative can be to a students' eye, let us consider the following problem.\n\n\\vspace{12pt}\n\n\\fbox{\\begin{minipage}{5.25in}\n\n\\begin{center}\n\n\\begin{minipage}{5.0in}\n\n\\vspace{10pt}\n\n\\emph{Problem}\n\n\\vspace{10pt}\n\nDifferentiate the function $f\\left(x\\right)=\\sqrt[3]{x}\\sin{\\left(x^2\\right)}$. For which values of $x$ from the interval $\\left[-1,1\\right]$ does the graph of $f\\left(x\\right)$ have a horizontal tangent?\n\n\\vspace{10pt}\n\n\\end{minipage}\n\n\\end{center}\n\n\\end{minipage}}\n\n\\vspace{12pt}\n\nProblems with similar formulations can be found in many Calculus books \\cite{Stewart2012}, \\cite{Larson2010}, \\cite{Thomas2009}. Following the common procedure, let us find the expression for the derivative of the function $f\\left(x\\right)$ applying the Product Rule:\n\\begin{eqnarray}\nf'\\left(x\\right) &=& \\left(\\sqrt[3]{x}\\right)'\\sin{\\left(x^2\\right)}+\\left(\\sin{\\left(x^2\\right)}\\right)'\\sqrt[3]{x} \\notag \\\\ &=& \\frac{1}{3\\sqrt[3]{x^2}}\\sin{\\left(x^2\\right)}+2x\\cos{\\left(x^2\\right)}\\sqrt[3]{x} \\notag \\\\ &=& \\frac{6x^2\\cos{x^2}+\\sin{x^2}}{3\\sqrt[3]{x^2}} \\label{DerivativeExpression}\n\\end{eqnarray}\n\nSimilar to \\cite{Stewart2012}, we find the values of $x$ where the derivative $f'\\left(x\\right)$ is equal to zero:\n\\begin{equation}\n6x^2\\cos{x^2}+\\sin{x^2} = 0 \n\\label{DerivativeEqualZero}\n\\end{equation}\n\nSince the expression for the derivative (\\ref{DerivativeExpression}) is not defined at $x=0$, it is not hard to see that for all values of $x$ from $\\left[-1,1\\right]$ distinct from zero, the left-hand side of (\\ref{DerivativeEqualZero}) is always positive. Hence, we conclude that the function $f\\left(x\\right)$ does not have horizontal tangent lines on the interval $\\left[-1,1\\right]$.\n\nHowever, a closer look at the graph of the function $f\\left(x\\right)$ seems to point at a different result: there is a horizontal tangent at $x=0$ (see Figure \\ref{fig:FunctionGraph}). \n\nFirst, note that the function $f\\left(x\\right)$ is defined in $x=0$. 
In order to verify if it has a horizontal tangent at this point, let us find the derivative of the function $f\\left(x\\right)$ using definition:\n\\begin{eqnarray}\nf'\\left(0\\right) &=& \\lim_{h\\rightarrow0}{\\frac{f\\left(0+h\\right)-f\\left(0\\right)}{h}} \\notag \\\\ \n&=& \\lim_{h\\rightarrow0}{\\frac{\\sqrt[3]{h}\\sin{\\left(h^2\\right)}}{h}} \\notag \\\\ \n&=& \\lim_{h\\rightarrow0}{\\left(\\sqrt[3]{h} \\cdot {h} \\cdot \\frac{\\sin{\\left(h^2\\right)}}{h^2}\\right)} \\notag \\\\\n&=& \\lim_{h\\rightarrow0}{\\sqrt[3]{h}} \\cdot \\lim_{h\\rightarrow0}{h} \\cdot \\lim_{h\\rightarrow0}{\\frac{\\sin{\\left(h^2\\right)}}{h^2}} \\notag \\\\\n&=& 0 \\cdot 0 \\cdot 1 = 0 \\notag\n\\end{eqnarray}\nsince each of the limits above exists. We see that, indeed, the function $f\\left(x\\right)$ possesses a horizontal tangent line at the point $x=0$.\n\n\\section{Closer Look at the Expression for the Derivative}\n\nWhat is the problem with the standard procedure proposed by many textbooks and repeated in every Calculus class? The explanation lies in the following premise: the expression of the derivative of the function does not contain the information as to whether the function is differentiable or not at the points where it is undefined. As it is pointed out in \\cite{Rivera2013}, the domain of the derivative is determined \\emph{a priori} and therefore should not be obtained from the formula of the derivative itself.\n\nIn the example above the Product Law for derivatives requires the existence of the derivatives of both functions at the point of interest. Since the function $\\sqrt[3]{x}$ is not differentiable in zero, the Product Rule cannot be applied. \n\nIn order to see what exactly happens when we apply the Product Rule, let us find the expression for the derivative using definition of the derivative:\n\\begin{eqnarray}\nf'\\left(x\\right) &=& \\lim_{h\\rightarrow0}{\\frac{f\\left(x+h\\right)-f\\left(x\\right)}{h}} \\notag \\\\ \n&=& \\lim_{h\\rightarrow0}{\\frac{\\sqrt[3]{x+h}\\sin{\\left(x+h\\right)^2}-\\sqrt[3]{x}\\sin{\\left(x^2\\right)}}{h}} \\notag \\\\ \n&=& \\lim_{h\\rightarrow0}{\\frac{\\left(\\sqrt[3]{x+h}-\\sqrt[3]{x}\\right)}{h}\\sin{\\left(x^2\\right)}} + \\notag \\\\\n&& \\lim_{h\\rightarrow0}{\\frac{\\left(\\sin{\\left(x+h\\right)^2}-\\sin{\\left(x^2\\right)}\\right)}{h}\\sqrt[3]{x+h}} \\notag \\\\\n&=& \\lim_{h\\rightarrow0}{\\frac{\\sqrt[3]{x+h}-\\sqrt[3]{x}}{h}} \\cdot \\lim_{h\\rightarrow0}{\\sin{\\left(x^2\\right)}} + \\notag \\\\&& \\lim_{h\\rightarrow0}{\\frac{\\sin{\\left(x+h\\right)^2}-\\sin{\\left(x^2\\right)}}{h}} \\cdot \\lim_{h\\rightarrow0}{\\sqrt[3]{x+h}} \\notag \\\\\n&=& \\frac{1}{3\\sqrt[3]{x^2}} \\cdot \\sin{\\left(x^2\\right)}+2x\\cos{\\left(x^2\\right)} \\cdot \\sqrt[3]{x} \\notag \n\\end{eqnarray}\nwhich seems to be identical to the expression (\\ref{DerivativeExpression}).\n\nStudents are expected to develop a skill of deriving similar results and know how to find the derivative of the function using definition of the derivative only. 
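The value $f'\left(0\right)=0$ obtained above can also be checked numerically by evaluating the difference quotient of $f$ at $0$ for shrinking step sizes. The following minimal Python sketch (the step sizes and the signed cube-root implementation are illustrative choices only) prints quotients that decay towards zero roughly like $h^{4/3}$:\n\\begin{verbatim}\nimport math\n\ndef f(x):\n    # f(x) = cbrt(x) * sin(x^2); copysign gives the real cube root for x < 0\n    return math.copysign(abs(x) ** (1.0 / 3.0), x) * math.sin(x * x)\n\nfor h in [10.0 ** (-k) for k in range(1, 8)]:\n    # difference quotient of f at 0\n    print(h, (f(h) - f(0.0)) / h)\n\\end{verbatim}\nSuch a check only illustrates the limit; it does not replace the computation above.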
But how `legal' are the performed operations?\n\n\\begin{figure}[H]\n\\begin{center}\n\t\\includegraphics[width=6.0in]{sin.pdf}\n\t\\vspace{.1in}\n\t\\caption{Graph of the function $f\\left(x\\right)=\\sqrt[3]{x}\\sin{\\left(x^2\\right)}$}\n\t\\label{fig:FunctionGraph}\n\\end{center}\n\\end{figure}\n\nLet us consider each of the following limits: \n\\begin{eqnarray*}\n&& \\lim_{h\\rightarrow0}{\\frac{\\sqrt[3]{x+h}-\\sqrt[3]{x}}{h}} \\notag \\\\\n&& \\lim_{h\\rightarrow0}{\\sin{\\left(x^2\\right)}}\\notag \\\\\n&& \\lim_{h\\rightarrow0}{\\frac{\\sin{\\left(x+h\\right)^2}-\\sin{\\left(x^2\\right)}}{h}}\\notag \\\\\n&& \\lim_{h\\rightarrow0}{\\sqrt[3]{x+h}}.\n\\end{eqnarray*}\nThe last three limits exist for all real values of the variable $x$. However, the first limit does not exist when $x=0$. Indeed\n\\begin{equation*}\n\\lim_{h\\rightarrow0}{\\frac{\\sqrt[3]{0+h}-\\sqrt[3]{0}}{h}} = \\lim_{h\\rightarrow0}{\\frac{1}{\\sqrt[3]{h^2}}} = + \\infty\n\\end{equation*}\n\nThis implies that the Product and Sum Laws for limits cannot be applied and therefore this step is not justifiable in the case of $x=0$. When the derivation is performed, we automatically assume the conditions, under which the Product Law for limits can be applied, i.e. that both limits that are multiplied exist. It is not hard to see that in our case these conditions are actually equivalent to $x\\neq0$. This is precisely why, when we wrote out the expression for the derivative (\\ref{DerivativeExpression}), it already contained the assumption that it is only true for the values of $x$ that are different from zero.\n\nNote that in the case of $x=0$ the application of the Product and Sum Laws for limits is not necessary, since the term $\\left(\\sqrt[3]{x+h}-\\sqrt[3]{x}\\right)\\sin{\\left(x^2\\right)}$ vanishes.\n\nThe correct expression for the derivative of the function $f\\left(x\\right)$ should be the following:\n\\begin{equation*}\nf'\\left(x\\right) = \n\\begin{cases} \n\\frac{6x^2\\cos{\\left(x^2\\right)}+\\sin{\\left(x^2\\right)}}{3\\sqrt[3]{x^2}}, & \\mbox{if } x \\neq 0 \\\\ \n0, & \\mbox{if } x = 0 \n\\end{cases}\n\\end{equation*}\n\nThe expression for the derivative of the function provides the correct value of the derivative only for those values of the independent variable, for which the expression is defined; it does not tell anything about the existence or the value of the derivative, where the expression for the derivative is undefined. Indeed, let us consider the function\n\\begin{equation*}\ng\\left(x\\right) = {\\sqrt[3]{x}}\\cos{\\left(x^2\\right)}\n\\end{equation*}\nand its derivative $g'\\left(x\\right)$ \n\\begin{equation*}\ng'\\left(x\\right) = \\frac{\\cos{\\left(x^2\\right)}-6x^2\\sin{\\left(x^2\\right)}}{3\\sqrt[3]{x^2}}\n\\end{equation*}\n\nSimilar to the previous example, the expression for the derivative is undefined at $x=0.$ Nonetheless, it can be shown that $g\\left(x\\right)$ is not differentiable at $x=0$ (see Figure \\ref{fig:GFunction}).
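Indeed, a direct check with the definition of the derivative (a short illustrative computation in the same spirit as the one for $f$ above) gives\n\\begin{equation*}\n\\lim_{h\\rightarrow0}{\\frac{g\\left(0+h\\right)-g\\left(0\\right)}{h}} = \\lim_{h\\rightarrow0}{\\frac{\\sqrt[3]{h}\\cos{\\left(h^2\\right)}}{h}} = \\lim_{h\\rightarrow0}{\\frac{\\cos{\\left(h^2\\right)}}{\\sqrt[3]{h^2}}} = + \\infty,\n\\end{equation*}\nso the limit defining $g'\\left(0\\right)$ does not exist as a finite number and the graph of $g\\left(x\\right)$ has a vertical tangent at $x=0$.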
Therefore, we provided two visually similar functions: both have the expressions for their derivatives undefined in zero, however, one of these functions possesses a derivative, but the other one does not.\n\n\\section{Methodological Remarks}\n\nUnfortunately, there exist many functions similar to the ones discussed above and they can arise in a variety of typical Calculus problems: finding the points where the tangent line is horizontal, finding an equation of the tangent and normal lines to the curve at the given point, the use of differentials and graph sketching. Relying only on the expression of the derivative for determining its value at the undefined points may lead to missing a solution (as in the example discussed above) or to some completely false interpretations (as in the case of curve sketching).\n\nAs it was discussed above, the expression for the derivative does not provide any information on the existence or the value of the derivative, where the expression itself is undefined. Here we present a methodology for the analysis of this type of functions.\n\nLet $f\\left(x\\right)$ be the function of interest and $f'\\left(x\\right)$ be the expression of its derivative undefined at some point $x_{0}$. In order to find out if $f\\left(x\\right)$ is differentiable at $x_{0}$, we suggest to follow a list of steps:\n\n\\begin{enumerate}\n \\item Check if the function $f\\left(x\\right)$ itself is defined at the point $x_{0}$. If $f\\left(x\\right)$ is undefined at $x_{0}$, then it is not differentiable at $x_{0}$. If $f\\left(x\\right)$ is defined at $x_{0}$, then proceed to next step. \n \\item Identify the basic functions that are used in the formula of the function $f\\left(x\\right)$, that are themselves defined at the point $x_{0}$, but their derivative is not (such as, for example, the root functions).\n\t\\item Find the derivative of the function $f\\left(x\\right)$ at the point $x_{0}$ using definition.\n\\end{enumerate}\n\nThe importance of the first step comes from the fact that most students tend to pay little attention to the functions domain analysis when asked to investigate its derivative. Formally, the second step can be skipped, however it will give the students the insight into which part of the function presents a problem and teach them to identify similar cases in the future. the difficulty of accomplishing the third step depends on the form of the function and sometimes can be tedious. Nevertheless, it allows the students to apply the previously obtained skills and encourages the review of the material.\n\n\\begin{figure}[H]\n\\begin{center}\n\t\\includegraphics[width=6.0in]{cos.pdf}\n\t\\vspace{.1in}\n\t\\caption{Graph of the function $g\\left(x\\right)=\\sqrt[3]{x}\\cos{\\left(x^2\\right)}$}\n\t\\label{fig:GFunction}\n\\end{center}\n\\end{figure}\n\n\\section{Conclusion}\n\nWe discussed the misconception, that the expression of the derivative of the function contains the information as to whether the function is differentiable or not at the points, where the expression is undefined. We considered a typical Calculus problem of looking for the horizontal tangent line of a function as an example. We showed how the search for the values that make the expression of the derivative equal zero leads to missing a solution: even though the expression of the derivative is undefined, the function still possesses the derivative at the point. 
We provided an example of the function that similarly has the expression for the derivative undefined, however the function is not differentiable at the point. We also presented the methodological treatment of such functions by applying the definition of the derivative, which can be used in the classroom.\n\n", "answers": ["It depends on the value of x, either 0 or (6x^2cos(x^2)+sin(x^2))/(3(x^2)^(1/3))."], "length": 1762, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "101225d103936ad083c5da47c3dce73ce569b62cbaa093bd"} {"input": "What award did Brooksley Born receive in 2009?", "context": "Brooksley Elizabeth Born (born August 27, 1940) is an American attorney and former public official who, from August 26, 1996, to June 1, 1999, was chair of the Commodity Futures Trading Commission (CFTC), the federal agency which oversees the U.S. futures and commodity options markets. During her tenure on the CFTC, Born lobbied Congress and the President to give the CFTC oversight of off-exchange markets for derivatives, in addition to its role with respect to exchange-traded derivatives, but her warnings were ignored or dismissed, and her calls for reform resisted by other regulators.Goodman, Peter S. The Reckoning - Taking Hard New Look at a Greenspan Legacy, The New York Times, October 9, 2008. Born resigned as chairperson on June 1, 1999, shortly after Congress passed legislation prohibiting her agency from regulating derivatives.\n\nEarly life and education\nBorn graduated from Abraham Lincoln High School (San Francisco, California) at the age of 16. She then attended Stanford University, where she majored in English and was graduated with the class of 1961. She initially wanted to become a doctor, but a guidance counsellor at Stanford advised her against medicine, so she majored in English literature instead.\n\nShe then attended Stanford Law School, one of only seven women in her class. She was the first female student ever to be named president of the Stanford Law Review. She received the \"Outstanding Senior\" award and graduated as valedictorian of the class of 1964.\n\nLegal career\nImmediately after law school Born was selected as a law clerk to judge Henry Edgerton of the U.S. Court of Appeals for the District of Columbia Circuit. It was during this time that she met her first husband, Jacob C. Landau, who was a journalist covering the Federal courts at the time. Following her clerkship, she became an associate at the Washington, D.C.-based international law firm of Arnold & Porter. Born was attracted to Arnold & Porter because it was one of the few major law firms to have a woman partner at that time, Carolyn Agger, who was the head of the tax practice. Born took a two-year leave of absence from Arnold & Porter to accompany her first husband to Boston, where he had received a fellowship. During that time she worked as a research assistant to law professor Alan Dershowitz.\n\nBorn's early career at Arnold & Porter focused on international trade law, in which she represented a number of Swiss industries and the government of Switzerland. She developed a practice representing clients in numerous complex litigation and arbitration cases involving financial market transactions. Among her high-profile cases was the matter of the Hunt Brothers attempt to corner the market in silver in the 1970s. 
She made partner at Arnold & Porter, after moving to a three-day schedule to help raise her second child, and eventually rose to be the head of the firm's derivatives practice.\n\nBorn was among the first female attorneys to systematically address inequities regarding how the laws treated women. Born and another female lawyer, Marna Tucker, taught what is considered to have been the first \"Women and the Law\" course at Catholic University’s Columbus School of Law. The class exclusively concerned prejudicial treatment of women under the laws of the United States, past and present. Born and Tucker were surprised to discover that there was no textbook on the issue at the time. Born is also one of the co-founders of the National Women's Law Center. Born also helped rewrite the American Bar Association rules to make it possible for more women and minorities to sit on federal bench.\n\nDuring her long legal career, and into her retirement, Born did much pro bono and other types of volunteer work. She was active in the American Bar Association, the largest professional organization of lawyers in the United States. Initially Born was named a member of the governing council of the ABA's Individual Rights Section, eventually becoming chairperson. Born and Tucker founded the ABA Women's Caucus, the first organization of female lawyers in the ABA. She held several other senior positions in the ABA, including being named the first woman member of the ABA's Standing Committee on the Federal Judiciary. As a member of the Judiciary Committee, Born provided testimony and opinion on persons nominated for federal judgeships. In 1980 she was named chair of the committee. As chair of the committee, Born was invited to address the U.S. Congress regarding the nomination of Judge Sandra Day O'Connor to the U.S. Supreme Court.\n\nIn 1993, Born's name was floated as a possible candidate for Attorney General of the United States, but Janet Reno was nominated.\n\nIn July 2009, Nancy Pelosi appointed Brooksley Born as a commissioner to the Financial Crisis Inquiry Commission (FCIC).\n\nBorn and the OTC derivatives market\nBorn was appointed to the CFTC on April 15, 1994, by President Bill Clinton. Due to litigation against Bankers Trust Company by Procter and Gamble and other corporate clients, Born and her team at the CFTC sought comments on the regulation of over-the-counter derivatives, a first step in the process of writing CFTC regulations to supplement the existing regulations of the Federal Reserve System, the Options Clearing Corporation, and the National Association of Insurance Commissioners. Born was particularly concerned about swaps, financial instruments that are traded over the counter between banks, insurance companies or other funds or companies, and thus have no transparency except to the two counterparties and the counterparties' regulators, if any. CFTC regulation was strenuously opposed by Federal Reserve chairman Alan Greenspan, and by Treasury Secretaries Robert Rubin and Lawrence Summers. On May 7, 1998, former SEC Chairman Arthur Levitt joined Rubin and Greenspan in objecting to the issuance of the CFTC's concept release. Their response dismissed Born's analysis and focused on the hypothetical possibility that CFTC regulation of swaps and other OTC derivative instruments could create a \"legal uncertainty\" regarding such financial instruments, hypothetically reducing the value of the instruments. 
They argued that the imposition of regulatory costs would \"stifle financial innovation\" and encourage financial capital to transfer its transactions offshore. The disagreement between Born and the Executive Office's top economic policy advisors has been described not only as a classic Washington turf war, but also a war of ideologies, insofar as it is possible to argue that Born's actions were consistent with Keynesian and neoclassical economics while Greenspan, Rubin, Levitt, and Summers consistently espoused neoliberal, and neoconservative policies.\n\nIn 1998, a trillion-dollar hedge fund called Long Term Capital Management (LTCM) was near collapse. Using mathematical models to calculate debt risk, LTCM used derivatives to leverage $5 billion into more than $1 trillion, doing business with fifteen of Wall Street's largest financial institutions. The derivative transactions were not regulated, nor were investors able to evaluate LTCM's exposures. Born stated, \"I thought that LTCM was exactly what I had been worried about\". In the last weekend of September 1998, the President's working group was told that the entire American economy hung in the balance. After intervention by the Federal Reserve, the crisis was averted. In congressional hearings into the crisis, Greenspan acknowledged that language had been introduced into an agriculture bill that would prevent CFTC from regulating the derivatives which were at the center of the crisis that threatened the US economy. U.S. Representative Maurice Hinchey (D-NY) asked \"How many more failures do you think we'd have to have before some regulation in this area might be appropriate?\" In response, Greenspan brushed aside the substance of Born's warnings with the simple assertion that \"the degree of supervision of regulation of the over-the-counter derivatives market is quite adequate to maintain a degree of stability in the system\". Born's warning was that there wasn't any regulation of them. Born's chief of staff, Michael Greenberger summed up Greenspan's position this way: \"Greenspan didn't believe that fraud was something that needed to be enforced, and he assumed she probably did. And of course, she did.\" Under heavy pressure from the financial lobby, legislation prohibiting regulation of derivatives by Born's agency was passed by the Congress. Born resigned on June 1, 1999.\n\nThe derivatives market continued to grow yearly throughout both terms of George W. Bush's administration. On September 15, 2008, the bankruptcy of Lehman Brothers forced a broad recognition of a financial crisis in both the US and world capital markets. As Lehman Brothers' failure temporarily reduced financial capital's confidence, a number of newspaper articles and television programs suggested that the failure's possible causes included the conflict between the CFTC and the other regulators.Faiola, Anthony, Nakashima, Ellen and Drew, Jill. 
The Crash: Risk and Regulation - What Went Wrong, The Washington Post, October 15, 2008.\n\nBorn declined to publicly comment on the unfolding 2008 crisis until March 2009, when she said: \"The market grew so enormously, with so little oversight and regulation, that it made the financial crisis much deeper and more pervasive than it otherwise would have been.\" She also lamented the influence of Wall Street lobbyists on the process and the refusal of regulators to discuss even modest reforms.\n\nAn October 2009 Frontline documentary titled \"The Warning\" described Born's thwarted efforts to regulate and bring transparency to the derivatives market, and the continuing opposition thereto. The program concluded with an excerpted interview with Born sounding another warning: \"I think we will have continuing danger from these markets and that we will have repeats of the financial crisis -- may differ in details but there will be significant financial downturns and disasters attributed to this regulatory gap, over and over, until we learn from experience.\"\n\nIn 2009 Born, along with Sheila Bair of the FDIC, was awarded the John F. Kennedy Profiles in Courage Award in recognition of the \"political courage she demonstrated in sounding early warnings about conditions that contributed\" to the 2007-08 financial crisis. According to Caroline Kennedy, \"Brooksley Born recognized that the financial security of all Americans was being put at risk by the greed, negligence and opposition of powerful and well connected interests.... The catastrophic financial events of recent months have proved them [Born and Sheila Bair] right.\" One member of the President's working group had a change of heart about Brooksley Born. SEC Chairman Arthur Levitt stated \"I've come to know her as one of the most capable, dedicated, intelligent and committed public servants that I have ever come to know\", adding that \"I could have done much better. I could have made a difference\" in response to her warnings.\n\nIn 2010, a documentary film Inside Job further alleged that derivatives regulation was ineffective from the Clinton administration on. Along with fellow whistleblower, former IMF Chief Economist Raghuram Rajan, who was also scorned by the economic establishment, Brooksley Born was cited as one of the authorities arguing that financial derivatives increase economic risk.\n\n Personal life \nBorn is married to Alexander E. Bennett (also retired from Arnold & Porter). She has five adult children - two from a previous marriage to Jacob Landau and three stepchildren. Notably, Born was named a partner at Arnold & Porter while working part-time so she could raise her two young children. When both of her children were school-age, Born returned to practice full-time.\n\nReferences\n\nExternal links\nAttorney profile at Arnold & Porter\nBrooksley Born (2009 Winner) of the Profiles in Courage Award, with acceptance speech transcript and NECN video\n\nProfile at MarketsWiki\nSpeeches and statements\n\"Testimony Of Brooksley Born Chairperson of the CFTC Concerning The Over-The-Counter Derivatives Market\", before the House Committee On Banking And Financial Services, July 24, 1998.\n\"The Lessons of Long Term Capital Management L.P.\", Remarks of Brooksley Born, Chairperson of the CFTC, Chicago-Kent-IIT Commodities Law Institute, Chicago, Illinois, October 15, 1998.\n Interview: Brooksley Born for \"PBS Frontline: The Warning\", PBS, (streaming VIDEO 1 hour), October 20, 2009.\nArticles\nManuel Roig-Franzia. 
\"Credit Crisis Cassandra:Brooksley Born's Unheeded Warning Is a Rueful Echo 10 Years On\", The Washington Post, May 26, 2009\n Taibbi, Matt. \"The Great American Bubble Machine\", Rolling Stone'', July 9–23, 2009\n\n1940 births\nAmerican women lawyers\nArnold & Porter people\nClinton administration personnel\nColumbus School of Law faculty\nCommodity Futures Trading Commission personnel\nHeads of United States federal agencies\nLawyers from San Francisco\nLiving people\nStanford Law School alumni\n21st-century American women\nStanford University alumni", "answers": ["In 2009, Brooksley Born received the John F. Kennedy Profiles in Courage Award."], "length": 2054, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "c93a8eb62278ce7c575ba542b2bcd5e610fa15035906463c"} {"input": "When did Margaret Way start self-publishing her books as e-books?", "context": "Margaret Way (b. Brisbane d. Cleveland, Queensland, Australia ) was an Australian writer of romance novels and women's fiction. A prolific author, Way wrote more than 120 novels since 1970, many through Mills & Boon, a romance imprint of British publisher Harlequin UK Ltd., owned by Harlequin Enterprises.\n\nBiography\nBefore her marriage, she was a well-known pianist, teacher, vocal coach and accompanist. She began writing when her son, Laurence Way, was born, a friend took a pile of Mills & Boon books to her, she read all and decided that she also could write these types of novels. She began to write and promote her country with her stories set in Australia. She sold her first novels in 1970. Margaret Way lives with her family in her native Brisbane. Beginning in 2013, Margaret began to self-publish, releasing her first \"e-book\" mid-July.\n\nMargaret died on the 10th of August 2022 in Cleveland, Queensland.\n\nBibliography\n\nSingle Novels\nKing Country (1970)\nBlaze of Silk (1970)\nThe Time of the Jacaranda (1970)\nBauhinia Junction (1971)\nMan from Bahl Bahla (1971)\nSummer Magic (1971)\nReturn to Belle Amber (1971)\nRing of Jade (1972)\nCopper Moon (1972)\nRainbow Bird (1972)\nMan Like Daintree (1972)\nNoonfire (1972)\nStorm Over Mandargi (1973)\nWind River (1973)\nLove Theme (1974)\nMcCabe's Kingdom (1974)\nSweet Sundown (1974)\nReeds of Honey (1975)\nStorm Flower (1975)\nLesson in Loving (1975)\nFlight into Yesterday (1976)\nRed Cliffs of Malpara (1976)\nMan on Half-moon (1976)\nSwan's Reach (1976)\nMutiny in Paradise (1977)\nOne Way Ticket (1977)\nPortrait of Jaime (1977)\nBlack Ingo (1977)\nAwakening Flame (1978)\nWild Swan (1978)\nRing of Fire (1978)\nWake the Sleeping Tiger (1978)\nValley of the Moon (1979)\nWhite Magnolia (1979)\nWinds of Heaven (1979)\nBlue Lotus (1979)\nButterfly and the Baron (1979)\nGolden Puma (1980)\nTemple of Fire (1980)\nLord of the High Valley (1980)\nFlamingo Park (1980)\nNorth of Capricorn (1981)\nSeason for Change (1981)\nShadow Dance (1981)\nMcIvor Affair (1981)\nHome to Morning Star (1981)\nBroken Rhapsody (1982)\nThe Silver Veil (1982)\nSpellbound (1982)\nHunter's Moon (1982)\nGirl at Cobalt Creek (1983)\nNo Alternative (1983)\nHouse of Memories (1983)\nAlmost a Stranger (1984)\nA place called Rambulara (1984)\nFallen Idol (1984)\nHunt the Sun (1985)\nEagle's Ridge (1985)\nThe Tiger's Cage (1986)\nInnocent in Eden (1986)\nDiamond Valley (1986)\nMorning Glory (1988)\nDevil Moon (1988)\nMowana Magic (1988)\nHungry Heart (1988)\nRise of an Eagle (1988)\nOne Fateful Summer (1993)\nThe Carradine Brand (1994)\nHolding on to Alex (1997)\nThe Australian Heiress 
(1997)\nClaiming His Child (1999)\nThe Cattleman's Bride (2000)\nThe Cattle Baron (2001)\nThe Husbands of the Outback (2001)\nSecrets of the Outback (2002)\nWith This Ring (2003)\nInnocent Mistress (2004)\nCattle Rancher, Convenient Wife (2007)\nOutback Marriages (2007)\nPromoted: Nanny to Wife (2007)\nCattle Rancher, Secret Son (2007)\nGenni's Dilemma (2008)\nBride At Briar Ridge (2009)\nOutback Heiress, Surprise Proposal (2009)\nCattle Baron, Nanny Needed (2009)\n\nLegends of the Outback Series\nMail Order Marriage (1999)\nThe Bridesmaid's Wedding (2000)\nThe English Bride (2000)\nA Wife at Kimbara (2000)\n\nKoomera Crossing Series\nSarah's Baby (2003)\nRunaway Wife (2003)\nOutback Bridegroom (2003)\nOutback Surrender (2003)\nHome to Eden (2004)\n\nMcIvor Sisters Series\nThe Outback Engagement (2005)\nMarriage at Murraree (2005)\n\nMen Of The Outback Series\nThe Cattleman (2006)\nThe Cattle Baron's Bride (2006)\nHer Outback Protector (2006)\nThe Horseman (2006)\n\nOutback Marriages Series\nOutback Man Seeks Wife (2007)\nCattle Rancher, Convenient Wife (2007)\n\nBarons of the Outback Series Multi-Author\nWedding At Wangaree Valley (2008)\nBride At Briar's Ridge (2008)\n\nFamily Ties Multi-Author\nOnce Burned (1995)\n\nHitched! Multi-Author\nA Faulkner Possession (1996)\n\nSimply the Best Multi-Author\nGeorgia and the Tycoon (1997)\n\nThe Big Event Multi-Author\nBeresford's Bride (1998)\n\nGuardian Angels Multi-Author\nGabriel's Mission (1998)\n\nAustralians Series Multi-Author\n7. Her Outback Man (1998)\n17. Master of Maramba (2001)\n19. Outback Fire (2001)\n22. Mistaken Mistress (2002)\n24. Outback Angel (2002)\n33. The Australian Tycoon's Proposal (2004)\n35. His Heiress Wife (2004)\n\nMarrying the Boss Series Multi-Author\nBoardroom Proposal (1999)\n\nContract Brides Series Multi-Author\nStrategy for Marriage (2002)\n\nEverlasting Love Series Multi-Author\nHidden Legacy (2008)\n\nDiamond Brides Series Multi-Author\nThe Australian's Society Bride (2008)\n\nCollections\nSummer Magic / Ring of Jade / Noonfire (1981)\nWife at Kimbara / Bridesmaid's Wedding (2005)\n\nOmnibus in Collaboration\nPretty Witch / Without Any Amazement / Storm Over Mandargi (1977) (with Lucy Gillen and Margaret Malcolm)\nDear Caliban / Heart of the Eagle / Swans' Reach (1978) (with Jane Donnelly and Elizabeth Graham)\nThe Bonds of Matrimony / Dragon Island / Reeds of Honey (1979) (with Elizabeth Hunter and Henrietta Reid)\nThe Man Outside / Castles in Spain / McCabe's Kingdom (1979) (with Jane Donnelly and Rebecca Stratton)\nWinds From The Sea / Island of Darkness / Wind River (1979) (with Margaret Pargeter and Rebecca Stratton)\nMoorland Magic / Tree of Idleness / Sweet Sundown (1980) (with Elizabeth Ashton and Elizabeth Hunter)\nThe Shifting Sands / Portrait of Jaime / Touched by Fire (1982) (with Jane Donnelly and Kay Thorpe)\nHead of Chancery / Wild Heart / One-Way Ticket (1986) (with Betty Beaty and Doris Smith)\nHeart of the Scorpion / The Winds of Heaven / Sweet Compulsion (1987) (with Janice Gray and Victoria Woolf)\nOne Brief Sweet Hour / Once More With Feeling / Blue Lotus (1990) (with Jane Arbor and Natalie Sparks)\nMarry Me Cowboy (1995) (with Janet Dailey, Susan Fox and Anne McAllister)\nHusbands on Horseback (1996) (with Diana Palmer)\nWedlocked (1999) (with Day Leclaire and Anne McAllister)\nMistletoe Magic (1999) (with Betty Neels and Rebecca Winters)\nThe Australians (2000) (with Helen Bianchin and Miranda Lee)\nWeddings Down Under (2001) (with Helen Bianchin and Jessica Hart)\nOutback Husbands 
(2002) (with Marion Lennox)\nThe Mother's Day Collection (2002) (with Helen Dickson and Kate Hoffmann)\nAustralian Nights (2003) (with Miranda Lee)\nOutback Weddings (2003) (with Barbara Hannay)\nAustralian Playboys (2003) (with Helen Bianchin and Marion Lennox)\nAustralian Tycoons (2004) (with Emma Darcy and Marion Lennox)\nA Mother's Day Gift (2004) (with Anne Ashley and Lucy Monroe)\nWhite Wedding (2004) (with Judy Christenberry and Jessica Steele)\nA Christmas Engagement (2004) (with Sara Craven and Jessica Matthews)\nA Very Special Mother's Day (2005) (with Anne Herries)\nAll I Want for Christmas... (2005) (with Betty Neels and Jessica Steele)\nThe Mills and Boon Collection (2006) (with Caroline Anderson and Penny Jordan)\nOutback Desire (2006) (with Emma Darcy and Carol Marinelli)\nTo Mum, with Love (2006) (with Rebecca Winters)\nAustralian Heroes (2007) (with Marion Lennox and Fiona McArthur)\nTall, Dark and Sexy (2008) (with Caroline Anderson and Helen Bianchin)\nThe Boss's Proposal (2008) (with Jessica Steele and Patricia Thayer)\nIsland Heat / Outback Man Seeks Wife / Prince's Forbidden Virgin / One Night Before Marriage / Their Lost-and-found Family / Single Dad's Marriage Wish (2008) (with Robyn Donald, Marion Lennox, Carol Marinelli, Sarah Mayberry and Anne Oliver)\nAustralian Billionaires (2009) (with Jennie Adams and Amy Andrews)\nCattle Baron : Nanny Needed / Bachelor Dad on Her Doorstep (2009) (with Michelle Douglas)\n\nExternal links\nMargaret Way at Harlequin Enterprises Ltd\n\nAustralian romantic fiction writers\nAustralian women novelists\nLiving people\nYear of birth missing (living people)\nWomen romantic fiction writers", "answers": ["Margaret Way started self-publishing her books as e-books in 2013."], "length": 1201, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "f20114170ce5bb518f17950edb4ef828d980bba0bba01077"} {"input": "What hedge fund's collapse in 1998 highlighted the need for regulation of derivatives?", "context": "Brooksley Elizabeth Born (born August 27, 1940) is an American attorney and former public official who, from August 26, 1996, to June 1, 1999, was chair of the Commodity Futures Trading Commission (CFTC), the federal agency which oversees the U.S. futures and commodity options markets. During her tenure on the CFTC, Born lobbied Congress and the President to give the CFTC oversight of off-exchange markets for derivatives, in addition to its role with respect to exchange-traded derivatives, but her warnings were ignored or dismissed, and her calls for reform resisted by other regulators.Goodman, Peter S. The Reckoning - Taking Hard New Look at a Greenspan Legacy, The New York Times, October 9, 2008. Born resigned as chairperson on June 1, 1999, shortly after Congress passed legislation prohibiting her agency from regulating derivatives.\n\nIn 2009, Born received the John F. Kennedy Profiles in Courage Award, along with Sheila Bair of the Federal Deposit Insurance Corporation, in recognition of the \"political courage she demonstrated in sounding early warnings about conditions that contributed\" to the 2007-08 financial crisis.\n\nEarly life and education\nBorn graduated from Abraham Lincoln High School (San Francisco, California) at the age of 16. She then attended Stanford University, where she majored in English and was graduated with the class of 1961. 
She initially wanted to become a doctor, but a guidance counsellor at Stanford advised her against medicine, so she majored in English literature instead.\n\nShe then attended Stanford Law School, one of only seven women in her class. She was the first female student ever to be named president of the Stanford Law Review. She received the \"Outstanding Senior\" award and graduated as valedictorian of the class of 1964.\n\nLegal career\nImmediately after law school Born was selected as a law clerk to judge Henry Edgerton of the U.S. Court of Appeals for the District of Columbia Circuit. It was during this time that she met her first husband, Jacob C. Landau, who was a journalist covering the Federal courts at the time. Following her clerkship, she became an associate at the Washington, D.C.-based international law firm of Arnold & Porter. Born was attracted to Arnold & Porter because it was one of the few major law firms to have a woman partner at that time, Carolyn Agger, who was the head of the tax practice. Born took a two-year leave of absence from Arnold & Porter to accompany her first husband to Boston, where he had received a fellowship. During that time she worked as a research assistant to law professor Alan Dershowitz.\n\nBorn's early career at Arnold & Porter focused on international trade law, in which she represented a number of Swiss industries and the government of Switzerland. She developed a practice representing clients in numerous complex litigation and arbitration cases involving financial market transactions. Among her high-profile cases was the matter of the Hunt Brothers attempt to corner the market in silver in the 1970s. She made partner at Arnold & Porter, after moving to a three-day schedule to help raise her second child, and eventually rose to be the head of the firm's derivatives practice.\n\nBorn was among the first female attorneys to systematically address inequities regarding how the laws treated women. Born and another female lawyer, Marna Tucker, taught what is considered to have been the first \"Women and the Law\" course at Catholic University’s Columbus School of Law. The class exclusively concerned prejudicial treatment of women under the laws of the United States, past and present. Born and Tucker were surprised to discover that there was no textbook on the issue at the time. Born is also one of the co-founders of the National Women's Law Center. Born also helped rewrite the American Bar Association rules to make it possible for more women and minorities to sit on federal bench.\n\nDuring her long legal career, and into her retirement, Born did much pro bono and other types of volunteer work. She was active in the American Bar Association, the largest professional organization of lawyers in the United States. Initially Born was named a member of the governing council of the ABA's Individual Rights Section, eventually becoming chairperson. Born and Tucker founded the ABA Women's Caucus, the first organization of female lawyers in the ABA. She held several other senior positions in the ABA, including being named the first woman member of the ABA's Standing Committee on the Federal Judiciary. As a member of the Judiciary Committee, Born provided testimony and opinion on persons nominated for federal judgeships. In 1980 she was named chair of the committee. As chair of the committee, Born was invited to address the U.S. Congress regarding the nomination of Judge Sandra Day O'Connor to the U.S. 
Supreme Court.\n\nIn 1993, Born's name was floated as a possible candidate for Attorney General of the United States, but Janet Reno was nominated.\n\nIn July 2009, Nancy Pelosi appointed Brooksley Born as a commissioner to the Financial Crisis Inquiry Commission (FCIC).\n\nBorn and the OTC derivatives market\nBorn was appointed to the CFTC on April 15, 1994, by President Bill Clinton. Due to litigation against Bankers Trust Company by Procter and Gamble and other corporate clients, Born and her team at the CFTC sought comments on the regulation of over-the-counter derivatives, a first step in the process of writing CFTC regulations to supplement the existing regulations of the Federal Reserve System, the Options Clearing Corporation, and the National Association of Insurance Commissioners. Born was particularly concerned about swaps, financial instruments that are traded over the counter between banks, insurance companies or other funds or companies, and thus have no transparency except to the two counterparties and the counterparties' regulators, if any. CFTC regulation was strenuously opposed by Federal Reserve chairman Alan Greenspan, and by Treasury Secretaries Robert Rubin and Lawrence Summers. On May 7, 1998, former SEC Chairman Arthur Levitt joined Rubin and Greenspan in objecting to the issuance of the CFTC's concept release. Their response dismissed Born's analysis and focused on the hypothetical possibility that CFTC regulation of swaps and other OTC derivative instruments could create a \"legal uncertainty\" regarding such financial instruments, hypothetically reducing the value of the instruments. They argued that the imposition of regulatory costs would \"stifle financial innovation\" and encourage financial capital to transfer its transactions offshore. The disagreement between Born and the Executive Office's top economic policy advisors has been described not only as a classic Washington turf war, but also a war of ideologies, insofar as it is possible to argue that Born's actions were consistent with Keynesian and neoclassical economics while Greenspan, Rubin, Levitt, and Summers consistently espoused neoliberal, and neoconservative policies.\n\nIn 1998, a trillion-dollar hedge fund called Long Term Capital Management (LTCM) was near collapse. Using mathematical models to calculate debt risk, LTCM used derivatives to leverage $5 billion into more than $1 trillion, doing business with fifteen of Wall Street's largest financial institutions. The derivative transactions were not regulated, nor were investors able to evaluate LTCM's exposures. Born stated, \"I thought that LTCM was exactly what I had been worried about\". In the last weekend of September 1998, the President's working group was told that the entire American economy hung in the balance. After intervention by the Federal Reserve, the crisis was averted. In congressional hearings into the crisis, Greenspan acknowledged that language had been introduced into an agriculture bill that would prevent CFTC from regulating the derivatives which were at the center of the crisis that threatened the US economy. U.S. Representative Maurice Hinchey (D-NY) asked \"How many more failures do you think we'd have to have before some regulation in this area might be appropriate?\" In response, Greenspan brushed aside the substance of Born's warnings with the simple assertion that \"the degree of supervision of regulation of the over-the-counter derivatives market is quite adequate to maintain a degree of stability in the system\". 
Born's warning was that there wasn't any regulation of them. Born's chief of staff, Michael Greenberger summed up Greenspan's position this way: \"Greenspan didn't believe that fraud was something that needed to be enforced, and he assumed she probably did. And of course, she did.\" Under heavy pressure from the financial lobby, legislation prohibiting regulation of derivatives by Born's agency was passed by the Congress. Born resigned on June 1, 1999.\n\nThe derivatives market continued to grow yearly throughout both terms of George W. Bush's administration. On September 15, 2008, the bankruptcy of Lehman Brothers forced a broad recognition of a financial crisis in both the US and world capital markets. As Lehman Brothers' failure temporarily reduced financial capital's confidence, a number of newspaper articles and television programs suggested that the failure's possible causes included the conflict between the CFTC and the other regulators.Faiola, Anthony, Nakashima, Ellen and Drew, Jill. The Crash: Risk and Regulation - What Went Wrong, The Washington Post, October 15, 2008.\n\nBorn declined to publicly comment on the unfolding 2008 crisis until March 2009, when she said: \"The market grew so enormously, with so little oversight and regulation, that it made the financial crisis much deeper and more pervasive than it otherwise would have been.\" She also lamented the influence of Wall Street lobbyists on the process and the refusal of regulators to discuss even modest reforms.\n\nAn October 2009 Frontline documentary titled \"The Warning\" described Born's thwarted efforts to regulate and bring transparency to the derivatives market, and the continuing opposition thereto. The program concluded with an excerpted interview with Born sounding another warning: \"I think we will have continuing danger from these markets and that we will have repeats of the financial crisis -- may differ in details but there will be significant financial downturns and disasters attributed to this regulatory gap, over and over, until we learn from experience.\"\n\nIn 2009 Born, along with Sheila Bair of the FDIC, was awarded the John F. Kennedy Profiles in Courage Award in recognition of the \"political courage she demonstrated in sounding early warnings about conditions that contributed\" to the 2007-08 financial crisis. According to Caroline Kennedy, \"Brooksley Born recognized that the financial security of all Americans was being put at risk by the greed, negligence and opposition of powerful and well connected interests.... The catastrophic financial events of recent months have proved them [Born and Sheila Bair] right.\" One member of the President's working group had a change of heart about Brooksley Born. SEC Chairman Arthur Levitt stated \"I've come to know her as one of the most capable, dedicated, intelligent and committed public servants that I have ever come to know\", adding that \"I could have done much better. I could have made a difference\" in response to her warnings.\n\nIn 2010, a documentary film Inside Job further alleged that derivatives regulation was ineffective from the Clinton administration on. Along with fellow whistleblower, former IMF Chief Economist Raghuram Rajan, who was also scorned by the economic establishment, Brooksley Born was cited as one of the authorities arguing that financial derivatives increase economic risk.\n\n Personal life \nBorn is married to Alexander E. Bennett (also retired from Arnold & Porter). 
She has five adult children - two from a previous marriage to Jacob Landau and three stepchildren. Notably, Born was named a partner at Arnold & Porter while working part-time so she could raise her two young children. When both of her children were school-age, Born returned to practice full-time.\n\nReferences\n\nExternal links\nAttorney profile at Arnold & Porter\nBrooksley Born (2009 Winner) of the Profiles in Courage Award, with acceptance speech transcript and NECN video\n\nProfile at MarketsWiki\nSpeeches and statements\n\"Testimony Of Brooksley Born Chairperson of the CFTC Concerning The Over-The-Counter Derivatives Market\", before the House Committee On Banking And Financial Services, July 24, 1998.\n\"The Lessons of Long Term Capital Management L.P.\", Remarks of Brooksley Born, Chairperson of the CFTC, Chicago-Kent-IIT Commodities Law Institute, Chicago, Illinois, October 15, 1998.\n Interview: Brooksley Born for \"PBS Frontline: The Warning\", PBS, (streaming VIDEO 1 hour), October 20, 2009.\nArticles\nManuel Roig-Franzia. \"Credit Crisis Cassandra:Brooksley Born's Unheeded Warning Is a Rueful Echo 10 Years On\", The Washington Post, May 26, 2009\n Taibbi, Matt. \"The Great American Bubble Machine\", Rolling Stone'', July 9–23, 2009\n\n1940 births\nAmerican women lawyers\nArnold & Porter people\nClinton administration personnel\nColumbus School of Law faculty\nCommodity Futures Trading Commission personnel\nHeads of United States federal agencies\nLawyers from San Francisco\nLiving people\nStanford Law School alumni\n21st-century American women\n", "answers": ["Long Term Capital Management (LTCM)."], "length": 2091, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "a013f691a7063527fa4e7a4b357081c76656af3cd402a27a"} {"input": "What is the dynamical behavior of the anisotropic order parameter following a quench to the critical point?", "context": "\\section*{Dynamical Behaviour of $O$ in Lattice Gases}\n\nThe dynamical behaviour of the anisotropic order parameter $m$ [see Eq.~\\eqref{eq:def-m} in the Letter] following a quench to the critical point is well described by\nthe Gaussian theory for all the three lattice gas models studied, $i.e.,$ driven lattice gas with either constant (IDLG) or random (RDLG) infinite drive and equilibrium lattice gas (LG). In other words, in the short-time regime, $m \\sim t^{1/2}$ [see Eq. \\eqref{eq:mt}] and the Binder cumulant $g$ of the lowest transverse mode [defined in Eq. \\eqref{eq:binder}] is zero in this regime. The alternative order parameter $O,$ however, distinguishes between the driven (IDLG, RDLG) and the equilibrium (LG) lattice gases. \n\nIn order to understand this, we first write the phenomenological scaling form for $O$, analogous to Eq. \\eqref{eq:scalingass} in the Letter,\n\\begin{eqnarray}\nO (t, L_{\\parallel} ; S_\\Delta) = L_{\\parallel}^{-\\beta/[\\nu(1+\\Delta)]} \\tilde f_O (t/L_{\\parallel}^{z/(1+\\Delta)} ; S_\\Delta).\\quad\n\\label{eq:Oscalingass}\n\\end{eqnarray}\nWe already remarked that, in the LG, this scaling form is not compatible with the prediction $O \\sim t^{1/8} L_{\\parallel}^{-1/2}$ of the Gaussian theory. However, following Ref. \\cite{AS2002}, it can be argued that, at short times, the only dependence of $O$ on the system size $L_{\\parallel}$ is of the form $O \\sim L_\\parallel^{-1/2}$ which is very well confirmed by numerical simulations. 
Accordingly, the generic behaviour of $O$ can be assumed to be\n\\begin{eqnarray}\nO \\sim t^{\\alpha} L_\\parallel^{-1/2}, \\label{eq:O}\n\\end{eqnarray}\nwhere $\\alpha$ is a phenomenological exponent to be determined. This, along with Eq. \\eqref{eq:Oscalingass}, implies $\\tilde f_O(x) \\sim x^{\\alpha}.$ Comparing the finite-size behaviour in Eq.~\\eqref{eq:O} with Eq.~\\eqref{eq:Oscalingass} one actually infers,\n\\begin{eqnarray}\n\\alpha &=& \\frac{1+ \\Delta -2 \\beta/\\nu}{2 \\, (4- \\eta)}. \\label{eq:alpha}\n\\end{eqnarray}\nThis equation, together with the hyperscaling relation $\\Delta - 2 \\beta/\\nu= - \\eta$ in two spatial dimensions, shows that the prediction $\\alpha = 1/8$ of the Gaussian theory [see Eq. \\eqref{eq:Ot}] can be obtained only when $\\eta=0,$ which is the case for the IDLG (exactly) and the RDLG (approximately) but not for the LG. \n\nOn the other hand, Eq.~\\eqref{eq:alpha} predicts $\\alpha = 1/10$ upon substituting the values of the critical exponents corresponding to the Ising universality class (LG). This is consistent with the numerical simulation results presented in the main text, see Fig. \\ref{fig:ising}(b) therein.\n\n\\begin{figure}[th]\n\\vspace*{0.2 cm}\n \\centering\n \\includegraphics[width=10 cm]{./compare_binder.pdf}\n\n\\caption{Comparison between the temporal evolution of the Binder cumulants $g$ corresponding to the $12^{th}$ transverse mode, $i.e.,$ with $n_\\perp =12,$ in the LG (lowest curve), IDLG and RDLG (two upper curves) on a $32 \\times 32$ lattice. \\label{fig:b}}\n \\label{fig:binder}\n\\end{figure}\n\n\nThe emergence of this new value $1/10$ of the exponent $\\alpha$ must be traced back to the non-Gaussian nature of higher fluctuating modes in the LG. In fact, even though the lowest mode behaves identically in all the three models we considered, characterized by the same behaviour of $m$, higher modes show a significant difference in the non-driven case. \n\n\nTo illustrate this, we measured the Binder cumulants of higher modes which is defined analogously to Eq.~(11), using transverse modes other than the first, i.e., with $\\mu=\\tilde \\sigma(0,2 \\pi n_\\bot/L_\\bot)$ and $n_\\bot>1.$ \n Figure \\ref{fig:b} compares the same for all the three lattice gases for the mode with $n_\\perp =12$ on a $32 \\times 32$ lattice. Clearly, the curve corresponding to the LG (lowest, blue) departs from Gaussian behaviour $g=0$ (in practice, $e.g.,$ $|g| \\lesssim 0.005,$ corresponding to the shaded gray area) much earlier than it does for the IDLG or RDLG (two upper curves, red and green respectively).\n\nAccordingly, the different dynamical behaviour of $O$, which involves a sum over all modes, can be attributed to the non-Gaussian nature of the higher modes in the LG. \nSuch a departure is not entirely surprising. In fact, for higher modes, mesoscopic descriptions such as the ones in Eqs. 
\\eqref{eq:L-DLG} or \\eqref{eq:g_evol} are not expected to hold, while the anisotropy at the microscopic level could be the mechanism leading to the Gaussianity of higher modes in the driven models.\n\n", "answers": ["It is well described by the Gaussian theory."], "length": 669, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "322b4b2b074049aac101086797d655a21671ef5a9f366353"} {"input": "When did Born resign as chairperson of the CFTC?", "context": "Brooksley Elizabeth Born (born August 27, 1940) is an American attorney and former public official who, from August 26, 1996, to June 1, 1999, was chair of the Commodity Futures Trading Commission (CFTC), the federal agency which oversees the U.S. futures and commodity options markets. During her tenure on the CFTC, Born lobbied Congress and the President to give the CFTC oversight of off-exchange markets for derivatives, in addition to its role with respect to exchange-traded derivatives, but her warnings were ignored or dismissed, and her calls for reform resisted by other regulators.Goodman, Peter S. The Reckoning - Taking Hard New Look at a Greenspan Legacy, The New York Times, October 9, 2008. Born resigned as chairperson on June 1, 1999, shortly after Congress passed legislation prohibiting her agency from regulating derivatives.\n\nIn 2009, Born received the John F. Kennedy Profiles in Courage Award, along with Sheila Bair of the Federal Deposit Insurance Corporation, in recognition of the \"political courage she demonstrated in sounding early warnings about conditions that contributed\" to the 2007-08 financial crisis.\n\nEarly life and education\nBorn graduated from Abraham Lincoln High School (San Francisco, California) at the age of 16. She then attended Stanford University, where she majored in English and was graduated with the class of 1961. She initially wanted to become a doctor, but a guidance counsellor at Stanford advised her against medicine, so she majored in English literature instead.\n\nShe then attended Stanford Law School, one of only seven women in her class. She was the first female student ever to be named president of the Stanford Law Review. She received the \"Outstanding Senior\" award and graduated as valedictorian of the class of 1964.\n\nLegal career\nImmediately after law school Born was selected as a law clerk to judge Henry Edgerton of the U.S. Court of Appeals for the District of Columbia Circuit. It was during this time that she met her first husband, Jacob C. Landau, who was a journalist covering the Federal courts at the time. Following her clerkship, she became an associate at the Washington, D.C.-based international law firm of Arnold & Porter. Born was attracted to Arnold & Porter because it was one of the few major law firms to have a woman partner at that time, Carolyn Agger, who was the head of the tax practice. Born took a two-year leave of absence from Arnold & Porter to accompany her first husband to Boston, where he had received a fellowship. During that time she worked as a research assistant to law professor Alan Dershowitz.\n\nBorn's early career at Arnold & Porter focused on international trade law, in which she represented a number of Swiss industries and the government of Switzerland. She developed a practice representing clients in numerous complex litigation and arbitration cases involving financial market transactions. Among her high-profile cases was the matter of the Hunt Brothers attempt to corner the market in silver in the 1970s. 
She made partner at Arnold & Porter, after moving to a three-day schedule to help raise her second child, and eventually rose to be the head of the firm's derivatives practice.\n\nBorn was among the first female attorneys to systematically address inequities regarding how the laws treated women. Born and another female lawyer, Marna Tucker, taught what is considered to have been the first \"Women and the Law\" course at Catholic University’s Columbus School of Law. The class exclusively concerned prejudicial treatment of women under the laws of the United States, past and present. Born and Tucker were surprised to discover that there was no textbook on the issue at the time. Born is also one of the co-founders of the National Women's Law Center. Born also helped rewrite the American Bar Association rules to make it possible for more women and minorities to sit on federal bench.\n\nDuring her long legal career, and into her retirement, Born did much pro bono and other types of volunteer work. She was active in the American Bar Association, the largest professional organization of lawyers in the United States. Initially Born was named a member of the governing council of the ABA's Individual Rights Section, eventually becoming chairperson. Born and Tucker founded the ABA Women's Caucus, the first organization of female lawyers in the ABA. She held several other senior positions in the ABA, including being named the first woman member of the ABA's Standing Committee on the Federal Judiciary. As a member of the Judiciary Committee, Born provided testimony and opinion on persons nominated for federal judgeships. In 1980 she was named chair of the committee. As chair of the committee, Born was invited to address the U.S. Congress regarding the nomination of Judge Sandra Day O'Connor to the U.S. Supreme Court.\n\nIn 1993, Born's name was floated as a possible candidate for Attorney General of the United States, but Janet Reno was nominated.\n\nIn July 2009, Nancy Pelosi appointed Brooksley Born as a commissioner to the Financial Crisis Inquiry Commission (FCIC).\n\nBorn and the OTC derivatives market\nBorn was appointed to the CFTC on April 15, 1994, by President Bill Clinton. Due to litigation against Bankers Trust Company by Procter and Gamble and other corporate clients, Born and her team at the CFTC sought comments on the regulation of over-the-counter derivatives, a first step in the process of writing CFTC regulations to supplement the existing regulations of the Federal Reserve System, the Options Clearing Corporation, and the National Association of Insurance Commissioners. Born was particularly concerned about swaps, financial instruments that are traded over the counter between banks, insurance companies or other funds or companies, and thus have no transparency except to the two counterparties and the counterparties' regulators, if any. CFTC regulation was strenuously opposed by Federal Reserve chairman Alan Greenspan, and by Treasury Secretaries Robert Rubin and Lawrence Summers. On May 7, 1998, former SEC Chairman Arthur Levitt joined Rubin and Greenspan in objecting to the issuance of the CFTC's concept release. Their response dismissed Born's analysis and focused on the hypothetical possibility that CFTC regulation of swaps and other OTC derivative instruments could create a \"legal uncertainty\" regarding such financial instruments, hypothetically reducing the value of the instruments. 
They argued that the imposition of regulatory costs would \"stifle financial innovation\" and encourage financial capital to transfer its transactions offshore. The disagreement between Born and the Executive Office's top economic policy advisors has been described not only as a classic Washington turf war, but also a war of ideologies, insofar as it is possible to argue that Born's actions were consistent with Keynesian and neoclassical economics while Greenspan, Rubin, Levitt, and Summers consistently espoused neoliberal, and neoconservative policies.\n\nIn 1998, a trillion-dollar hedge fund called Long Term Capital Management (LTCM) was near collapse. Using mathematical models to calculate debt risk, LTCM used derivatives to leverage $5 billion into more than $1 trillion, doing business with fifteen of Wall Street's largest financial institutions. The derivative transactions were not regulated, nor were investors able to evaluate LTCM's exposures. Born stated, \"I thought that LTCM was exactly what I had been worried about\". In the last weekend of September 1998, the President's working group was told that the entire American economy hung in the balance. After intervention by the Federal Reserve, the crisis was averted. In congressional hearings into the crisis, Greenspan acknowledged that language had been introduced into an agriculture bill that would prevent CFTC from regulating the derivatives which were at the center of the crisis that threatened the US economy. U.S. Representative Maurice Hinchey (D-NY) asked \"How many more failures do you think we'd have to have before some regulation in this area might be appropriate?\" In response, Greenspan brushed aside the substance of Born's warnings with the simple assertion that \"the degree of supervision of regulation of the over-the-counter derivatives market is quite adequate to maintain a degree of stability in the system\". Born's warning was that there wasn't any regulation of them. Born's chief of staff, Michael Greenberger summed up Greenspan's position this way: \"Greenspan didn't believe that fraud was something that needed to be enforced, and he assumed she probably did. And of course, she did.\" Under heavy pressure from the financial lobby, legislation prohibiting regulation of derivatives by Born's agency was passed by the Congress. Born resigned on June 1, 1999.\n\nThe derivatives market continued to grow yearly throughout both terms of George W. Bush's administration. On September 15, 2008, the bankruptcy of Lehman Brothers forced a broad recognition of a financial crisis in both the US and world capital markets. As Lehman Brothers' failure temporarily reduced financial capital's confidence, a number of newspaper articles and television programs suggested that the failure's possible causes included the conflict between the CFTC and the other regulators.Faiola, Anthony, Nakashima, Ellen and Drew, Jill. 
The Crash: Risk and Regulation - What Went Wrong, The Washington Post, October 15, 2008.\n\nBorn declined to publicly comment on the unfolding 2008 crisis until March 2009, when she said: \"The market grew so enormously, with so little oversight and regulation, that it made the financial crisis much deeper and more pervasive than it otherwise would have been.\" She also lamented the influence of Wall Street lobbyists on the process and the refusal of regulators to discuss even modest reforms.\n\nAn October 2009 Frontline documentary titled \"The Warning\" described Born's thwarted efforts to regulate and bring transparency to the derivatives market, and the continuing opposition thereto. The program concluded with an excerpted interview with Born sounding another warning: \"I think we will have continuing danger from these markets and that we will have repeats of the financial crisis -- may differ in details but there will be significant financial downturns and disasters attributed to this regulatory gap, over and over, until we learn from experience.\"\n\nIn 2009 Born, along with Sheila Bair of the FDIC, was awarded the John F. Kennedy Profiles in Courage Award in recognition of the \"political courage she demonstrated in sounding early warnings about conditions that contributed\" to the 2007-08 financial crisis. According to Caroline Kennedy, \"Brooksley Born recognized that the financial security of all Americans was being put at risk by the greed, negligence and opposition of powerful and well connected interests.... The catastrophic financial events of recent months have proved them [Born and Sheila Bair] right.\" One member of the President's working group had a change of heart about Brooksley Born. SEC Chairman Arthur Levitt stated \"I've come to know her as one of the most capable, dedicated, intelligent and committed public servants that I have ever come to know\", adding that \"I could have done much better. I could have made a difference\" in response to her warnings.\n\nIn 2010, a documentary film Inside Job further alleged that derivatives regulation was ineffective from the Clinton administration on. Along with fellow whistleblower, former IMF Chief Economist Raghuram Rajan, who was also scorned by the economic establishment, Brooksley Born was cited as one of the authorities arguing that financial derivatives increase economic risk.\n\n Personal life \nBorn is married to Alexander E. Bennett (also retired from Arnold & Porter). She has five adult children - two from a previous marriage to Jacob Landau and three stepchildren. Notably, Born was named a partner at Arnold & Porter while working part-time so she could raise her two young children. When both of her children were school-age, Born returned to practice full-time.\n\nReferences\n\nExternal links\nAttorney profile at Arnold & Porter\nBrooksley Born (2009 Winner) of the Profiles in Courage Award, with acceptance speech transcript and NECN video\n\nProfile at MarketsWiki\nSpeeches and statements\n\"Testimony Of Brooksley Born Chairperson of the CFTC Concerning The Over-The-Counter Derivatives Market\", before the House Committee On Banking And Financial Services, July 24, 1998.\n\"The Lessons of Long Term Capital Management L.P.\", Remarks of Brooksley Born, Chairperson of the CFTC, Chicago-Kent-IIT Commodities Law Institute, Chicago, Illinois, October 15, 1998.\n Interview: Brooksley Born for \"PBS Frontline: The Warning\", PBS, (streaming VIDEO 1 hour), October 20, 2009.\nArticles\nManuel Roig-Franzia. 
\"Credit Crisis Cassandra:Brooksley Born's Unheeded Warning Is a Rueful Echo 10 Years On\", The Washington Post, May 26, 2009\n Taibbi, Matt. \"The Great American Bubble Machine\", Rolling Stone'', July 9–23, 2009\n\n1940 births\nAmerican women lawyers\nArnold & Porter people\nClinton administration personnel\nColumbus School of Law faculty\nCommodity Futures Trading Commission personnel\nHeads of United States federal agencies\nLawyers from San Francisco\nLiving people\nStanford Law School alumni\n21st-century American women\nStanford University alumni", "answers": ["June 1, 1999."], "length": 2088, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "2d48bbc2bd6052c5e67824752a9296cbbc26351616cd6d7e"} {"input": "When did the 2017 general election be held?", "context": "Sir Simon William English (born 30 December 1961) is a New Zealand former National Party politician who served as the 39th prime minister of New Zealand from 2016 to 2017. He had previously served as the 17th deputy prime minister of New Zealand and minister of finance from 2008 to 2016 under John Key and the Fifth National Government.\n\nA farmer and public servant before entering politics, English was elected to the New Zealand Parliament in as the National Party's candidate in the Wallace electorate. He was elevated to Cabinet in 1996 and in 1999 was made minister of finance, although he served for less than a year due to his party's loss at the 1999 general election. In October 2001, English replaced Jenny Shipley as the leader of the National Party (and consequently as Leader of the Opposition). He led the party to its worst defeat at the 2002 general election, and as a consequence, in October 2003 he was replaced as leader by Don Brash.\n\nIn November 2006, after Brash's resignation, English became deputy leader under John Key. After National's victory at the 2008 general election, he became deputy prime minister and was also made minister of finance for the second time. Under English's direction New Zealand's economy maintained steady growth during National's three terms of government. He became a list-only MP after stepping down as an electorate MP at the 2014 general election.\n\nJohn Key resigned as leader of the National Party and prime minister in December 2016. English won the resulting leadership election unopposed and was sworn in as prime minister on 12 December 2016. His tenure was only ten months, and included a three-month election campaign. In the 2017 general election, National won the largest number of seats but fell short of a majority. The parties holding the balance of power declined to support the existing government, and English was subsequently replaced as prime minister by Jacinda Ardern, leader of the Labour Party. English initially continued on as Leader of the Opposition, but resigned as leader of the National Party on 27 February 2018 and left parliament two weeks later.\n\nEarly life\nEnglish was born on 30 December 1961 at Lumsden Maternity Centre in Lumsden. He is the eleventh of twelve children of Mervyn English and Norah (née O'Brien) English. His parents purchased Rosedale, a mixed sheep and cropping farm in Dipton, Southland from Mervyn's uncle, Vincent English, a bachelor, in 1944. English was born in the maternity unit at Lumsden.\n\nEnglish attended St Thomas's School in Winton, then boarded at St. Patrick's College in Upper Hutt, where he became head boy. He played in the first XV of the school's rugby team. 
English went on to study commerce at the University of Otago, where he was a resident at Selwyn College, and then completed an honours degree in English literature at Victoria University of Wellington.\n\nAfter finishing his studies, English returned to Dipton and farmed for a few years. From 1987 to 1989, he worked in Wellington as a policy analyst for the New Zealand Treasury, at a time when the free market policies favoured by Labour's finance minister Roger Douglas (known collectively as \"Rogernomics\") were being implemented.\n\nEnglish joined the National Party in 1980, while at Victoria University. He served for a period as chairman of the Southland branch of the Young Nationals, and became a member of the Wallace electorate committee. After moving to Wellington, he served for periods on the Island Bay and Miramar electorate committees, respectively.\n\nFourth National Government (1990–1999)\n\nAt the 1990 general election, English stood as the National candidate in Wallace, replacing the retiring Derek Angus, and was elected with a large majority. He would hold this seat, renamed Clutha-Southland in 1996, until 2014. He and three other newly elected National MPs (Tony Ryall, Nick Smith, and Roger Sowry) were soon identified as rising stars in New Zealand politics, and at various points were dubbed the \"brat pack\", the \"gang of four\", and the \"young Turks\". In his first term in parliament, English chaired a select committee into social services. He was made a parliamentary under-secretary in 1993, serving under the Minister of Health.\n\nFirst period in cabinet (1996–1999)\nIn early 1996, English was elevated to cabinet by Prime Minister Jim Bolger, becoming the Minister for Crown Health Enterprises and Associate Minister of Education (to Wyatt Creech). He was 34 at the time, becoming the cabinet's youngest member. After the 1996 general election, the National Party was forced into a coalition with New Zealand First to retain government. In the resulting cabinet reshuffle, English emerged as Minister of Health. However, as a condition of the coalition agreement, NZ First's Neil Kirton (a first-term MP) was made Associate Minister of Health, effectively becoming English's deputy. This arrangement was described in the press as a \"shotgun marriage\", and there were frequent differences of opinion between the two ministers. After their relationship became unworkable, Kirton was sacked from the role in August 1997, with the agreement of NZ First leader Winston Peters.\n\nAs Minister of Health, English was responsible for continuing the reforms to the public health system that National had begun after the 1990 general election. The reforms were unpopular, and health was perceived as one of the government's weaknesses, with the health portfolio consequently being viewed as a challenge. English believed that the unpopularity of the reforms was in part due to a failure in messaging, and encouraged his National colleagues to avoid bureaucratic and money-focused language (such as references to \"balance sheets\" and \"user charges\") and instead talk about the improvements to services the government's reforms would bring. He also rejected the idea that public hospitals could be run as commercial enterprises, a view which some of his colleagues had previously promoted.\n\nBy early 1997, as dissatisfaction with Bolger's leadership began to grow, English was being touted as a potential successor, along with Jenny Shipley and Doug Graham. 
His age (35) was viewed as the main impediment to a successful leadership run. National's leadership troubles were resolved in December 1997, when Bolger resigned and Shipley was elected to the leadership unopposed. English had been a supporter of Bolger as leader, but Shipley reappointed him Minister of Health in her new cabinet.\n\nEnglish was promoted to Minister of Finance in a reshuffle in January 1999, a position which was at the time subordinate to the Treasurer, Bill Birch. After a few months, the pair switched positions as part of Birch's transition to retirement, with English assuming the senior portfolio. In early interviews, he emphasised his wish to be seen as a pragmatist rather than an ideologue, and said that the initiatives of some of his predecessors (Roger Douglas's \"Rogernomics\" and Ruth Richardson's \"Ruthanasia\") had focused on \"fruitless, theoretical debates\" when \"people just want to see problems solved\".\n\nOpposition (1999–2008)\n\nAfter the National Party lost the 1999 election to Helen Clark's Labour Party, English continued on in the shadow cabinet as National's spokesperson for finance. He was elected deputy leader of the party in February 2001, following the resignation of Wyatt Creech, with Gerry Brownlee being his unsuccessful opponent.\n\nLeader of the Opposition\nIn October 2001, after months of speculation, Jenny Shipley resigned as leader of the National Party after being told she no longer had the support of the party caucus. English was elected as her replacement unopposed (with Roger Sowry as his deputy), and consequently became Leader of the Opposition. However, he did not openly organise against Shipley, and according to The Southland Times \"there was almost an element of 'aw, shucks, I'll do it then' about Mr English's ascension\".\n\nAged 39 when he was elected, English became the second-youngest leader in the National Party's history, after Jim McLay (who was 38 when elected in 1984). He also became only the third Southlander to lead a major New Zealand political party, after Joseph Ward and Adam Hamilton. However, English failed to improve the party's performance. In the 2002 election, National suffered its worst electoral defeat ever, gaining barely more than twenty percent of the vote. English described it as \"the worst day of my political life\". Both party insiders and the general public were split as to how much to blame him for the loss, but most of the party believed that English would be able to rebuild National's support.\n\nBy late 2003, however, National's performance in opinion polls remained poor. The party had briefly increased its popularity in the year following the election, but by October its support had fallen to levels only slightly better than what it achieved in the last ballot. English also appeared in a boxing match for a charity against entertainer Ted Clarke. This did not boost his polling or that of the National party either, with suggestions that it devalued his image as a serious politician. Don Brash, former governor of the Reserve Bank and a relative newcomer to politics, began to build up support to replace English. On 28 October, Brash gained sufficient backing in Caucus to defeat English in a leadership contest.\n\nShadow cabinet roles and deputy leader\nOn 2 November 2003, when Brash changed responsibilities for certain MPs, English became National's spokesman for education, ranked at fifth place in the party's parliamentary hierarchy. He remained in parliament after the 2005 election. 
In his new shadow education portfolio, English performed strongly, and remained a party favourite despite his election defeat as leader in 2002, eventually being returned to the finance portfolio in August 2004 as deputy spokesman (while still retaining responsibility for education).\n\nIn November 2006, Brash resigned as leader. English was considered as a potential replacement leader (running against John Key) or deputy leader (against incumbent Gerry Brownlee) in the ensuing leadership election. However, a contest was avoided when the MPs agreed a Key/English ticket would run unopposed in a display of party unity. English took over the deputy leadership and the finance portfolio in the Key shadow cabinet.\n\nFifth National Government (2008–2017)\n\nDeputy Prime Minister and Minister of Finance (2008–2016)\n\nAt the 2008 election, English was re-elected by his electorate, winning by a margin of about 15,500 votes. He became Deputy Prime Minister of New Zealand and Minister of Finance in the fifth National Government, being sworn into office on 19 November 2008 and continued to serve in those roles until becoming Prime Minister on 12 December 2014. He was also made Minister of Infrastructure in National's first term of government and Minister responsible for Housing New Zealand Corporation and minister responsible for the New Zealand flag consideration process in its third.\n\nHe was comfortably re-elected in Clutha-Southland in the 2011 election but opted to run as a party-list candidate in 2014. \n\nThe pairing of John Key as leader of the National Party and English as his deputy has been compared to that of Bob Hawke and Paul Keating (in Australia) and Tony Blair and Gordon Brown (in the UK).\n\nEnglish acceded to the role of Finance Minister in the continuing wake of the financial crisis. In response to New Zealand's rising debt, English made budget deficit-reduction his main priority. His first budget outlined three focuses in New Zealand's financial recovery: \"improving the business environment and removing roadblocks to growth; investment in productive infrastructure; and improving the way government works\". One of his first acts was creating the National Infrastructure Unit, charged with formulating a plan for infrastructure projects and investments. He commissioned a government-wide spending review, with an aim to reducing government expenditure—with the exceptions of a two-year stimulus package and long-term increases on infrastructure spending.\n\nIn April 2011, the Opposition criticised English for suggesting that New Zealand businesses could use New Zealand's low wages to help it compete with Australia. The National Government campaigned for re-election in 2011 on its economic record. The Government boasted growth for five consecutive quarters up to mid-2010, totalling 1.6% of real GDP.\n\nStrong growth resulted in a surplus of $473 million for the 2015/16 financial year, projected to rise to $8.5 billion by 2020/21. In his 2016 Economic and Fiscal Update address, English stated that reducing debt and tackling the costs of the 2016 Kaikōura earthquake were higher priorities than reducing rates of tax.\n\nAllowances issue\nIn 2009, the media, including TVNZ and TV3 revealed that English was receiving about NZ$900 a week as part of a living allowance for ministers, to live in his own NZ$1.2 million Wellington home. At the time, English also received $276,200 in his annual salary as Deputy Prime Minister. 
It was also revealed other ministers with homes in the capital city were also claiming accommodation allowances. On 3 August 2009, Prime Minister John Key started a review of the housing allowances claimed by cabinet ministers. English subsequently paid back $12,000 and only claimed about $24,000 a year in living allowances. The Auditor-General's office said in September 2009 that they were making \"preliminary enquiries\" into parliamentary housing expenses in response to a letter of complaint from Progressive party leader Jim Anderton. Two days later English stated that he would no longer take up any housing allowance and had paid back all the allowance he had received since the November 2008 election.\n\nPrime Minister (2016–2017)\n\nJohn Key resigned on 12 December, and endorsed English as his successor in the resulting leadership election. Following the drop-out of both Judith Collins and Jonathan Coleman from the leadership election, English was sworn in as the 39th Prime Minister of New Zealand on 12 December 2016.\n\nEnglish appointed his first cabinet on 18 December. In a reshuffle, he appointed Steven Joyce to succeed him as Finance Minister, while most ministerial portfolios remained the same.\n\nIn February 2017, English did not attend Waitangi Day commemorations at the historic treaty grounds, reportedly in response to the Ngāpuhi iwi's decision to stop the Prime Minister from speaking at the marae. Ngāpuhi have protested the Government's negotiation of the Trans Pacific Partnership Agreement (TPPA), which the iwi believe infringes upon Māori sovereignty, and thus does not adhere to the Treaty of Waitangi. English had been invited to attend in an official capacity; his non-attendance was criticised by a Ngāpuhi elder and Opposition leader Andrew Little.\n\nIn his first overseas trip as Prime Minister, English travelled to Europe to discuss trade ties, including a prospective New Zealand–European Union free trade agreement. He first travelled to London on 13 January 2017 to meet British Prime Minister Theresa May. Discussing trade relations, English said the two nations were \"natural partners\" and would \"continue to forge ties\" after the UK's withdrawal from the EU. He also arranged to meet with London Mayor Sadiq Khan, Belgian Prime Minister Charles Michel and German Chancellor Angela Merkel. In a meeting with Merkel, English received crucial backing from Germany for a trade deal with the EU. On 16 January, English stated that his government would continue to promote TPPA, despite the United States' decision to withdraw from the agreement. He explained that Southeast Asian countries would now be treated as a priority in negotiations—he also asserted that the United States was ceding influence to China by its rejection of the trade pact.\n\nAt a press conference at the Beehive on 1 February 2017, English announced that the 2017 general election would be held on 23 September. The Prime Minister later confirmed that his party would approach ACT, United Future and the Māori Party if confidence and supply agreements were required to form a government following the election. In his second cabinet reshuffle on 24 April, English appointed Gerry Brownlee as his new Foreign Affairs Minister; he also promoted Nikki Kaye to the portfolio of Education Minister, and moved Mark Mitchell into the cabinet to become Defence Minister. The reshuffle was perceived as an election preparation.\n\nOn 13 February 2017, English welcomed Australian Prime Minister Malcolm Turnbull to Wellington. 
The two leaders reaffirmed their shared trade agenda, and discussed changes to the Australian citizenship pathway which will affect permanent residents originating from New Zealand.\n\nOn 19 June, it was reported that Todd Barclay, who succeeded English as MP for Clutha-Southland, had clandestinely recorded one of his employee's conversations the previous year, and that John Key's leaders' budget was used to pay a confidential settlement after the employee resigned. English admitted that he had been aware of the illegal recording and the settlement, and thus implicated in the scandal.\n\nDuring the 2017 National campaign launch, English introduced a $379 million social investment package including digital learning academies for high school students, more resources for mathematics, and boosting support for teaching second languages in schools, and maintaining National Standards in the school curriculum. Prime Minister English also sought to defend National's financial management and economic track record and claimed that the opposition Labour Party would raise taxes. Early opinion polling had forecast a poor showing in the election for the Labour Party, but in early August 37-year-old Jacinda Ardern took over as Labour leader and seemingly energised younger voters.\n\nAt the 2017 general election, National won the largest share of the party vote (44.4%) and the largest number of seats (56) in the House Representatives. However, National lacked enough seats to govern alone due to two of the party's support partners, the Māori Party and United Future, losing their parliamentary seats. In response, English stated that the party would be entering into talks to form a coalition with New Zealand First. Following talks with the two largest parties, New Zealand First entered a coalition arrangement with the Labour Party. English was succeeded as prime minister by Jacinda Ardern on 26 October.\n\nOpposition (2017–2018)\n\nLeader of the Opposition\nEnglish was re-elected as National Party leader on 24 October 2017. At the time of his re-election, English announced his intention to stay on as leader until the next general election. On 13 February 2018, however, he stood down as National Party leader due to personal reasons, and instructed the party to put into motion the processes to elect a new leader. He also retired from Parliament. English's resignation followed weeks of speculation that he would step aside for a new leader. On 27 February, he was succeeded as party leader by Simon Bridges as the result of the leadership election held that day.\n\nPost-premiership \nIn 2018, English joined the board of Australian conglomerate, Wesfarmers. English serves in Chairmanships of Mount Cook Alpine Salmon, Impact Lab Ltd and Manawanui Support Ltd. He is also a director of The Instillery, Centre for Independent Studies and The Todd Corporation Limited, and is a member of the Impact Advisory Group of Macquarie Infrastructure and Real Assets.\n\nPolitical and social views\n\nEnglish is regarded as more socially conservative than his predecessor, John Key. He has stated his opposition to voluntary euthanasia and physician-assisted suicide, same-sex civil unions, and the decriminalisation of prostitution. As Prime Minister he opposed any \"liberalisation\" of abortion law.\n\nIn 2004, English voted against a bill to establish civil unions for both same-sex and opposite-sex couples. 
In 2005, he voted for the Marriage (Gender Clarification) Amendment Bill, which would have amended the Marriage Act to define marriage as only between a man and a woman. English voted against the Marriage (Definition of Marriage) Amendment Bill, a bill that legalised same-sex marriage in New Zealand. However, in December 2016 he stated, \"I'd probably vote differently now on the gay marriage issue. I don't think that gay marriage is a threat to anyone else's marriage\".\n\nIn 2009, English voted against the Misuse of Drugs (Medicinal Cannabis) Amendment Bill, a bill aimed at amending the Misuse of Drugs Act so that cannabis could be used for medical purposes.\n\nPersonal life \nEnglish met his future wife, Mary Scanlon, at university. She was studying medicine at the time, and became a general practitioner. Both her parents were immigrants, her father being Samoan and her mother Italian, born on the island of Stromboli. They have six children: a daughter and five sons.\n\nEnglish is a practising Roman Catholic, but has stated that he considers his religious beliefs personal and thus separate from politics.\n\nIn June 2002, English took part in TV3's Fight For Life, a celebrity boxing fundraiser to raise money for the Yellow Ribbon anti-youth-suicide campaign, influenced by the death of a teenage nephew in 1997. He lost a split decision to former university colleague Ted Clarke.\n\nHonours\nIn the 2018 Queen's Birthday Honours, English was appointed a Knight Companion of the New Zealand Order of Merit, for services of over 27 years to the State.", "answers": ["23 September."], "length": 3422, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "94ccf4d54b36e7e9d4aa594a7bc426528630a5b8aff296a4"} {"input": "What is the relationship between the maximum velocity and the amplitude of the blob or depletion?", "context": "\\section{Model equations} \\label{sec:equations}\n\nIn drift-fluid models the continuity equation\n\\begin{align}\n \\frac{\\partial n}{\\partial t} + \\nabla\\cdot\\left( n \\vec u_E \\right) &= 0 \\label{eq:generala} \n\\end{align}\ndescribes the dynamics of the electron density $n$. Here\n$\\vec u_E := (\\hat{\\vec b} \\times \\nabla \\phi)/B$ gives the electric drift\nvelocity in a magnetic field $\\vec B := B \\hat{\\vec b}$ and an electric\npotential $\\phi$. We neglect contributions of the diamagnetic drift~\\cite{Kube2016}.\n\n\n\n\nEquation~\\eqref{eq:generala} is closed by invoking quasineutrality, i.e. the divergence of the ion polarization, \nthe electron diamagnetic and the gravitational drift currents must vanish\n\\begin{align}\n \\nabla\\cdot\\left( \\frac{n}{\\Omega} \\left( \\frac{\\partial}{\\partial t} \n + \\vec u_E \\cdot\\nabla \\right)\\frac{\\nabla_\\perp \\phi}{B} + n\\vec u_d - n\\vec u_g\\right) &=0\n . 
\n \n \n \\label{eq:generalb}\n\\end{align}\nHere we denote \n$\\nabla_\\perp\\phi/B := - \\hat{\\vec b} \\times \\vec u_E$, \nthe electron diamagnetic drift\n$\\vec u_d := - T_e(\\hat{\\vec b} \\times\\nabla n ) /enB$\nwith the electron temperature $T_e$,\nthe ion gravitational drift velocity \n$\\vec u_g := m_i \\hat{\\vec b} \\times \\vec g /B$\nwith ion mass $m_i$, and the ion gyro-frequency\n$\\Omega := eB/m_i$.\n\nCombining Eq.~\\eqref{eq:generalb} with Eq.~\\eqref{eq:generala} yields\n\\begin{align}\n \\frac{\\partial \\rho}{\\partial t} + \\nabla\\cdot\\left( \\rho\\vec u_E \\right) + \\nabla \\cdot\\left( n(\\vec u_\\psi + \\vec u_d + \\vec u_g) \\right) &= 0\\label{eq:vorticity}\n\\end{align}\nwith the polarization charge density \n$\\rho = \\nabla\\cdot( n\\nabla_\\perp \\phi / \\Omega B)$ \nand\n$\\vec u_\\psi := \\hat{\\vec b}\\times \\nabla\\psi /B$ \nwith \n$\\psi:= m_i\\vec u_E^2 /2e$.\nWe exploit this form of Eq.~\\eqref{eq:generalb} in our numerical simulations.\n\nEquations~\\eqref{eq:generala} and \\eqref{eq:generalb} respectively \\eqref{eq:vorticity} have several invariants.\nFirst, in Eq.~\\eqref{eq:generala} the relative particle number \n$M(t) := \\int \\mathrm{dA}\\, (n-n_0)$ is conserved over time\n$\\d M(t)/\\d t = 0$. \nFurthermore, we integrate \n$( T_e(1+\\ln n) -T_e \\ln B)\\partial_t n$\nas well as\n$-e\\phi \\partial_t\\rho - (m_i\\vec u_E^2/2+gm_ix - T_e\\ln B)\\partial_t n$ \nover the domain to get, disregarding boundary contributions,\n\\begin{align}\n \\frac{\\d}{\\d t}\\left[T_eS(t) + H(t) \\right] = 0, \\label{eq:energya}\\\\ \n \\frac{\\d}{\\d t} \\left[ E(t) - G(t) - H(t)\\right] = 0,\n \\label{eq:energyb}\n\\end{align}\nwhere we define \nthe entropy\n$S(t):=\\int \\mathrm{dA}\\, [n\\ln(n/n_0) - (n-n_0)]$, \nthe kinetic energy \n$E(t):=m_i \\int \\mathrm{dA}\\, n\\vec u_E^2/2$ \nand the potential energies\n$G(t) := m_i g\\int \\mathrm{dA}\\, x(n-n_0)$\nand\n$H(t) := T_e\\int \\mathrm{dA}\\, (n-n_0) \\ln (B^{-1})$.\nNote that $n\\ln( n/n_0) - n + n_0 \\approx (n-n_0)^2/2$ for $|(n-n_0)/n_0| \\ll 1$ and $S(t)$ thus reduces to the \nlocal entropy form in Reference~\\cite{Kube2016}. \n\nWe now set up a gravitational field $\\vec g = g\\hat x$ and a constant homogeneous background\nmagnetic field $\\vec B = B_0 \\hat z$ in a Cartesian coordinate system.\nThen the divergences of the electric and gravitational drift velocities $\\nabla\\cdot\\vec u_E$ and $\\nabla\\cdot\\vec u_g$\nand the diamagnetic current $\\nabla\\cdot(n\\vec u_d)$ vanish, which makes the \nflow incompressible. Furthermore, the magnetic potential energy vanishes $H(t) = 0$.\n\nIn a second system we model the inhomogeneous magnetic field present in tokamaks as\n$\\vec B := B_0 (1+ x/R_0)^{-1}\\hat z$ and neglect the gravitational drift $\\vec u_g = 0$.\nThen, the potential energy $G(t) = 0$. \nNote that \n$H(t) = m_i \\ensuremath{C_\\mathrm{s}}^2/R_0\\int\\mathrm{dA}\\, x(n-n_0) +\\mathcal O(R_0^{-2}) $\nreduces to $G(t)$ with the effective gravity $g_\\text{eff}:= \\ensuremath{C_\\mathrm{s}}^2/R_0$ with $\\ensuremath{C_\\mathrm{s}}^2 := T_e/m_i$. \nFor the rest of this letter we treat $g$ and $g_\\text{eff}$ as well as $G(t)$ and $H(t)$ on the same footing.\nThe magnetic field inhomogeneity thus entails compressible flows, which is \nthe only difference to the model describing dynamics in a homogeneous magnetic field introduced above. 
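To get a feeling for the size of this effective gravity, the short Python sketch below evaluates $g_\\text{eff} = \\ensuremath{C_\\mathrm{s}}^2/R_0$ for a set of assumed, purely illustrative edge parameters (deuterium ions, $T_e = 20\\,$eV, $R_0 = 1.65\\,$m); these values are not taken from the simulations reported below and only indicate an order of magnitude.
\\begin{verbatim}
# Illustrative sketch (not part of the original analysis): order of magnitude of
# the effective gravity g_eff = C_s^2/R_0 of the inhomogeneous magnetic field.
# All parameter values below are assumptions chosen for illustration only.
import math

e   = 1.602e-19        # elementary charge [C]
m_i = 2.0 * 1.673e-27  # deuterium ion mass [kg]    (assumed)
T_e = 20.0 * e         # electron temperature 20 eV (assumed)
R_0 = 1.65             # major radius [m]           (assumed)

C_s   = math.sqrt(T_e / m_i)   # ion acoustic speed, C_s^2 = T_e/m_i
g_eff = C_s**2 / R_0           # effective gravity of the curved field

print('C_s   = %.2e m/s'   % C_s)    # ~3e4 m/s
print('g_eff = %.2e m/s^2' % g_eff)  # ~6e8 m/s^2, many orders above 9.81 m/s^2
\\end{verbatim}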
\nSince both $S(t)\\geq 0$ and $E(t)\\geq 0$ we further derive from Eq.~\\eqref{eq:energya} and Eq.~\\eqref{eq:energyb} that the kinetic energy\nis bounded by $E(t) \\leq T_eS(t) + E(t) = T_e S(0)$; a feature absent from the gravitational system with \nincompressible flows, where $S(t) = S(0)$. \n\nWe now show that the invariants Eqs.~\\eqref{eq:energya} and \\eqref{eq:energyb} present restrictions on the velocity and\nacceleration of plasma blobs. \nFirst, we define the blobs' center of mass (COM) via $X(t):= \\int\\mathrm{dA}\\, x(n-n_0)/M$ and \nits COM velocity as $V(t):=\\d X(t)/\\d t$. \nThe latter is proportional to the total radial particle flux~\\cite{Garcia_Bian_Fundamensky_POP_2006, Held2016a}.\nWe assume\nthat $n>n_0$ and $(n-n_0)^2/2 \\leq [ n\\ln (n/n_0) - (n-n_0)]n $ to show for both systems \n\\begin{align}\n (MV)^2 &= \\left( \\int \\mathrm{dA}\\, n{\\phi_y}/{B} \\right)^2\n = \\left( \\int \\mathrm{dA}\\, (n-n_0){\\phi_y}/{B} \\right)^2\\nonumber\\\\\n \n&\\leq 2 \\left( \\int \\mathrm{dA}\\, \\left[n\\ln (n/n_0) -(n-n_0)\\right]^{1/2}\\sqrt{n}{\\phi_y}/{B}\\right)^2\\nonumber\\\\\n \n &\\leq 4 S(0) E(t)/m_i \n \n \\label{eq:inequality}\n\\end{align}\nHere we use the Cauchy-Schwartz inequality and \n$\\phi_y:=\\partial\\phi/\\partial y$. \nNote that although we derive the inequality Eq.~\\eqref{eq:inequality} only for amplitudes $\\triangle n >0$ we assume that the results also hold for depletions. This is justified by our numerical results later in this letter. \nIf we initialize our density field with a seeded blob of radius $\\ell$ and amplitude $\\triangle n$ as \n\\begin{align}\n n(\\vec x, 0) &= n_0 + \\triangle n \\exp\\left( -\\frac{\\vec x^2}{2\\ell^2} \\right), \\label{eq:inita}\n \n \n\\end{align}\nand \n$\\phi(\\vec x, 0 ) = 0$,\nwe immediately have $M := M(0) = 2\\pi \\ell^2 \\triangle n$, $E(0) = G(0) = 0$ and \n$S(0) = 2\\pi \\ell^2 f(\\triangle n)$, where $f(\\triangle n)$ captures the amplitude dependence of \nthe integral for $S(0)$. \n\nThe acceleration for both incompressible and compressible flows can be estimated\nby assuming a linear acceleration $V=A_0t$ and $X=A_0t^2/2$~\\cite{Held2016a} and using \n$E(t) = G(t) = m_igMX(t)$ in Eq.~\\eqref{eq:inequality}\n\\begin{align}\n \\frac{A_0}{g} = \\mathcal Q\\frac{2S(0)}{M} \\approx \\frac{\\mathcal Q}{2} \\frac{\\triangle n }{n_0+2\\triangle n/9}.\n \\label{eq:acceleration}\n\\end{align}\nHere, we use the Pad\\'e approximation of order $(1/1)$ of $2S(0)/M $\nand define a model parameter $\\mathcal Q$ with $0<\\mathcal Q\\leq1$ to be determined by numerical simulations.\nNote that the Pad\\'e approximation is a better approximation than a simple \ntruncated Taylor expansion especially for large relative amplitudes of order unity.\nEq.~\\eqref{eq:acceleration} predicts that $A_0/g\\sim \\triangle n/n_0$ for small \namplitudes $|\\triangle n/n_0| < 1$ and $A_0 \\sim g $ for very large amplitudes $\\triangle n /n_0 \\gg 1$, \nwhich confirms the predictions in~\\cite{Pecseli2016} and reproduces the limits discussed in~\\cite{Angus2014}.\n\nAs pointed out earlier for compressible flows Eq.~\\eqref{eq:inequality} can be further estimated\n\\begin{align}\n (MV)^2 \\leq 4 T_eS(0)^2/m_i. 
\n \\label{}\n\\end{align}\nWe therefore have a restriction on the maximum COM velocity for compressible flows, which is absent for incompressible flows\n\\begin{align}\n \\frac{\\max |V|}{\\ensuremath{C_\\mathrm{s}}} = {\\mathcal Q}\\frac{2S(0)}{M} \\approx \\frac{\\mathcal Q}{2} \\frac{|\\triangle n| }{n_0+2/9 \\triangle n } \\approx \\frac{\\mathcal Q}{2} \\frac{|\\triangle n|}{n_0}.\n \\label{eq:linear}\n\\end{align}\nFor $|\\triangle n /n_0|< 1$ Eq.~\\eqref{eq:linear} reduces to the linear scaling derived in~\\cite{Kube2016}. \nFinally, a scale analysis of Eq.~\\eqref{eq:vorticity} shows that~\\cite{Ott1978, Garcia2005, Held2016a}\n\\begin{align}\n \\frac{\\max |V|}{\\ensuremath{C_\\mathrm{s}}} = \\mathcal R \\left( \\frac{\\ell}{R_0}\\frac{|\\triangle n|}{n_0} \\right)^{1/2}.\n \\label{eq:sqrt}\n\\end{align}\nThis equation predicts a square root dependence of the center of mass velocity \non amplitude and size. \n\n\n\n\n\nWe now propose a simple phenomenological model that captures the essential dynamics\nof blobs and depletions in the previously stated systems. More specifically \nthe model reproduces the acceleration Eq.~\\eqref{eq:acceleration} with and without\nBoussinesq approximation, the square root scaling for the COM velocity \nEq.~\\eqref{eq:sqrt} for incompressible flows as well as the relation between the \nsquare root scaling Eq.~\\eqref{eq:sqrt} and the linear scaling \nEq.~\\eqref{eq:linear} for compressible flows. \nThe basic idea is that the COM of blobs behaves like \nthe one of an infinitely long plasma column immersed in an ambient plasma. \nThe dynamics of this column reduces to the one of a two-dimensional ball.\nThis idea is similar to the analytical ``top hat'' density solution for\nblob dynamics recently studied in~\\cite{Pecseli2016}.\nThe ball is subject to buoyancy as well as linear and nonlinear friction\n\\begin{align}\n M_{\\text{i}} \\frac{d V}{d t} = (M_{\\text{g}} - M_\\text{p}) g - c_1 V - \\mathrm{sgn}(V ) \\frac{1}{2}c_2 V^2.\n \\label{eq:ball}\n\\end{align}\nThe gravity $g$ has a positive sign in the coordinate system; sgn$(f)$ is the sign function. \nThe first term on the right hand side is the buoyancy, where \n$M_{\\text{g}} := \\pi \\ell^2 (n_0 + \\mathcal Q \\triangle n/2)$ \nis the gravitational mass of the ball with radius $\\ell$ and \n$M_\\mathrm{p} := n_0 \\pi \\ell^2 $ \nis the mass of the displaced ambient plasma.\nNote that if $\\triangle n<0$ the ball represents a depletion and the buoyancy term has a negative sign, i.e. the depletion will rise. \nWe introduce an inertial mass \n$M_{\\text{i}} := \\pi\\ell^2 (n_0 +2\\triangle n/9)$ \ndifferent from the gravitational mass $M_{\\text{g}}$ in order to \nrecover the initial acceleration in Eq.~\\eqref{eq:acceleration}. \nWe interpret the parameters $\\mathcal Q$ and $2/9$ as geometrical factors \nthat capture the difference of the actual blob form from the idealized\n``top hat'' solution. \nAlso note that the Boussinesq approximation appears in the model as a neglect of inertia, $M_{\\text{i}} = \\pi\\ell^2n_0$.\n\nThe second term is the linear friction term with coefficient $c_1(\\ell)$, which\ndepends on the size of the ball.\nIf we disregard the nonlinear friction, $c_2=0$, Eq.~\\eqref{eq:ball} directly yields a \nmaximum velocity $c_1V^*=\\pi \\ell^2 n g \\mathcal Q\\triangle n/2$.\nFrom our previous considerations $\\max V/\\ensuremath{C_\\mathrm{s}}=\\mathcal Q \\triangle n /2n_0$, we thus identify \n\\begin{align}\n c_1 = \\pi\\ell^2 n_0 g/\\ensuremath{C_\\mathrm{s}}. 
\n \\label{}\n\\end{align}\nThe linear friction coefficient thus depends on the gravity and the size of the\nball. \n\nThe last term in \\eqref{eq:ball} is the nonlinear friction. The sign of the force depends on whether\nthe ball rises or falls in the ambient plasma. \nIf we disregard linear friction $c_1=0$, we have the maximum velocity \n$V^*= \\sigma(\\triangle n)\\sqrt{\\pi \\ell^2|\\triangle n| g\\mathcal Q/c_2}$, \nwhich must equal \n$\\max V= \\sigma(\\triangle n) \\mathcal R \\sqrt{g \\ell |\\triangle n/n_0|}$ \nand thus\n\\begin{align}\n c_2 = {\\mathcal Q\\pi n_0\\ell }/{\\mathcal R^2}.\n \\label{}\n\\end{align}\nInserting $c_1$ and $c_2$ into Eq.~\\eqref{eq:ball}\nwe can derive the maximum absolute velocity in the form \n\\begin{align}\n \\frac{\\max |V|}{\\ensuremath{C_\\mathrm{s}}} = \n \\left(\\frac{\\mathcal R^2}{\\mathcal Q}\\right) \\frac{\\ell}{R_0} \\left( \n \\left({1+\\left( \\frac{\\mathcal Q}{\\mathcal R} \\right)^{2} \\frac{|\\triangle n|/n_0 }{\\ell/R_0}}\\right)^{1/2}-1 \\right)\n \\label{eq:vmax_theo}\n\\end{align}\nand thus have a concise expression for $\\max |V|$ that captures both the linear\nscaling \\eqref{eq:linear} as well as the square root scaling \\eqref{eq:sqrt}.\nWith Eq.~\\eqref{eq:acceleration} and Eq.~\\eqref{eq:sqrt} respectively Eq.~\\eqref{eq:vmax_theo} we \nfinally arrive at an analytical expression for the time at which the maximum velocity is reached via \n$t_{\\max V} \\sim \\max V/A_0$. Its inverse $\\gamma:=t_{\\max V}^{-1}$ gives the\nglobal interchange growth rate, for which an empirical expression was\npresented in Reference~\\cite{Held2016a}.\n\n\nWe use the open source library FELTOR \nto simulate \nEqs.~\\eqref{eq:generala} and \\eqref{eq:vorticity} with and without \ndrift compression.\nFor numerical stabilty we added small diffusive terms on the right hand \nsides of the equations.\nThe discontinuous Galerkin methods employ three polynomial coefficients and a minimum of $N_x=N_y=768$ grid cells. The box size is $50\\ell$ in order to mitigate \ninfluences of the finite box size on the blob dynamics. \nMoreover, we used the invariants in Eqs. \\eqref{eq:energya} and \\eqref{eq:energyb} as consistency tests to verify the code and repeated simulations \nalso in a gyrofluid model. \nNo differences to the results presented here were found. \nInitial perturbations on the particle density field are given by Eq.~\\eqref{eq:inita},\nwhere the perturbation amplitude $\\triangle n/n_0$ was chosen between $10^{-3}$ and $20$ for blobs and $-10^0$ and $ -10^{-3}$ for depletions. \nDue to computational reasons we show results only for $\\triangle n/n_0\\leq 20$. \n\n\nFor compressible flows we consider two different cases $\\ell/R_0 = 10^{-2}$ and\n$\\ell /R_0 = 10^{-3}$. \n For incompressible flows Eq.~\\eqref{eq:generala} and \\eqref{eq:vorticity}\n can be normalized such that the blob radius is absent from the equations~\\cite{Ott1978, Kube2012}. \n The simulations of incompressible flows can thus be used for both sizes. \nThe numerical code as well as input parameters and output data can be found \nin the supplemental dataset to this contribution~\\cite{Data2017}.\n\n\\begin{figure}[htb]\n \\includegraphics[width=\\columnwidth]{com_blobs}\n \\caption{\n The maximum radial COM velocities of blobs for compressible and incompressible flows are shown. 
\n The continuous lines show Eq.~\\eqref{eq:vmax_theo} while the \n dashed line shows the square root scaling Eq.~\\eqref{eq:sqrt} with \n $\\mathcal Q = 0.32$ and $\\mathcal R=0.85$.\n }\n \\label{fig:com_blobs}\n\\end{figure}\nIn Fig.~\\ref{fig:com_blobs} we plot the maximum COM velocity for blobs \nwith and without drift compression.\nFor incompressible flows blobs follow the square root scaling almost \nperfectly. Only at very large amplitudes velocities are slightly below\nthe predicted values. \nFor small amplitudes we observe that the compressible blobs follow\na linear scaling. When the amplitudes increase there is a transition to the\nsquare root scaling at around $\\triangle n/n_0 \\simeq 0.5$ for \n$\\ell/R_0=10^{-2}$ and $\\triangle n/n_0 \\simeq 0.05$ for $\\ell/R_0=10^{-3}$, which is consistent with Eq.~\\eqref{eq:vmax_theo} and Reference~\\cite{Kube2016}. \nIn the transition regions the simulated velocities are slightly larger than the predicted ones from Eq.~\\eqref{eq:vmax_theo}.\nBeyond these amplitudes\nthe velocities of compressible and incompressible blobs align. \n\n\\begin{figure}[htb]\n \\includegraphics[width=\\columnwidth]{com_holes}\n \\caption{\n The maximum radial COM velocities of depletions for compressible and incompressible flows are shown. \n The continuous lines show Eq.~\\eqref{eq:vmax_theo} while the \n dashed line shows the square root scaling Eq.~\\eqref{eq:sqrt} with \n $\\mathcal Q = 0.32$ and $\\mathcal R=0.85$.\n Note that small amplitudes are on the right and amplitudes close to unity are on the left side.\n }\n \\label{fig:com_depletions}\n\\end{figure}\nIn Fig.~\\ref{fig:com_depletions} we show the maximum radial COM velocity \nfor depletions instead of blobs.\nFor relative amplitudes below $|\\triangle n|/n_0 \\simeq 0.5$ (right of unity in the plot) the velocities\ncoincide with the corresponding blob velocities in Fig.~\\ref{fig:com_blobs}. \n For amplitudes larger than $|\\triangle n|/n_0\\simeq 0.5$ the \nvelocities follow the square root scaling.\nWe observe that for plasma depletions beyond $90$ percent the velocities \nin both systems reach a constant value that is very well predicted by the\nsquare root scaling. \n\n\\begin{figure}[htb]\n \\includegraphics[width=\\columnwidth]{acc_blobs}\n \\caption{\n Average acceleration of blobs for compressible and incompressible flows are shown.\n The continuous line shows the acceleration in Eq.~\\eqref{eq:acceleration} \n with $\\mathcal Q=0.32$\n while the dashed line is a linear reference line, which corresponds to the Boussinesq approximation. \n }\n \\label{fig:acc_blobs}\n\\end{figure}\nIn Fig.~\\ref{fig:acc_blobs} we show the average acceleration of blobs \nfor compressible and incompressible flows computed\nby dividing the maximum velocity $\\max V$ by the time \nto reach this velocity $t_{\\max V}$. \nWe compare the simulation results\nto the theoretical predictions Eq.~\\eqref{eq:acceleration} of our model with and without inertia. \nThe results of the compressible and incompressible systems coincide and fit very\nwell to our theoretical values. 
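As a brief consistency check (assuming the blob or depletion starts from rest, $V(0)=0$), evaluating Eq.~\\eqref{eq:ball} at $t=0$ with the masses defined above gives the initial acceleration\n\\begin{align}\n A_0 = \\frac{(M_{\\text{g}}-M_\\text{p})g}{M_{\\text{i}}} = \\frac{g\\mathcal Q \\triangle n/2}{n_0+2\\triangle n/9},\n \\label{}\n\\end{align}\nwhereas the Boussinesq choice $M_{\\text{i}}=\\pi\\ell^2 n_0$ reduces this to the purely linear dependence $g\\mathcal Q\\triangle n/(2n_0)$; the inertial correction $2\\triangle n/9$ in the denominator lowers the acceleration of blobs and raises that of depletions relative to the linear reference.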
\nFor amplitudes larger than unity, the acceleration deviates significantly from the prediction with Boussinesq approximation.\n\n\\begin{figure}[htb]\n \\includegraphics[width=\\columnwidth]{acc_holes}\n \\caption{\n Average acceleration of depletions for compressible and incompressible flows is shown.\n The continuous line shows the acceleration in Eq.~\\eqref{eq:acceleration} \n with $\\mathcal Q=0.32$\n while the dashed line is a linear reference line, which corresponds to the Boussinesq approximation. \n }\n \\label{fig:acc_depletions}\n\\end{figure}\nIn Fig.~\\ref{fig:acc_depletions} we show the simulated acceleration of depletions in the\ncompressible and the incompressible systems. We compare the simulation results\nto the theoretical predictions Eq.~\\eqref{eq:acceleration} of our model with and without inertia.\nDeviations from our theoretical prediction Eq.~\\eqref{eq:acceleration} are visible for amplitudes smaller than $\\triangle n/n_0 \\simeq -0.5$ (left of unity in the plot). The relative deviations are small at around $20$ percent. \nAs in Fig.~\\ref{fig:com_depletions} the acceleration reaches a constant value\nfor plasma depletions of more than $90$ percent.\nComparing Fig.~\\ref{fig:acc_depletions} to Fig.~\\ref{fig:acc_blobs}, the asymmetry between blobs and depletions becomes \napparent. While the acceleration of blobs is reduced for large \namplitudes compared to a linear dependence, the acceleration \nof depletions is increased. In the language of our simple buoyancy \nmodel, the inertia of depletions is reduced, while that of blobs is increased. \n\n\n\nIn conclusion, \n we discuss the dynamics of seeded blobs and depletions in a \n compressible and an incompressible system.\n With only two fit parameters our theoretical results reproduce the \n numerical COM velocities and accelerations over five orders of magnitude.\n We derive the amplitude dependence of the acceleration of blobs and depletions from \n the conservation laws of our systems in Eq.~\\eqref{eq:acceleration}. \n From the same inequality a linear regime is derived in the compressible system for \n ratios of amplitudes to sizes smaller than a critical value.\n In this regime \n the blob and depletion velocity depends linearly on the initial amplitude and \n is independent of size. The regime is absent from the system with incompressible flows.\n Our theoretical results are verified by numerical simulations for all \n amplitudes that are relevant in magnetic fusion devices.\n Finally, we suggest a new empirical blob model that captures the detailed dynamics of more complicated models. \n The Boussinesq approximation is clarified as the absence of inertia and thus an altered acceleration of blobs and depletions.\n The maximum blob velocity is not altered by the Boussinesq approximation.\n\nThe authors were supported with financial subvention from the Research Council of Norway under grant\n240510/F20. M.W. and M.H. were supported by the Austrian Science Fund (FWF) Y398. The computational\nresults presented have been achieved in part using the Vienna Scientific Cluster (VSC). 
Part of this work was performed on the Abel Cluster, owned by the University of Oslo and the Norwegian metacenter\nfor High Performance Computing (NOTUR), and operated by the Department for Research Computing at USIT,\nthe University of Oslo IT-department.\nThis work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under grant agreement No 633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.", "answers": ["The maximum velocity scales with the square root of the amplitude."], "length": 2748, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "7a503a81877d3baca86f8d7179209e4899823433ab3326f3"} {"input": "What algorithm is engaged in the PLMS-PPIC method?", "context": "\\section{Introduction}\\label{S1}\n\nThe multiple access interferences (MAI) is the root of user\nlimitation in CDMA systems \\cite{R1,R3}. The parallel least mean\nsquare-partial parallel interference cancelation (PLMS-PPIC) method\nis a multiuser detector for code division multiple access (CDMA)\nreceivers which reduces the effect of MAI in bit detection. In this\nmethod and similar to its former versions like LMS-PPIC \\cite{R5}\n(see also \\cite{RR5}), a weighted value of the MAI of other users is\nsubtracted before making the decision for a specific user in\ndifferent stages \\cite{cohpaper}. In both of these methods, the\nnormalized least mean square (NLMS) algorithm is engaged\n\\cite{Haykin96}. The $m^{\\rm th}$ element of the weight vector in\neach stage is the true transmitted binary value of the $m^{\\rm th}$\nuser divided by its hard estimate value from the previous stage. The\nmagnitude of all weight elements in all stages are equal to unity.\nUnlike the LMS-PPIC, the PLMS-PPIC method tries to keep this\nproperty in each iteration by using a set of NLMS algorithms with\ndifferent step-sizes instead of one NLMS algorithm used in LMS-PPIC.\nIn each iteration, the parameter estimate of the NLMS algorithm is\nchosen whose element magnitudes of cancelation weight estimate have\nthe best match with unity. In PLMS-PPIC implementation it is assumed\nthat the receiver knows the phases of all user channels. However in\npractice, these phases are not known and should be estimated. In\nthis paper we improve the PLMS-PPIC procedure \\cite{cohpaper} in\nsuch a way that when there is only a partial information of the\nchannel phases, this modified version simultaneously estimates the\nphases and the cancelation weights. The partial information is the\nquarter of each channel phase in $(0,2\\pi)$.\n\nThe rest of the paper is organized as follows: In section \\ref{S4}\nthe modified version of PLMS-PPIC with capability of channel phase\nestimation is introduced. In section \\ref{S5} some simulation\nexamples illustrate the results of the proposed method. Finally the\npaper is concluded in section \\ref{S6}.\n\n\\section{Multistage Parallel Interference Cancelation: Modified PLMS-PPIC Method}\\label{S4}\n\nWe assume $M$ users synchronously send their symbols\n$\\alpha_1,\\alpha_2,\\cdots,\\alpha_M$ via a base-band CDMA\ntransmission system where $\\alpha_m\\in\\{-1,1\\}$. The $m^{th}$ user\nhas its own code $p_m(.)$ of length $N$, where $p_m(n)\\in \\{-1,1\\}$,\nfor all $n$. It means that for each symbol $N$ bits are transmitted\nby each user and the processing gain is equal to $N$. 
At the\nreceiver we assume that perfect power control scheme is applied.\nWithout loss of generality, we also assume that the power gains of\nall channels are equal to unity and users' channels do not change\nduring each symbol transmission (it can change from one symbol\ntransmission to the next one) and the channel phase $\\phi_m$ of\n$m^{th}$ user is unknown for all $m=1,2,\\cdots,M$ (see\n\\cite{cohpaper} for coherent transmission). According to the above\nassumptions the received signal is\n\\begin{equation}\n\\label{e1} r(n)=\\sum\\limits_{m=1}^{M}\\alpha_m\ne^{j\\phi_m}p_m(n)+v(n),~~~~n=1,2,\\cdots,N,\n\\end{equation}\nwhere $v(n)$ is the additive white Gaussian noise with zero mean and\nvariance $\\sigma^2$. Multistage parallel interference cancelation\nmethod uses $\\alpha^{s-1}_1,\\alpha^{s-1}_2,\\cdots,\\alpha^{s-1}_M$,\nthe bit estimates outputs of the previous stage, $s-1$, to estimate\nthe related MAI of each user. It then subtracts it from the received\nsignal $r(n)$ and makes a new decision on each user variable\nindividually to make a new variable set\n$\\alpha^{s}_1,\\alpha^{s}_2,\\cdots,\\alpha^{s}_M$ for the current\nstage $s$. Usually the variable set of the first stage (stage $0$)\nis the output of a conventional detector. The output of the last\nstage is considered as the final estimate of transmitted bits. In\nthe following we explain the structure of a modified version of the\nPLMS-PIC method \\cite{cohpaper} with simultaneous capability of\nestimating the cancelation weights and the channel phases.\n\nAssume $\\alpha_m^{(s-1)}\\in\\{-1,1\\}$ is a given estimate of\n$\\alpha_m$ from stage $s-1$. Define\n\\begin{equation}\n\\label{e6} w^s_{m}=\\frac{\\alpha_m}{\\alpha_m^{(s-1)}}e^{j\\phi_m}.\n\\end{equation}\nFrom (\\ref{e1}) and (\\ref{e6}) we have\n\\begin{equation}\n\\label{e7} r(n)=\\sum\\limits_{m=1}^{M}w^s_m\\alpha^{(s-1)}_m\np_m(n)+v(n).\n\\end{equation}\nDefine\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{e8} W^s&=&[w^s_{1},w^s_{2},\\cdots,w^s_{M}]^T,\\\\\n\\label{e9}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!X^{s}(n)\\!\\!\\!&=&\\!\\!\\![\\alpha^{(s-1)}_1p_1(n),\\alpha^{(s-1)}_2p_2(n),\\cdots,\\alpha^{(s-1)}_Mp_M(n)]^T.\n\\end{eqnarray}\n\\end{subequations}\nwhere $T$ stands for transposition. From equations (\\ref{e7}),\n(\\ref{e8}) and (\\ref{e9}), we have\n\\begin{equation}\n\\label{e10} r(n)=W^{s^T}X^{s}(n)+v(n).\n\\end{equation}\nGiven the observations $\\{r(n),X^{s}(n)\\}^{N}_{n=1}$, in modified\nPLMS-PPIC, like the PLMS-PPIC \\cite{cohpaper}, a set of NLMS\nadaptive algorithm are used to compute\n\\begin{equation}\n\\label{te1} W^{s}(N)=[w^{s}_1(N),w^{s}_2(N),\\cdots,w^{s}_M(N)]^T,\n\\end{equation}\nwhich is an estimate of $W^s$ after iteration $N$. To do so, from\n(\\ref{e6}), we have\n\\begin{equation}\n\\label{e13} |w^s_{m}|=1 ~~~m=1,2,\\cdots,M,\n\\end{equation}\nwhich is equivalent to\n\\begin{equation}\n\\label{e14} \\sum\\limits_{m=1}^{M}||w^s_{m}|-1|=0.\n\\end{equation}\nWe divide $\\Psi=\\left(0,1-\\sqrt{\\frac{M-1}{M}}\\right]$, a sharp\nrange for $\\mu$ (the step-size of the NLMS algorithm) given in\n\\cite{sg2005}, into $L$ subintervals and consider $L$ individual\nstep-sizes $\\Theta=\\{\\mu_1,\\mu_2,\\cdots,\\mu_L\\}$, where\n$\\mu_1=\\frac{1-\\sqrt{\\frac{M-1}{M}}}{L}, \\mu_2=2\\mu_1,\\cdots$, and\n$\\mu_L=L\\mu_1$. In each stage, $L$ individual NLMS algorithms are\nexecuted ($\\mu_l$ is the step-size of the $l^{th}$ algorithm). 
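As a numerical illustration (with $L=10$ chosen here purely for the sake of the example), for $M=15$ users the upper end of $\\Psi$ is\n\\begin{align}\n 1-\\sqrt{\\frac{M-1}{M}}=1-\\sqrt{\\frac{14}{15}}\\approx 0.034,\n \\label{}\n\\end{align}\nso the equally spaced step-sizes would be $\\mu_1\\approx 0.0034$, $\\mu_2\\approx 0.0068,\\cdots,\\mu_{10}\\approx 0.034$.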
In\nstage $s$ and at iteration $n$, if\n$W^{s}_k(n)=[w^s_{1,k},\\cdots,w^s_{M,k}]^T$, the parameter estimate\nof the $k^{\\rm th}$ algorithm, minimizes our criteria, then it is\nconsidered as the parameter estimate at time iteration $n$. In other\nwords if the next equation holds\n\\begin{equation}\n\\label{e17} W^s_k(n)=\\arg\\min\\limits_{W^s_l(n)\\in I_{W^s}\n}\\left\\{\\sum\\limits_{m=1}^{M}||w^s_{m,l}(n)|-1|\\right\\},\n\\end{equation}\nwhere $W^{s}_l(n)=W^{s}(n-1)+\\mu_l \\frac{X^s(n)}{\\|X^s(n)\\|^2}e(n),\n~~~ l=1,2,\\cdots,k,\\cdots,L-1,L$ and\n$I_{W^s}=\\{W^s_1(n),\\cdots,W^s_L(n)\\}$, then we have\n$W^s(n)=W^s_k(n)$, and therefore all other algorithms replace their\nweight estimate by $W^{s}_k(n)$. At time instant $n=N$, this\nprocedure gives $W^s(N)$, the final estimate of $W^s$, as the true\nparameter of stage $s$.\n\nNow consider $R=(0,2\\pi)$ and divide it into four equal parts\n$R_1=(0,\\frac{\\pi}{2})$, $R_2=(\\frac{\\pi}{2},\\pi)$,\n$R_3=(\\pi,\\frac{3\\pi}{2})$ and $R_4=(\\frac{3\\pi}{2},2\\pi)$. The\npartial information of channel phases (given by the receiver) is in\na way that it shows each $\\phi_m$ ($m=1,2,\\cdots,M$) belongs to\nwhich one of the four quarters $R_i,~i=1,2,3,4$. Assume\n$W^{s}(N)=[w^{s}_1(N),w^{s}_2(N),\\cdots,w^{s}_M(N)]^T$ is the weight\nestimate of the modified algorithm PLMS-PPIC at time instant $N$ of\nthe stage $s$. From equation (\\ref{e6}) we have\n\\begin{equation}\n\\label{tt3}\n\\phi_m=\\angle({\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}w^s_m}).\n\\end{equation}\nWe estimate $\\phi_m$ by $\\hat{\\phi}^s_m$, where\n\\begin{equation}\n\\label{ee3}\n\\hat{\\phi}^s_m=\\angle{(\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}w^s_m(N))}.\n\\end{equation}\nBecause $\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}=1$ or $-1$, we have\n\\begin{eqnarray}\n\\hat{\\phi}^s_m=\\left\\{\\begin{array}{ll} \\angle{w^s_m(N)} &\n\\mbox{if}~\n\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}=1\\\\\n\\pm\\pi+\\angle{w^s_m(N)} & \\mbox{if}~\n\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}=-1\\end{array}\\right.\n\\end{eqnarray}\nHence $\\hat{\\phi}^s_m\\in P^s=\\{\\angle{w^s_m(N)},\n\\angle{w^s_m(N)+\\pi, \\angle{w^s_m(N)}-\\pi}\\}$. If $w^s_m(N)$\nsufficiently converges to its true value $w^s_m$, the same region\nfor $\\hat{\\phi}^s_m$ and $\\phi_m$ is expected. In this case only one\nof the three members of $P^s$ has the same region as $\\phi_m$. For\nexample if $\\phi_m \\in (0,\\frac{\\pi}{2})$, then $\\hat{\\phi}^s_m \\in\n(0,\\frac{\\pi}{2})$ and therefore only $\\angle{w^s_m(N)}$ or\n$\\angle{w^s_m(N)}+\\pi$ or $\\angle{w^s_m(N)}-\\pi$ belongs to\n$(0,\\frac{\\pi}{2})$. If, for example, $\\angle{w^s_m(N)}+\\pi$ is such\na member between all three members of $P^s$, it is the best\ncandidate for phase estimation. In other words,\n\\[\\phi_m\\approx\\hat{\\phi}^s_m=\\angle{w^s_m(N)}+\\pi.\\]\nWe admit that when there is a member of $P^s$ in the quarter of\n$\\phi_m$, then $w^s_m(N)$ converges. What would happen when non of\nthe members of $P^s$ has the same quarter as $\\phi_m$? This\nsituation will happen when the absolute difference between $\\angle\nw^s_m(N)$ and $\\phi_m$ is greater than $\\pi$. It means that\n$w^s_m(N)$ has not converged yet. In this case where we can not\ncount on $w^s_m(N)$, the expected value is the optimum choice for\nthe channel phase estimation, e.g. 
if $\\phi_m \\in (0,\\frac{\\pi}{2})$\nthen $\\frac{\\pi}{4}$ is the estimation of the channel phase\n$\\phi_m$, or if $\\phi_m \\in (\\frac{\\pi}{2},\\pi)$ then\n$\\frac{3\\pi}{4}$ is the estimation of the channel phase $\\phi_m$.\nThe results of the above discussion are summarized in the next\nequation\n\\begin{eqnarray}\n\\nonumber \\hat{\\phi}^s_m = \\left\\{\\begin{array}{llll} \\angle\n{w^s_m(N)} & \\mbox{if}~\n\\angle{w^s_m(N)}, \\phi_m\\in R_i,~~i=1,2,3,4\\\\\n\\angle{w^s_m(N)}+\\pi & \\mbox{if}~ \\angle{w^s_m(N)}+\\pi, \\phi_m\\in\nR_i,~~i=1,2,3,4\\\\\n\\angle{w^s_m(N)}-\\pi & \\mbox{if}~ \\angle{w^s_m(N)}-\\pi, \\phi_m\\in\nR_i,~~i=1,2,3,4\\\\\n\\frac{(i-1)\\pi+i\\pi}{4} & \\mbox{if}~ \\phi_m\\in\nR_i,~~\\angle{w^s_m(N)},\\angle\n{w^s_m(N)}\\pm\\pi\\notin R_i,~~i=1,2,3,4.\\\\\n\\end{array}\\right.\n\\end{eqnarray}\nHaving an estimation of the channel phases, the rest of the proposed\nmethod is given by estimating $\\alpha^{s}_m$ as follows:\n\\begin{equation}\n\\label{tt4}\n\\alpha^{s}_m=\\mbox{sign}\\left\\{\\mbox{real}\\left\\{\\sum\\limits_{n=1}^{N}\nq^s_m(n)e^{-j\\hat{\\phi}^s_m}p_m(n)\\right\\}\\right\\},\n\\end{equation}\nwhere\n\\begin{equation} \\label{tt5}\nq^{s}_{m}(n)=r(n)-\\sum\\limits_{m^{'}=1,m^{'}\\ne\nm}^{M}w^{s}_{m^{'}}(N)\\alpha^{(s-1)}_{m^{'}} p_{m^{'}}(n).\n\\end{equation}\nThe inputs of the first stage $\\{\\alpha^{0}_m\\}_{m=1}^M$ (needed for\ncomputing $X^1(n)$) are given by\n\\begin{equation}\n\\label{qte5}\n\\alpha^{0}_m=\\mbox{sign}\\left\\{\\mbox{real}\\left\\{\\sum\\limits_{n=1}^{N}\nr(n)e^{-j\\hat{\\phi}^0_m}p_m(n)\\right\\}\\right\\}.\n\\end{equation}\nAssuming $\\phi_m\\in R_i$, then\n\\begin{equation}\n\\label{qqpp} \\hat{\\phi}^0_m =\\frac{(i-1)\\pi+i\\pi}{4}.\n\\end{equation}\nTable \\ref{tab4} shows the structure of the modified PLMS-PPIC\nmethod. It is to be noted that\n\\begin{itemize}\n\\item Equation (\\ref{qte5}) shows the conventional bit detection\nmethod when the receiver only knows the quarter of channel phase in\n$(0,2\\pi)$. \\item With $L=1$ (i.e. only one NLMS algorithm), the\nmodified PLMS-PPIC can be thought of as a modified version of the\nLMS-PPIC method.\n\\end{itemize}\n\nIn the following section some examples are given to illustrate the\neffectiveness of the proposed method.\n\n\\section{Simulations}\\label{S5}\n\nIn this section we have considered some simulation examples.\nExamples \\ref{ex2}-\\ref{ex4} compare the conventional, the modified\nLMS-PPIC and the modified PLMS-PPIC methods in three cases: balanced\nchannels, unbalanced channels and time varying channels. 
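For concreteness, one stage of the weight-estimation procedure described above can be sketched as follows (an illustrative sketch only; the zero initialization of the weights, the error definition $e(n)=r(n)-W^{s^T}(n-1)X^s(n)$, and all variable names are assumptions rather than details of the actual implementation):\n\\begin{verbatim}\nimport numpy as np\n\ndef plms_ppic_stage(r, codes, alpha_prev, mus):\n    # r: received chips r(n), complex, length N\n    # codes: spreading codes p_m(n) in {-1, +1}, shape (M, N)\n    # alpha_prev: bit estimates from the previous stage, shape (M,)\n    # mus: candidate NLMS step sizes mu_1, ..., mu_L\n    M, N = codes.shape\n    W = np.zeros(M, dtype=complex)    # assumed initialization of W^s\n    for n in range(N):\n        x = alpha_prev * codes[:, n]  # X^s(n)\n        e = r[n] - W @ x              # prediction error e(n)\n        # L parallel NLMS updates with different step sizes\n        cands = [W + mu * e * x / (x @ x) for mu in mus]\n        # keep the candidate whose weight magnitudes are closest to unity\n        costs = [np.sum(np.abs(np.abs(Wc) - 1.0)) for Wc in cands]\n        W = cands[int(np.argmin(costs))]\n    return W                          # W^s(N)\n\\end{verbatim}\nThe quarter information and Eqs.~(\\ref{tt4})-(\\ref{tt5}) then yield the phase and bit estimates of the stage.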
In all\nexamples, the receivers have only the quarter of each channel phase.\nExample \\ref{ex2} is given to compare the modified LMS-PPIC and the\nPLMS-PPIC in the case of balanced channels.\n\n\\begin{example}{\\it Balanced channels}:\n\\label{ex2}\n\\begin{table}\n\\caption{Channel phase estimate of the first user (example\n\\ref{ex2})} \\label{tabex5} \\centerline{{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{6}{*}{\\rotatebox{90}{$\\phi_m=\\frac{3\\pi}{8},M=15~~$}} & N(Iteration) & Stage Number& NLMS & PNLMS \\\\\n&&&&\\\\\n\\cline{2-5} & \\multirow{2}{*}{64}& s = 2 & $\\hat{\\phi}^s_m=\\frac{3.24\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{3.18\\pi}{8}$ \\\\\n\\cline{3-5} & & s = 3 & $\\hat{\\phi}^s_m=\\frac{3.24\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{3.18\\pi}{8}$ \\\\\n\\cline{2-5} & \\multirow{2}{*}{256}& s = 2 & $\\hat{\\phi}^s_m=\\frac{2.85\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.88\\pi}{8}$ \\\\\n\\cline{3-5} & & s = 3 & $\\hat{\\phi}^s_m=\\frac{2.85\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.88\\pi}{8}$ \\\\\n\\cline{2-5} \\hline\n\\end{tabular} }}\n\\end{table}\nConsider the system model (\\ref{e7}) in which $M$ users\nsynchronously send their bits to the receiver through their\nchannels. It is assumed that each user's information consists of\ncodes of length $N$. It is also assumed that the signal-to-noise\nratio (SNR) is 0 dB. In this example, no power unbalance or\nchannel loss is assumed. The step-size of the NLMS algorithm in the\nmodified LMS-PPIC method is $\\mu=0.1(1-\\sqrt{\\frac{M-1}{M}})$ and\nthe set of step-sizes of the parallel NLMS algorithms in the modified\nPLMS-PPIC method is\n$\\Theta=\\{0.01,0.05,0.1,0.2,\\cdots,1\\}(1-\\sqrt{\\frac{M-1}{M}})$,\ni.e. $\\mu_1=0.01(1-\\sqrt{\\frac{M-1}{M}}),\\cdots,\n\\mu_4=0.2(1-\\sqrt{\\frac{M-1}{M}}),\\cdots,\n\\mu_{12}=(1-\\sqrt{\\frac{M-1}{M}})$. Figure~\\ref{Figexp1NonCoh}\nillustrates the bit error rate (BER) for the case of two stages and\nfor $N=64$ and $N=256$. Simulations also show that there is no\nremarkable difference between results in two stage and three stage\nscenarios. 
Table~\\ref{tabex5} compares the average channel phase\nestimate of the first user in each stage and over $10$ runs of\nmodified LMS-PPIC and PLMS-PPIC, when the the number of users is\n$M=15$.\n\\end{example}\n\nAlthough LMS-PPIC and PLMS-PPIC, as well as their modified versions,\nare structured based on the assumption of no near-far problem\n(examples \\ref{ex3} and \\ref{ex4}), these methods and especially the\nsecond one have remarkable performance in the cases of unbalanced\nand/or time varying channels.\n\n\\begin{example}{\\it Unbalanced channels}:\n\\label{ex3}\n\\begin{table}\n\\caption{Channel phase estimate of the first user (example\n\\ref{ex3})} \\label{tabex6} \\centerline{{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{6}{*}{\\rotatebox{90}{$\\phi_m=\\frac{3\\pi}{8},M=15~~$}} & N(Iteration) & Stage Number& NLMS & PNLMS \\\\\n&&&&\\\\\n\\cline{2-5} & \\multirow{2}{*}{64}& s=2 & $\\hat{\\phi}^s_m=\\frac{2.45\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.36\\pi}{8}$ \\\\\n\\cline{3-5} & & s=3 & $\\hat{\\phi}^s_m=\\frac{2.71\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.80\\pi}{8}$ \\\\\n\\cline{2-5} & \\multirow{2}{*}{256}& s=2 & $\\hat{\\phi}^s_m=\\frac{3.09\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.86\\pi}{8}$ \\\\\n\\cline{3-5} & & s=3 & $\\hat{\\phi}^s_m=\\frac{2.93\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{3.01\\pi}{8}$ \\\\\n\\cline{2-5} \\hline\n\\end{tabular} }}\n\\end{table}\nConsider example \\ref{ex2} with power unbalanced and/or channel loss\nin transmission system, i.e. the true model at stage $s$ is\n\\begin{equation}\n\\label{ve7} r(n)=\\sum\\limits_{m=1}^{M}\\beta_m\nw^s_m\\alpha^{(s-1)}_m c_m(n)+v(n),\n\\end{equation}\nwhere $0<\\beta_m\\leq 1$ for all $1\\leq m \\leq M$. Both the LMS-PPIC\nand the PLMS-PPIC methods assume the model (\\ref{e7}), and their\nestimations are based on observations $\\{r(n),X^s(n)\\}$, instead of\n$\\{r(n),\\mathbf{G}X^s(n)\\}$, where the channel gain matrix is\n$\\mathbf{G}=\\mbox{diag}(\\beta_1,\\beta_2,\\cdots,\\beta_m)$. In this\ncase we repeat example \\ref{ex2}. We randomly get each element of\n$G$ from $[0,0.3]$. Figure~\\ref{Figexp2NonCoh} illustrates the BER\nversus the number of users. Table~\\ref{tabex6} compares the channel\nphase estimate of the first user in each stage and over $10$ runs of\nmodified LMS-PPIC and modified PLMS-PPIC for $M=15$.\n\\end{example}\n\n\\begin{example}\n\\label{ex4} {\\it Time varying channels}: Consider example \\ref{ex2}\nwith time varying Rayleigh fading channels. In this case we assume\nthe maximum Doppler shift of $40$HZ, the three-tap\nfrequency-selective channel with delay vector of $\\{2\\times\n10^{-6},2.5\\times 10^{-6},3\\times 10^{-6}\\}$sec and gain vector of\n$\\{-5,-3,-10\\}$dB. Figure~\\ref{Figexp3NonCoh} shows the average BER\nover all users versus $M$ and using two stages.\n\\end{example}\n\n\n\\section{Conclusion}\\label{S6}\n\nIn this paper, parallel interference cancelation using adaptive\nmultistage structure and employing a set of NLMS algorithms with\ndifferent step-sizes is proposed, when just the quarter of the\nchannel phase of each user is known. In fact, the algorithm has been\nproposed for coherent transmission with full information on channel\nphases in \\cite{cohpaper}. This paper is a modification on the\npreviously proposed algorithm. 
Simulation results show that the new\nmethod has a remarkable performance for different scenarios\nincluding Rayleigh fading channels even if the channel is\nunbalanced.\n\n", "answers": ["The normalized least mean square (NLMS) algorithm."], "length": 2008, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "73b9b4130b4a4931bed5ea7e24331ce61b59271eac6c6c01"} {"input": "What is the significance of the interlayer Berry connection polarizability?", "context": "Paper Info\n\nTitle: Crossed Nonlinear Dynamical Hall Effect in Twisted Bilayers\nPublish Date: 17 Mar 2023\nAuthor List: \n\nFigure\n\nFIG. 1.(a) Schematics of experimental setup.(b, c) Valence band structure and intrinsic Hall conductivity with respect to in-plane input for tMoTe2 at twist angles (b) θ = 1.2 • and (c) θ = 2 • in +K valley.Color coding in (b) and (c) denotes the layer composition σ z n (k).\nFIG. 2. (a) The interlayer BCP G, and (b) its vorticity [∂ k × G]z on the first valence band from +K valley of 1.2 • tMoTe2.Background color and arrows in (a) denote the magnitude and vector flow, respectively.Grey curves in (b) show energy contours at 1/2 and 3/4 of the band width.The black dashed arrow denotes direction of increasing hole doping level.Black dashed hexagons in (a, b) denote the boundary of moiré Brillouin zone (mBZ).\nFIG. 3. (a-c) Three high-symmetry stacking registries for tBG with a commensurate twist angle θ = 21.8 • .Lattice geometries with rotation center on an overlapping atomic site (a, b) and hexagonal center (c).(d) Schematic of the moiré pattern when the twist angle slightly deviates from 21.8 • , here θ = 21 • .Red squares marked by A, B and C are the local regions that resemble commensurate 21.8 • patterns in (a), (b) and (c), respectively.(e, f) Low-energy band structures and intrinsic Hall conductivity of the two geometries [(a) and (b) are equivalent].The shaded areas highlight energy windows ∼ ω around band degeneracies where interband transitions, not considered here, may quantitatively affect the conductivity measured.\nFIG. S4.Band structure and layer composition σ z n in +K valley of tBG (left panel) and the intrinsic Hall conductivity (right panel) at three different twist angle θ.The shaded areas highlight energy windows ∼ ω around band degeneracies in which the conductivity results should not be considered.Here σH should be multiplied by a factor of 2 accounting for spin degeneracy.\n\nabstract\n\nWe propose an unconventional nonlinear dynamical Hall effect characteristic of twisted bilayers. The joint action of in-plane and out-of-plane ac electric fields generates Hall currents j ∼ Ė⊥ × E in both sum and difference frequencies, and when the two orthogonal fields have common frequency their phase difference controls the on/off, direction and magnitude of the rectified dc Hall current.\nThis novel intrinsic Hall response has a band geometric origin in the momentum space curl of interlayer Berry connection polarizability, arising from layer hybridization of electrons by the twisted interlayer coupling. The effect allows a unique rectification functionality and a transport probe of chiral symmetry in bilayer systems.\nWe show sizable effects in twisted homobilayer transition metal dichalcogenides and twisted bilayer graphene over broad range of twist angles. Nonlinear Hall-type response to an in-plane electric field in a two dimensional (2D) system with time reversal symmetry has attracted marked interests . 
Intensive studies have been devoted to uncovering new types of nonlinear Hall transport induced by quantum geometry and their applications such as terahertz rectification and magnetic information readout .\nRestricted by symmetry , the known mechanisms of nonlinear Hall response in quasi-2D nonmagnetic materials are all of extrinsic nature, sensitive to fine details of disorders , which have limited their utilization for practical applications. Moreover, having a single driving field only, the effect has not unleashed the full potential of nonlinearity for enabling controlled gate in logic operation, where separable inputs (i.e., in orthogonal directions) are desirable.\nThe latter, in the context of Hall effect, calls for control by both out-of-plane and in-plane electric fields. A strategy to introduce quantum geometric response to out-of-plane field in quasi-2D geometry is made possible in van der Waals (vdW) layered structures with twisted stacking . Taking homobilayer as an example, electrons have an active layer degree of freedom that is associated with an out-of-plane electric dipole , whereas interlayer quantum tunneling rotates this pseudospin about in-plane axes that are of topologically nontrivial textures in the twisted landscapes .\nSuch layer pseudospin structures can underlie novel quantum geometric properties when coupled with out-ofplane field. Recent studies have found layer circular photogalvanic effect and layer-contrasted time-reversaleven Hall effect , arising from band geometric quantities. In this work we unveil a new type of nonlinear Hall effect in time-reversal symmetric twisted bilayers, where an intrinsic Hall current emerges under the combined action of an in-plane electric field E and an out-of-plane ac field E ⊥ (t): j ∼ Ė⊥ × E [see Fig. ].\nHaving the two driving fields (inputs) and the current response (output) all orthogonal to each other, the effect is dubbed as the crossed nonlinear dynamical Hall effect. This is also the first nonlinear Hall contribution of an intrinsic nature in nonmagnetic materials without external magnetic field, determined solely by the band structures, not relying on extrinsic factors such as disorders and relaxation times.\nThe effect arises from the interlayer hybridization of electronic states under the chiral crystal symmetry characteristic of twisted bilayers, and has a novel band geometric origin in the momentum space curl of interlayer Berry connection polarizability (BCP). Having two driving fields of the same frequency, a dc Hall current develops, whose on/off, direction and magnitude can all be controlled by the phase difference of the two fields, which does not affect the magnitude of the double-frequency component.\nSuch a characteristic tunability renders this effect a unique approach to rectification and transport probe of chiral bilayers. As examples, we show sizable effects in small angle twisted transition metal dichalcogenides (tTMDs) and twisted bilayer graphene (tBG), as well as tBG of large angles where Umklapp interlayer tunneling dominates.\nGeometric origin of the effect. A bilayer system couples to in-plane and out-of-plane driving electric fields in completely different ways. The in-plane field couples to the 2D crystal momentum, leading to Berry-phase effects in the 2D momentum space . 
In comparison, the outof-plane field is coupled to the interlayer dipole moment p in the form of −E ⊥ p, where p = ed 0 σz with σz as the Pauli matrix in the layer index subspace and d 0 the interlayer distance.\nWhen the system has a more than twofold rotational axis in the z direction, as in tBG and tTMDs, any in-plane current driven by the out-of-plane field alone is forbidden. It also prohibits the off-diagonal components of the symmetric part of the conductivity tensor σ ab = ∂j a /∂E ||,b with respect to the in-plane input and output.\nSince the antisymmetric part of σ ab is not allowed by the Onsager reciprocity in nonmagnetic systems, all the off-diagonal components of σ ab is forbidden, irrespective of the order of out-of-plane field. On the other hand, as we will show, an in-plane Hall conductivity σ xy = −σ yx can still be driven by the product of an in-plane field and the time variation rate of an outof-plane ac field, which is a characteristic effect of chiral bilayers.\nTo account for the effect, we make use of the semiclassical theory . The velocity of an electron in a bilayer system is given by where k is the 2D crystal momentum. Here and hereafter we suppress the band index for simplicity, unless otherwise noted. The three contributions in this equation come from the band velocity, the anomalous velocities induced by the k -space Berry curvature Ω k and by the hybrid Berry curvature Ω kE ⊥ in the (k, E ⊥ ) space.\nFor the velocity at the order of interest, the k-space Berry curvature is corrected to the first order of the variation rate of out-of-plane field Ė⊥ as Here A = u k |i∂ k |u k is the unperturbed k-space Berry connection, with |u k being the cell-periodic part of the Bloch wave, whereas is its gauge invariant correction , which can be identified physically as an in-plane positional shift of an electron induced by the time evolution of the out-of-plane field.\nFor a band with index n, we have whose numerator involves the interband matrix elements of the interlayer dipole and velocity operators, and ε n is the unperturbed band energy. Meanwhile, up to the first order of in-plane field, the hybrid Berry curvature reads Here A E || is the k-space Berry connection induced by E || field , which represents an intralayer positional shift and whose detailed expression is not needed for our purpose.\nand is its first order correction induced by the in-plane field. In addition, ε = ε + δε, where δε = eE • G Ė⊥ is the field-induced electron energy . Given that A E || is the E ⊥ -space counterpart of intralayer shift A E || , and that E ⊥ is conjugate to the interlayer dipole moment, we can pictorially interpret A E || as the interlayer shift induced by in-plane field.\nIt indeed has the desired property of flipping sign under the horizontal mirror-plane reflection, hence is analogous to the so-called interlayer coordinate shift introduced in the study of layer circular photogalvanic effect , which is nothing but the E ⊥ -space counterpart of the shift vector well known in the nonlinear optical phenomenon of shift current.\nTherefore, the E ⊥ -space BCP eG/ can be understood as the interlayer BCP. This picture is further augmented by the connotation that the interlayer BCP is featured exclusively by interlayer-hybridized electronic states: According to Eq. 
( ), if the state |u n is fully polarized in a specific layer around some momentum k, then G (k) is suppressed.\nWith the velocity of individual electrons, the charge current density contributed by the electron system can be obtained from where [dk] is shorthand for n d 2 k/(2π) 2 , and the distribution function is taken to be the Fermi function f 0 as we focus on the intrinsic response. The band geometric contributions to ṙ lead to a Hall current\nwhere is intrinsic to the band structure. This band geometric quantity measures the k-space curl of the interlayer BCP over the occupied states, and hence is also a characteristic of layer-hybridized electronic states. Via an integration by parts, it becomes clear that χ int is a Fermi surface property.\nSince χ int is a time-reversal even pseudoscalar, it is invariant under rotation, but flips sign under space inversion, mirror reflection and rotoreflection symmetries. As such, χ int is allowed if and only if the system possesses a chiral crystal structure, which is the very case of twisted bilayers .\nMoreover, since twisted structures with opposite twist angles are mirror images of each other, whereas the mirror reflection flips the sign of χ int , the direction of Hall current can be reversed by reversing twist direction. Hall rectification and frequency doubling. This effect can be utilized for the rectification and frequency doubling of an in-plane ac input E = E 0 cos ωt, provided that the out-of-plane field has the same frequency, namely E ⊥ = E 0 ⊥ cos (ωt + ϕ).\nThe phase difference ϕ between the two fields plays an important role in determining the Hall current, which takes the form of j = j 0 sin ϕ + j 2ω sin(2ωt + ϕ). ( Here ω is required to be below the threshold for direct interband transition in order to validate the semiclassical treatment, and σ H has the dimension of conductance and quantifies the Hall response with respect to the in-plane input.\nIn experiment, the Hall output by the crossed nonlinear dynamic Hall effect can be distinguished readily from the conventional nonlinear Hall effect driven by in-plane field alone, as they are odd and even, respectively, in the inplane field. One notes that while the double-frequency component appears for any ϕ, the rectified output is allowed only if the two crossed driving fields are not in-phase or antiphase.\nIts on/off, chirality (right or left), and magnitude are all controlled by the phase difference of the two fields. Such a unique tunability provides not only a prominent experimental hallmark of this effect, but also a controllable route to Hall rectification. In addition, reversing the direction of the out-of-plane field switches that of the Hall current, which also serves as a control knob.\nApplication to tTMDs. We now study the effect quantitatively in tTMDs, using tMoTe 2 as an example (see details of the continuum model in ). For illustrative purposes, we take ω/2π = 0.1 THz and E 0 ⊥ d 0 = 10 mV in what follows. Figures ) and (c) present the electronic band structures along with the layer composition σ z n (k) at twist angles θ = 1.2 • and θ = 2 • .\nIn both cases, the energy spectra exhibit isolated narrow bands with strong layer hybridization. At θ = 1.2 • , the conductivity shows two peaks ∼ 0.1e 2 /h at low energies associated with the first two valence bands. The third band does not host any sizable conductivity signal. 
At higher hole-doping levels, a remarkable conductivity peak ∼ e 2 /h appears near the gap separating the fourth and fifth bands.\nAt θ = 2 • , the conductivity shows smaller values, but the overall trends are similar: A peak ∼ O(0.01)e 2 /h appears at low energies, while larger responses ∼ O(0.1)e 2 /h can be spotted as the Fermi level decreases. One can understand the behaviors of σ H from the interlayer BCP in Eq. ( ). It favors band near-degeneracy regions in k -space made up of strongly layer hybridized electronic states.\nAs such, the conductivity is most pro- nounced when the Fermi level is located around such regions, which directly accounts for the peaks of response in Fig. that [∂ k × G] z is negligible at lower energies, and it is dominated by positive values as the doping increases, thus the conductivity rises initially.\nWhen the doping level is higher, regions with [∂ k × G] z < 0 start to contribute, thus the conductivity decreases after reaching a maximum. Application to tBG. The second example is tBG. We focus on commensurate twist angles in the large angle limit in the main text , which possess moiré-lattice assisted strong interlayer tunneling via Umklapp processes .\nThis case is appealing because the Umklapp interlayer tunneling is a manifestation of discrete translational symmetry of moiré superlattice, which is irrelevant at small twist angles and not captured by the continuum model but plays important roles in physical contexts such as higher order topological insulator and moiré excitons .\nThe Umklapp tunneling is strongest for the commensurate twist angles of θ = 21.8 • and θ = 38.2 • , whose corresponding periodic moiré superlattices have the smallest lattice constant ( √ 7 of the monolayer counterpart). Such a small moiré scale implies that the exact crystalline symmetry, which depends sensitively on fine details of rotation center, has critical influence on lowenergy response properties.\nTo capture the Umklapp tunneling, we employ the tight-binding model . Figures ) and (c) show two distinct commensurate structures of tBG at θ = 21.8 • belonging to chiral point groups D 3 and D 6 , respectively. The atomic configurations in Figs. ) are equivalent, which are constructed by twisting AA-stacked bilayer graphene around an overlapping atom site, and that in Fig. ) is obtained by rotating around a hexagonal center.\nBand structures of these two configurations are drastically different within a low-energy window of ∼ 10 meV around the κ point . Remarkably, despite large θ, we still get σ H ∼ O(0.001) e 2 /h (D 3 ) and ∼ O(0.1) e 2 /h (D 6 ), which are comparable to those at small angles (cf. Fig. in the Supplemental Material ).\nSuch sizable responses can be attributed to the strong interlayer coupling enabled by Umklapp processes . Apart from different intensities, the Hall conductivities in the two stacking configurations have distinct energy dependence: In Fig. , σ H shows a single peak centered at zero energy; In Fig. (f), it exhibits two antisymmetric peaks around zero.\nThe peaks are centered around band degeneracies, and their profiles can be understood from the distribution of [∂ k × G] z . Figure (d) illustrates the atomic structure of tBG with a twist angle slightly deviating from θ = 21.8 • , forming a supermoiré pattern. 
In short range, the local stacking geometries resemble the commensurate configurations at θ = 21.8 • , while the stacking registries at different locales differ by a translation. Similar to the moiré landscapes in the small-angle limit, there also exist high-symmetry locales: Regions A and B enclose the D 3 structure, and region C contains the D 6 configuration. Position-dependent Hall response is therefore expected in such a supermoiré. As the intrinsic Hall signal from the D 6 configuration dominates [see Figs.\n3(e) vs (f)], the net response mimics that in Fig. . Discussion. We have uncovered the crossed nonlinear dynamical intrinsic Hall effect characteristic of layer hybridized electronic states in twisted bilayers, and elucidated its geometric origin in the k -space curl of interlayer BCP. It offers a new tool for rectification and frequency doubling in chiral vdW bilayers, and is sizable in tTMD and tBG.\nHere our focus is on the intrinsic effect, which can be evaluated quantitatively for each material and provides a benchmark for experiments. There may also be extrinsic contributions, similar to the side jump and skew scattering ones in anomalous Hall effect. They typically have distinct scaling behavior with the relaxation time τ from the intrinsic effect, hence can be distinguished from the latter in experiments .\nMoreover, they are suppressed in the clean limit ωτ ≫ 1 [(ωτ ) 2 ≫ 1, more precisely] . In high-quality tBG samples, τ ∼ ps at room temperature . Much longer τ can be obtained at lower temperatures. In fact, a recent theory explaining well the resistivity of tBG predicted τ ∼ 10 −8 s at 10 K . As such, high-quality tBG under low temperatures and sub-terahertz input (ω/2π = 0.1 THz) is located in the clean limit, rendering an ideal platform for isolating the intrinsic effect.\nThis work paves a new route to driving in-plane response by out-of-plane dynamical control of layered vdW structures . The study can be generalized to other observables such as spin current and spin polarization, and the in-plane driving can be statistical forces, like temperature gradient. Such orthogonal controls rely critically on the nonconservation of layer pseudospin degree of freedom endowed by interlayer coupling, and constitute an emerging research field at the crossing of 2D vdW materials, layertronics, twistronics and nonlinear electronics.\nThis work is supported by the Research Grant Council of Hong Kong (AoE/P-701/20, HKU SRFS2122-7S05), and the Croucher Foundation. W.Y. also acknowledges support by Tencent Foundation. Cong Chen, 1, 2, * Dawei Zhai, 1, 2, * Cong Xiao, 1, 2, † and Wang Yao 1, 2, ‡ 1 Department of Physics, The University of Hong Kong, Hong Kong, China 2 HKU-UCAS Joint Institute of Theoretical and Computational Physics at Hong Kong, China Extra figures for tBG at small twist angles Figure (a) shows the band structure of tBG with θ = 1.47 • obtained from the continuum model .\nThe central bands are well separated from higher ones, and show Dirac points at κ/κ points protected by valley U (1) symmetry and a composite operation of twofold rotation and time reversal C 2z T . Degeneracies at higher energies can also be identified, for example, around ±75 meV at the γ point. As the two Dirac cones from the two layers intersect around the same area, such degeneracies are usually accompanied by strong layer hybridization [see the color in the left panel of Fig. 
].\nAdditionally, it is well-known that the two layers are strongly coupled when θ is around the magic angle (∼ 1.08 • ), rendering narrow bandwidths for the central bands. As discussed in the main text, coexistence of strong interlayer hybridization and small energy separations is expected to contribute sharp conductivity peaks near band degeneracies, as shown in Fig. .\nIn this case, the conductivity peak near the Dirac point can reach ∼ 0.1e 2 /h, while the responses around ±0.08 eV are smaller at ∼ 0.01e 2 /h. The above features are maintained when θ is enlarged, as illustrated in Figs. ) and (c) using θ = 2.65 • and θ = 6.01 • . Since interlayer coupling becomes weaker and the bands are more separated at low energies when θ increases, intensity of the conductivity drops significantly.\nWe stress that G is not defined at degenerate points, and interband transitions may occur when energy separation satisfies |ε n − ε m | ∼ ω, the effects of which are not included in the current formulations. Consequently, the results around band degeneracies within energy ∼ ω [shaded areas in Fig. ] should be excluded.", "answers": ["The momentum space curl of the interlayer Berry connection polarizability generates the crossed nonlinear dynamical Hall effect."], "length": 3508, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "7d9bc0ed11dfc39ab91980d15a95fcd9f5902d25f85ec436"} {"input": "What was the name of the first white settlement in McPherson County?", "context": "McPherson County (standard abbreviation: MP) is a county located in the U.S. state of Kansas. As of the 2020 census, the county population was 30,223. The largest city and county seat is McPherson. The county is named for Civil War General James B. McPherson.\n\nHistory\n\nEarly history\n\nFor many millennia, the Great Plains of North America was inhabited by nomadic Native Americans. From the 16th century to 18th century, the Kingdom of France claimed ownership of large parts of North America. In 1762, after the French and Indian War, France secretly ceded New France to Spain, per the Treaty of Fontainebleau. In 1802, Spain returned most of the land to France, but keeping title to about 7,500 square miles.\n\nIn 1803, most of the land for modern day Kansas was acquired by the United States from France as part of the 828,000 square mile Louisiana Purchase for 2.83 cents per acre. In 1848, after the Mexican–American War, the Treaty of Guadalupe Hidalgo with Spain brought into the United States all or part of land for ten future states, including southwest Kansas. In 1854, the Kansas Territory was organized, then in 1861 Kansas became the 34th U.S. state.\n\n19th century\n\nFrom the 1820s to 1870s, the Santa Fe Trail passed through, what is now McPherson County. The trail entered the county, east of Canton, then south of Galva, then north of Inman, and west towards Lyons. In 1855, Charles O. Fuller established a ranch adjacent to the Running Turkey Creek Crossing about two miles south and one mile east of Galva. Fuller's Ranch provided accommodations for travelers on the Santa Fe Trail and was probably the first white settlement in McPherson County.\n\nPeketon County was established in 1860, by the passage of a bill by S. N. Wood: An act to establish Peketon County. Section 1. - That all that territory west of the sixth principal meridian and south of Township 16, in Kansas Territory, be and the same is hereby erected into a county, to be known by the name of Peketon County. 
On February 17, 1865, Peketon County was abolished, and McPherson County was made a part of Marion County, which extended from the west line of Chase County to the present western boundary of Kansas.\n\nIn 1868, Solomon Stephens and L. N. Holmberg were appointed Justices of the Peace—the first officers in what is now McPherson County. The next year (1869) occurred the first election for the township, now the county of McPherson. McPherson was regularly organized as a county in the spring of 1870, a mass meeting being held at Sweadal. Sweadal, the county seat thus selected, was located about one mile and a half southwest of the present site of Lindsborg. In September, however, the County Commissioners resolved to meet at the latter place, McPherson which had already been located some two years.\n\nIn April, 1873, a petition was filed for the county seat re-location. It was signed by 483 voters, and a special election was accordingly ordered for June 10. Upon that day, McPherson received 605 votes, New Gottland 325, King City 3 and Lindsborg 1; McPherson's majority over all, 276. In May the McPherson Town Company had offered, as an inducement for the location of the county seat at this point, the free use of rooms for ten years, and the donation of two squares of land on the town site. The offer was accepted the next month, the County Commissioners selecting blocks 56 and 65. Thus the county seat was established at McPherson and has remained since.\n\nAs early as 1875, city leaders of Marion held a meeting to consider a branch railroad from Florence. In 1878, Atchison, Topeka and Santa Fe Railway and parties from Marion County and McPherson County chartered the Marion and McPherson Railway Company. In 1879, a branch line was built from Florence to McPherson, in 1880 it was extended to Lyons, in 1881 it was extended to Ellinwood. The line was leased and operated by the Atchison, Topeka and Santa Fe Railway. The line from Florence to Marion, was abandoned in 1968. In 1992, the line from Marion to McPherson was sold to Central Kansas Railway. In 1993, after heavy flood damage, the line from Marion to McPherson was abandoned. The original branch line connected Florence, Marion, Canada, Hillsboro, Lehigh, Canton, Galva, McPherson, Conway, Windom, Little River, Mitchell, Lyons, Chase, then connected with the original AT&SF main line at Ellinwood.\n\nIn 1887, the Chicago, Kansas and Nebraska Railway extended its main line from Herington to Pratt. This main line connected Herington, Ramona, Tampa, Durham, Waldeck, Canton, Galva, McPherson, Groveland, Inman, Medora, Hutchinson, Whiteside, Partridge, Arlington, Langdon, Turon, Preston, Natrona, Pratt. In 1888, this main line was extended to Liberal. Later, this line was extended to Tucumcari, New Mexico and Santa Rosa, New Mexico, where it made a connection with the Southern Pacific from El Paso, Texas. The Chicago, Kansas and Nebraska Railway was absorbed by the Chicago, Rock Island and Pacific Railway. This line is also called the \"Golden State Route\".\n\n20th century\nThe National Old Trails Road, also known as the Ocean-to-Ocean Highway, was established in 1912, and was routed through Windom, Conway, McPherson.\n\nGeography\n\nAccording to the U.S. 
Census Bureau, the county has a total area of , of which is land and (0.3%) is water.\n\nAdjacent counties\n Saline County (north)\n Dickinson County (northeast)\n Marion County (east)\n Harvey County (southeast)\n Reno County (southwest)\n Rice County (west)\n Ellsworth County (northwest)\n\nMajor highways\n Interstate 135\n U.S. Route 56\n U.S. Route 81\n K-4\n K-61\n K-153\n\nDemographics\n\nThe McPherson Micropolitan Statistical Area includes all of McPherson County.\n\n2000 census\nAs of the census of 2000, there were 29,554 people, 11,205 households, and 7,966 families residing in the county. The population density was 33 people per square mile (13/km2). There were 11,830 housing units at an average density of 13 per square mile (5/km2). The racial makeup of the county was 96.53% White, 0.81% Black or African American, 0.34% Native American, 0.32% Asian, 0.06% Pacific Islander, 0.79% from other races, and 1.16% from two or more races. 1.94% of the population were Hispanic or Latino of any race. 37.1% were of German, 12.9% Swedish, 12.1% American, 6.7% English and 6.3% Irish ancestry according to Census 2000.\n\nThere were 11,205 households, out of which 33.00% had children under the age of 18 living with them, 62.50% were married couples living together, 6.00% had a female householder with no husband present, and 28.90% were non-families. 25.50% of all households were made up of individuals, and 11.80% had someone living alone who was 65 years of age or older. The average household size was 2.49 and the average family size was 2.99.\n\nIn the county, the population was spread out, with 25.40% under the age of 18, 10.30% from 18 to 24, 25.20% from 25 to 44, 21.80% from 45 to 64, and 17.30% who were 65 years of age or older. The median age was 38 years. For every 100 females there were 95.90 males. For every 100 females age 18 and over, there were 92.90 males.\n\nThe median income for a household in the county was $41,138, and the median income for a family was $48,243. Males had a median income of $33,530 versus $21,175 for females. The per capita income for the county was $18,921. About 4.20% of families and 6.60% of the population were below the poverty line, including 5.20% of those under age 18 and 8.10% of those age 65 or over.\n\nGovernment\n\nPresidential elections\nMcPherson county is often carried by Republican candidates. The last time a Democratic candidate has carried this county was in 1964 by Lyndon B. 
Johnson.\n\nLaws\nFollowing amendment to the Kansas Constitution in 1986, the county remained a prohibition, or \"dry\", county until 1996, when voters approved the sale of alcoholic liquor by the individual drink with a 30 percent food sales requirement.\n\nEducation\n\nColleges\n McPherson College in McPherson\n Bethany College in Lindsborg\n Central Christian College in McPherson\n\nUnified school districts\n Smoky Valley USD 400\n McPherson USD 418\n Canton-Galva USD 419\n Moundridge USD 423\n Inman USD 448\n\nSchool district office in neighboring county\n Goessel USD 411\n Little River-Windom USD 444\n\nMuseums\n Birger Sandzén Memorial Gallery in Lindsborg\n McCormick-Deering Days Museum in Inman\n McPherson Museum in McPherson\n Lindsborg Old Mill & Swedish Heritage Museum in Lindsborg\n Kansas Motorcycle Museum in Marquette\n\nCommunities\n\nCities\n\n Canton\n Galva\n Inman\n Lindsborg\n Marquette\n McPherson (county seat) \n Moundridge\n Windom\n\nUnincorporated communities\n† means a Census-Designated Place (CDP) by the United States Census Bureau.\n Conway\n Elyria†\n Groveland\n Johnstown\n New Gottland\n Roxbury†\n\nGhost towns\n Alta Mills\n Battle Hill\n Christian\n Doles Park\n Elivon\n King City\n Sweadal\n\nTownships\nMcPherson County is divided into twenty-five townships. The cities of Lindsborg and McPherson are considered governmentally independent and are excluded from the census figures for the townships. In the following table, the population center is the largest city (or cities) included in that township's population total, if it is of a significant size.\n\nSee also\n List of people from McPherson County, Kansas\n National Register of Historic Places listings in McPherson County, Kansas\n McPherson Valley Wetlands\n Maxwell Wildlife Refuge\n\nReferences\n\nNotes\n\nFurther reading\n\n Wheeler, Wayne Leland. \"An Analysis of Social Change in a Swedish-Immigrant Community: The Case of Lindsborg, Kansas.\" (PhD dissertation, University of Missouri-Columbia; ProQuest Dissertations Publishing, 1959. 5905657).\n\nCounty\n Through the Years: A Pictorial History of McPherson County; McPherson Sentinel' Heritage House Publishing Co; 1992.\n McPherson County First Courthouse Built About 1869 or 1870; Lindsborg News-Record; March 30, 1959.\n Pioneer Life and Lore of McPherson County, Kansas; Edna Nyquist; Democratic-Opinion Press; 1932.\n A History of the Church of the Brethren in Kansas (includes McPherson College history); Elmer LeRoy Craik; McPherson Daily; Republican Press; 397 pages; 1922.\n Portrait and Biographical Record of Dickinson, Saline, McPherson, and Marion Counties, Kansas; Chapman Bros; 614 pages; 1893.\n Standard Atlas of McPherson County, Kansas; Geo. A. Ogle & Co; 82 pages; 1921.\n Plat Book of McPherson County, Kansas; North West Publishing Co; 50 pages; 1903.\n Edwards' Atlas of McPherson County, Kansas; John P. Edwards; 51 pages; 1884.\n\nTrails\n The Story of the Marking of the Santa Fe Trail by the Daughters of the American Revolution in Kansas and the State of Kansas; Almira Cordry; Crane Co; 164 pages; 1915. (Download 4MB PDF eBook)\n The National Old Trails Road To Southern California, Part 1 (LA to KC); Automobile Club Of Southern California; 64 pages; 1916. (Download 6.8MB PDF eBook)\n\nMennonite Settlements\n Impact of Mennonite settlement on the cultural landscape of Kansas; Brenda Martin; Kansas State University; 1985/1988. 
\n Mennonite settlement : the relationship between the physical and cultural environment; Susan Movle; University of Utah; 1975/1886.\n Status of Mennonite women in Kansas in their church and home relationships; Eva Harshbarger; Bluffton College; 1925/1945.\n\nExternal links\n\nCounty\n \n McPherson County - Directory of Public Officials\nHistorical\n , from Hatteberg's People'' on KAKE TV news\nMaps\n McPherson County Maps: Current, Historic, KDOT\n Kansas Highway Maps: Current, Historic, KDOT\n Kansas Railroad Maps: Current, 1996, 1915, KDOT and Kansas Historical Society\n\n \nKansas counties\n1867 establishments in Kansas\n", "answers": ["The first white settlement in McPherson County was Fuller's Ranch, established by Charles O. Fuller."], "length": 1865, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "92bf77fcae3c753b768f1289eeb1e899fc8e75a14509a5f9"} {"input": "Who is the county seat of McPherson County?", "context": "McPherson County (standard abbreviation: MP) is a county located in the U.S. state of Kansas. As of the 2020 census, the county population was 30,223. The largest city and county seat is McPherson. The county is named for Civil War General James B. McPherson.\n\nHistory\n\nEarly history\n\nFor many millennia, the Great Plains of North America was inhabited by nomadic Native Americans. From the 16th century to 18th century, the Kingdom of France claimed ownership of large parts of North America. In 1762, after the French and Indian War, France secretly ceded New France to Spain, per the Treaty of Fontainebleau. In 1802, Spain returned most of the land to France, but keeping title to about 7,500 square miles.\n\nIn 1803, most of the land for modern day Kansas was acquired by the United States from France as part of the 828,000 square mile Louisiana Purchase for 2.83 cents per acre. In 1848, after the Mexican–American War, the Treaty of Guadalupe Hidalgo with Spain brought into the United States all or part of land for ten future states, including southwest Kansas. In 1854, the Kansas Territory was organized, then in 1861 Kansas became the 34th U.S. state.\n\n19th century\n\nFrom the 1820s to 1870s, the Santa Fe Trail passed through, what is now McPherson County. The trail entered the county, east of Canton, then south of Galva, then north of Inman, and west towards Lyons. In 1855, Charles O. Fuller established a ranch adjacent to the Running Turkey Creek Crossing about two miles south and one mile east of Galva. Fuller's Ranch provided accommodations for travelers on the Santa Fe Trail and was probably the first white settlement in McPherson County.\n\nPeketon County was established in 1860, by the passage of a bill by S. N. Wood: An act to establish Peketon County. Section 1. - That all that territory west of the sixth principal meridian and south of Township 16, in Kansas Territory, be and the same is hereby erected into a county, to be known by the name of Peketon County. On February 17, 1865, Peketon County was abolished, and McPherson County was made a part of Marion County, which extended from the west line of Chase County to the present western boundary of Kansas.\n\nIn 1868, Solomon Stephens and L. N. Holmberg were appointed Justices of the Peace—the first officers in what is now McPherson County. The next year (1869) occurred the first election for the township, now the county of McPherson. McPherson was regularly organized as a county in the spring of 1870, a mass meeting being held at Sweadal. 
Sweadal, the county seat thus selected, was located about one mile and a half southwest of the present site of Lindsborg. In September, however, the County Commissioners resolved to meet at the latter place, McPherson which had already been located some two years.\n\nIn April, 1873, a petition was filed for the county seat re-location. It was signed by 483 voters, and a special election was accordingly ordered for June 10. Upon that day, McPherson received 605 votes, New Gottland 325, King City 3 and Lindsborg 1; McPherson's majority over all, 276. In May the McPherson Town Company had offered, as an inducement for the location of the county seat at this point, the free use of rooms for ten years, and the donation of two squares of land on the town site. The offer was accepted the next month, the County Commissioners selecting blocks 56 and 65. Thus the county seat was established at McPherson and has remained since.\n\nAs early as 1875, city leaders of Marion held a meeting to consider a branch railroad from Florence. In 1878, Atchison, Topeka and Santa Fe Railway and parties from Marion County and McPherson County chartered the Marion and McPherson Railway Company. In 1879, a branch line was built from Florence to McPherson, in 1880 it was extended to Lyons, in 1881 it was extended to Ellinwood. The line was leased and operated by the Atchison, Topeka and Santa Fe Railway. The line from Florence to Marion, was abandoned in 1968. In 1992, the line from Marion to McPherson was sold to Central Kansas Railway. In 1993, after heavy flood damage, the line from Marion to McPherson was abandoned. The original branch line connected Florence, Marion, Canada, Hillsboro, Lehigh, Canton, Galva, McPherson, Conway, Windom, Little River, Mitchell, Lyons, Chase, then connected with the original AT&SF main line at Ellinwood.\n\nIn 1887, the Chicago, Kansas and Nebraska Railway extended its main line from Herington to Pratt. This main line connected Herington, Ramona, Tampa, Durham, Waldeck, Canton, Galva, McPherson, Groveland, Inman, Medora, Hutchinson, Whiteside, Partridge, Arlington, Langdon, Turon, Preston, Natrona, Pratt. In 1888, this main line was extended to Liberal. Later, this line was extended to Tucumcari, New Mexico and Santa Rosa, New Mexico, where it made a connection with the Southern Pacific from El Paso, Texas. The Chicago, Kansas and Nebraska Railway was absorbed by the Chicago, Rock Island and Pacific Railway. This line is also called the \"Golden State Route\".\n\n20th century\nThe National Old Trails Road, also known as the Ocean-to-Ocean Highway, was established in 1912, and was routed through Windom, Conway, McPherson.\n\nGeography\n\nAccording to the U.S. Census Bureau, the county has a total area of , of which is land and (0.3%) is water.\n\nAdjacent counties\n Saline County (north)\n Dickinson County (northeast)\n Marion County (east)\n Harvey County (southeast)\n Reno County (southwest)\n Rice County (west)\n Ellsworth County (northwest)\n\nMajor highways\n Interstate 135\n U.S. Route 56\n U.S. Route 81\n K-4\n K-61\n K-153\n\nDemographics\n\nThe McPherson Micropolitan Statistical Area includes all of McPherson County.\n\n2000 census\nAs of the census of 2000, there were 29,554 people, 11,205 households, and 7,966 families residing in the county. The population density was 33 people per square mile (13/km2). There were 11,830 housing units at an average density of 13 per square mile (5/km2). 
The racial makeup of the county was 96.53% White, 0.81% Black or African American, 0.34% Native American, 0.32% Asian, 0.06% Pacific Islander, 0.79% from other races, and 1.16% from two or more races. 1.94% of the population were Hispanic or Latino of any race. 37.1% were of German, 12.9% Swedish, 12.1% American, 6.7% English and 6.3% Irish ancestry according to Census 2000.\n\nThere were 11,205 households, out of which 33.00% had children under the age of 18 living with them, 62.50% were married couples living together, 6.00% had a female householder with no husband present, and 28.90% were non-families. 25.50% of all households were made up of individuals, and 11.80% had someone living alone who was 65 years of age or older. The average household size was 2.49 and the average family size was 2.99.\n\nIn the county, the population was spread out, with 25.40% under the age of 18, 10.30% from 18 to 24, 25.20% from 25 to 44, 21.80% from 45 to 64, and 17.30% who were 65 years of age or older. The median age was 38 years. For every 100 females there were 95.90 males. For every 100 females age 18 and over, there were 92.90 males.\n\nThe median income for a household in the county was $41,138, and the median income for a family was $48,243. Males had a median income of $33,530 versus $21,175 for females. The per capita income for the county was $18,921. About 4.20% of families and 6.60% of the population were below the poverty line, including 5.20% of those under age 18 and 8.10% of those age 65 or over.\n\nGovernment\n\nPresidential elections\nMcPherson county is often carried by Republican candidates. The last time a Democratic candidate has carried this county was in 1964 by Lyndon B. Johnson.\n\nLaws\nFollowing amendment to the Kansas Constitution in 1986, the county remained a prohibition, or \"dry\", county until 1996, when voters approved the sale of alcoholic liquor by the individual drink with a 30 percent food sales requirement.\n\nEducation\n\nColleges\n McPherson College in McPherson\n Bethany College in Lindsborg\n Central Christian College in McPherson\n\nUnified school districts\n Smoky Valley USD 400\n McPherson USD 418\n Canton-Galva USD 419\n Moundridge USD 423\n Inman USD 448\n\nSchool district office in neighboring county\n Goessel USD 411\n Little River-Windom USD 444\n\nMuseums\n Birger Sandzén Memorial Gallery in Lindsborg\n McCormick-Deering Days Museum in Inman\n McPherson Museum in McPherson\n Lindsborg Old Mill & Swedish Heritage Museum in Lindsborg\n Kansas Motorcycle Museum in Marquette\n\nCommunities\n\nCities\n\n Canton\n Galva\n Inman\n Lindsborg\n Marquette\n McPherson (county seat) \n Moundridge\n Windom\n\nUnincorporated communities\n† means a Census-Designated Place (CDP) by the United States Census Bureau.\n Conway\n Elyria†\n Groveland\n Johnstown\n New Gottland\n Roxbury†\n\nGhost towns\n Alta Mills\n Battle Hill\n Christian\n Doles Park\n Elivon\n King City\n Sweadal\n\nTownships\nMcPherson County is divided into twenty-five townships. The cities of Lindsborg and McPherson are considered governmentally independent and are excluded from the census figures for the townships. 
In the following table, the population center is the largest city (or cities) included in that township's population total, if it is of a significant size.\n\nSee also\n List of people from McPherson County, Kansas\n National Register of Historic Places listings in McPherson County, Kansas\n McPherson Valley Wetlands\n Maxwell Wildlife Refuge\n\nReferences\n\nNotes\n\nFurther reading\n\n Wheeler, Wayne Leland. \"An Analysis of Social Change in a Swedish-Immigrant Community: The Case of Lindsborg, Kansas.\" (PhD dissertation, University of Missouri-Columbia; ProQuest Dissertations Publishing, 1959. 5905657).\n\nCounty\n Through the Years: A Pictorial History of McPherson County; McPherson Sentinel' Heritage House Publishing Co; 1992.\n McPherson County First Courthouse Built About 1869 or 1870; Lindsborg News-Record; March 30, 1959.\n Pioneer Life and Lore of McPherson County, Kansas; Edna Nyquist; Democratic-Opinion Press; 1932.\n A History of the Church of the Brethren in Kansas (includes McPherson College history); Elmer LeRoy Craik; McPherson Daily; Republican Press; 397 pages; 1922.\n Portrait and Biographical Record of Dickinson, Saline, McPherson, and Marion Counties, Kansas; Chapman Bros; 614 pages; 1893.\n Standard Atlas of McPherson County, Kansas; Geo. A. Ogle & Co; 82 pages; 1921.\n Plat Book of McPherson County, Kansas; North West Publishing Co; 50 pages; 1903.\n Edwards' Atlas of McPherson County, Kansas; John P. Edwards; 51 pages; 1884.\n\nTrails\n The Story of the Marking of the Santa Fe Trail by the Daughters of the American Revolution in Kansas and the State of Kansas; Almira Cordry; Crane Co; 164 pages; 1915. (Download 4MB PDF eBook)\n The National Old Trails Road To Southern California, Part 1 (LA to KC); Automobile Club Of Southern California; 64 pages; 1916. (Download 6.8MB PDF eBook)\n\nMennonite Settlements\n Impact of Mennonite settlement on the cultural landscape of Kansas; Brenda Martin; Kansas State University; 1985/1988. \n Mennonite settlement : the relationship between the physical and cultural environment; Susan Movle; University of Utah; 1975/1886.\n Status of Mennonite women in Kansas in their church and home relationships; Eva Harshbarger; Bluffton College; 1925/1945.\n\nExternal links\n\nCounty\n \n McPherson County - Directory of Public Officials\nHistorical\n , from Hatteberg's People'' on KAKE TV news\nMaps\n McPherson County Maps: Current, Historic, KDOT\n Kansas Highway Maps: Current, Historic, KDOT\n Kansas Railroad Maps: Current, 1996, 1915, KDOT and Kansas Historical Society\n\n \nKansas counties\n1867 establishments in Kansas\nPopulated places established in 1867", "answers": ["McPherson."], "length": 1852, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "ce720011515773e6bd7b4b355c52a7215ad2920f18a1ec14"} {"input": "Where is McPherson County located?", "context": "McPherson County (standard abbreviation: MP) is a county located in the U.S. state of Kansas. As of the 2020 census, the county population was 30,223. The largest city and county seat is McPherson. The county is named for Civil War General James B. McPherson.\n\nHistory\n\nEarly history\n\nFor many millennia, the Great Plains of North America was inhabited by nomadic Native Americans. From the 16th century to 18th century, the Kingdom of France claimed ownership of large parts of North America. In 1762, after the French and Indian War, France secretly ceded New France to Spain, per the Treaty of Fontainebleau. 
In 1802, Spain returned most of the land to France, but keeping title to about 7,500 square miles.\n\nIn 1803, most of the land for modern day Kansas was acquired by the United States from France as part of the 828,000 square mile Louisiana Purchase for 2.83 cents per acre. In 1848, after the Mexican–American War, the Treaty of Guadalupe Hidalgo with Spain brought into the United States all or part of land for ten future states, including southwest Kansas. In 1854, the Kansas Territory was organized, then in 1861 Kansas became the 34th U.S. state.\n\n19th century\n\nFrom the 1820s to 1870s, the Santa Fe Trail passed through, what is now McPherson County. The trail entered the county, east of Canton, then south of Galva, then north of Inman, and west towards Lyons. In 1855, Charles O. Fuller established a ranch adjacent to the Running Turkey Creek Crossing about two miles south and one mile east of Galva. Fuller's Ranch provided accommodations for travelers on the Santa Fe Trail and was probably the first white settlement in McPherson County.\n\nPeketon County was established in 1860, by the passage of a bill by S. N. Wood: An act to establish Peketon County. Section 1. - That all that territory west of the sixth principal meridian and south of Township 16, in Kansas Territory, be and the same is hereby erected into a county, to be known by the name of Peketon County. On February 17, 1865, Peketon County was abolished, and McPherson County was made a part of Marion County, which extended from the west line of Chase County to the present western boundary of Kansas.\n\nIn 1868, Solomon Stephens and L. N. Holmberg were appointed Justices of the Peace—the first officers in what is now McPherson County. The next year (1869) occurred the first election for the township, now the county of McPherson. McPherson was regularly organized as a county in the spring of 1870, a mass meeting being held at Sweadal. Sweadal, the county seat thus selected, was located about one mile and a half southwest of the present site of Lindsborg. In September, however, the County Commissioners resolved to meet at the latter place, McPherson which had already been located some two years.\n\nIn April, 1873, a petition was filed for the county seat re-location. It was signed by 483 voters, and a special election was accordingly ordered for June 10. Upon that day, McPherson received 605 votes, New Gottland 325, King City 3 and Lindsborg 1; McPherson's majority over all, 276. In May the McPherson Town Company had offered, as an inducement for the location of the county seat at this point, the free use of rooms for ten years, and the donation of two squares of land on the town site. The offer was accepted the next month, the County Commissioners selecting blocks 56 and 65. Thus the county seat was established at McPherson and has remained since.\n\nAs early as 1875, city leaders of Marion held a meeting to consider a branch railroad from Florence. In 1878, Atchison, Topeka and Santa Fe Railway and parties from Marion County and McPherson County chartered the Marion and McPherson Railway Company. In 1879, a branch line was built from Florence to McPherson, in 1880 it was extended to Lyons, in 1881 it was extended to Ellinwood. The line was leased and operated by the Atchison, Topeka and Santa Fe Railway. The line from Florence to Marion, was abandoned in 1968. In 1992, the line from Marion to McPherson was sold to Central Kansas Railway. In 1993, after heavy flood damage, the line from Marion to McPherson was abandoned. 
The original branch line connected Florence, Marion, Canada, Hillsboro, Lehigh, Canton, Galva, McPherson, Conway, Windom, Little River, Mitchell, Lyons, Chase, then connected with the original AT&SF main line at Ellinwood.\n\nIn 1887, the Chicago, Kansas and Nebraska Railway extended its main line from Herington to Pratt. This main line connected Herington, Ramona, Tampa, Durham, Waldeck, Canton, Galva, McPherson, Groveland, Inman, Medora, Hutchinson, Whiteside, Partridge, Arlington, Langdon, Turon, Preston, Natrona, Pratt. In 1888, this main line was extended to Liberal. Later, this line was extended to Tucumcari, New Mexico and Santa Rosa, New Mexico, where it made a connection with the Southern Pacific from El Paso, Texas. The Chicago, Kansas and Nebraska Railway was absorbed by the Chicago, Rock Island and Pacific Railway. This line is also called the \"Golden State Route\".\n\n20th century\nThe National Old Trails Road, also known as the Ocean-to-Ocean Highway, was established in 1912, and was routed through Windom, Conway, McPherson.\n\nGeography\n\nAccording to the U.S. Census Bureau, the county has a total area of , of which is land and (0.3%) is water.\n\nAdjacent counties\n Saline County (north)\n Dickinson County (northeast)\n Marion County (east)\n Harvey County (southeast)\n Reno County (southwest)\n Rice County (west)\n Ellsworth County (northwest)\n\nMajor highways\n Interstate 135\n U.S. Route 56\n U.S. Route 81\n K-4\n K-61\n K-153\n\nDemographics\n\nThe McPherson Micropolitan Statistical Area includes all of McPherson County.\n\n2000 census\nAs of the census of 2000, there were 29,554 people, 11,205 households, and 7,966 families residing in the county. The population density was 33 people per square mile (13/km2). There were 11,830 housing units at an average density of 13 per square mile (5/km2). The racial makeup of the county was 96.53% White, 0.81% Black or African American, 0.34% Native American, 0.32% Asian, 0.06% Pacific Islander, 0.79% from other races, and 1.16% from two or more races. 1.94% of the population were Hispanic or Latino of any race. 37.1% were of German, 12.9% Swedish, 12.1% American, 6.7% English and 6.3% Irish ancestry according to Census 2000.\n\nThere were 11,205 households, out of which 33.00% had children under the age of 18 living with them, 62.50% were married couples living together, 6.00% had a female householder with no husband present, and 28.90% were non-families. 25.50% of all households were made up of individuals, and 11.80% had someone living alone who was 65 years of age or older. The average household size was 2.49 and the average family size was 2.99.\n\nIn the county, the population was spread out, with 25.40% under the age of 18, 10.30% from 18 to 24, 25.20% from 25 to 44, 21.80% from 45 to 64, and 17.30% who were 65 years of age or older. The median age was 38 years. For every 100 females there were 95.90 males. For every 100 females age 18 and over, there were 92.90 males.\n\nThe median income for a household in the county was $41,138, and the median income for a family was $48,243. Males had a median income of $33,530 versus $21,175 for females. The per capita income for the county was $18,921. About 4.20% of families and 6.60% of the population were below the poverty line, including 5.20% of those under age 18 and 8.10% of those age 65 or over.\n\nGovernment\n\nPresidential elections\nMcPherson county is often carried by Republican candidates. 
The last time a Democratic candidate has carried this county was in 1964 by Lyndon B. Johnson.\n\nLaws\nFollowing amendment to the Kansas Constitution in 1986, the county remained a prohibition, or \"dry\", county until 1996, when voters approved the sale of alcoholic liquor by the individual drink with a 30 percent food sales requirement.\n\nEducation\n\nColleges\n McPherson College in McPherson\n Bethany College in Lindsborg\n Central Christian College in McPherson\n\nUnified school districts\n Smoky Valley USD 400\n McPherson USD 418\n Canton-Galva USD 419\n Moundridge USD 423\n Inman USD 448\n\nSchool district office in neighboring county\n Goessel USD 411\n Little River-Windom USD 444\n\nMuseums\n Birger Sandzén Memorial Gallery in Lindsborg\n McCormick-Deering Days Museum in Inman\n McPherson Museum in McPherson\n Lindsborg Old Mill & Swedish Heritage Museum in Lindsborg\n Kansas Motorcycle Museum in Marquette\n\nCommunities\n\nCities\n\n Canton\n Galva\n Inman\n Lindsborg\n Marquette\n McPherson (county seat) \n Moundridge\n Windom\n\nUnincorporated communities\n† means a Census-Designated Place (CDP) by the United States Census Bureau.\n Conway\n Elyria†\n Groveland\n Johnstown\n New Gottland\n Roxbury†\n\nGhost towns\n Alta Mills\n Battle Hill\n Christian\n Doles Park\n Elivon\n King City\n Sweadal\n\nTownships\nMcPherson County is divided into twenty-five townships. The cities of Lindsborg and McPherson are considered governmentally independent and are excluded from the census figures for the townships. In the following table, the population center is the largest city (or cities) included in that township's population total, if it is of a significant size.\n\nSee also\n List of people from McPherson County, Kansas\n National Register of Historic Places listings in McPherson County, Kansas\n McPherson Valley Wetlands\n Maxwell Wildlife Refuge\n\nReferences\n\nNotes\n\nFurther reading\n\n Wheeler, Wayne Leland. \"An Analysis of Social Change in a Swedish-Immigrant Community: The Case of Lindsborg, Kansas.\" (PhD dissertation, University of Missouri-Columbia; ProQuest Dissertations Publishing, 1959. 5905657).\n\nCounty\n Through the Years: A Pictorial History of McPherson County; McPherson Sentinel' Heritage House Publishing Co; 1992.\n McPherson County First Courthouse Built About 1869 or 1870; Lindsborg News-Record; March 30, 1959.\n Pioneer Life and Lore of McPherson County, Kansas; Edna Nyquist; Democratic-Opinion Press; 1932.\n A History of the Church of the Brethren in Kansas (includes McPherson College history); Elmer LeRoy Craik; McPherson Daily; Republican Press; 397 pages; 1922.\n Portrait and Biographical Record of Dickinson, Saline, McPherson, and Marion Counties, Kansas; Chapman Bros; 614 pages; 1893.\n Standard Atlas of McPherson County, Kansas; Geo. A. Ogle & Co; 82 pages; 1921.\n Plat Book of McPherson County, Kansas; North West Publishing Co; 50 pages; 1903.\n Edwards' Atlas of McPherson County, Kansas; John P. Edwards; 51 pages; 1884.\n\nTrails\n The Story of the Marking of the Santa Fe Trail by the Daughters of the American Revolution in Kansas and the State of Kansas; Almira Cordry; Crane Co; 164 pages; 1915. (Download 4MB PDF eBook)\n The National Old Trails Road To Southern California, Part 1 (LA to KC); Automobile Club Of Southern California; 64 pages; 1916. (Download 6.8MB PDF eBook)\n\nMennonite Settlements\n Impact of Mennonite settlement on the cultural landscape of Kansas; Brenda Martin; Kansas State University; 1985/1988. 
\n Mennonite settlement : the relationship between the physical and cultural environment; Susan Movle; University of Utah; 1975/1886.\n Status of Mennonite women in Kansas in their church and home relationships; Eva Harshbarger; Bluffton College; 1925/1945.\n\nExternal links\n\nCounty\n \n McPherson County - Directory of Public Officials\nHistorical\n , from Hatteberg's People'' on KAKE TV news\nMaps\n McPherson County Maps: Current, Historic, KDOT\n Kansas Highway Maps: Current, Historic, KDOT\n Kansas Railroad Maps: Current, 1996, 1915, KDOT and Kansas Historical Society\n\n \nKansas counties\n1867 establishments in Kansas\n", "answers": ["McPherson County is located in the U.S. state of Kansas."], "length": 1853, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "71902c8027dca26b265f12709ec21224b5abe8b9ef5750fd"} {"input": "Who was Brooksley Elizabeth's first husband?", "context": "Brooksley Elizabeth Born (born August 27, 1940) is an American attorney and former public official who, from August 26, 1996, to June 1, 1999, was chair of the Commodity Futures Trading Commission (CFTC), the federal agency which oversees the U.S. futures and commodity options markets. During her tenure on the CFTC, Born lobbied Congress and the President to give the CFTC oversight of off-exchange markets for derivatives, in addition to its role with respect to exchange-traded derivatives, but her warnings were ignored or dismissed, and her calls for reform resisted by other regulators.Goodman, Peter S. The Reckoning - Taking Hard New Look at a Greenspan Legacy, The New York Times, October 9, 2008. Born resigned as chairperson on June 1, 1999, shortly after Congress passed legislation prohibiting her agency from regulating derivatives.\n\nIn 2009, Born received the John F. Kennedy Profiles in Courage Award, along with Sheila Bair of the Federal Deposit Insurance Corporation, in recognition of the \"political courage she demonstrated in sounding early warnings about conditions that contributed\" to the 2007-08 financial crisis.\n\nEarly life and education\nBorn graduated from Abraham Lincoln High School (San Francisco, California) at the age of 16. She then attended Stanford University, where she majored in English and was graduated with the class of 1961. She initially wanted to become a doctor, but a guidance counsellor at Stanford advised her against medicine, so she majored in English literature instead.\n\nShe then attended Stanford Law School, one of only seven women in her class. She was the first female student ever to be named president of the Stanford Law Review. She received the \"Outstanding Senior\" award and graduated as valedictorian of the class of 1964.\n\nLegal career\nImmediately after law school Born was selected as a law clerk to judge Henry Edgerton of the U.S. Court of Appeals for the District of Columbia Circuit. It was during this time that she met her first husband, Jacob C. Landau, who was a journalist covering the Federal courts at the time. Following her clerkship, she became an associate at the Washington, D.C.-based international law firm of Arnold & Porter. Born was attracted to Arnold & Porter because it was one of the few major law firms to have a woman partner at that time, Carolyn Agger, who was the head of the tax practice. Born took a two-year leave of absence from Arnold & Porter to accompany her first husband to Boston, where he had received a fellowship. 
During that time she worked as a research assistant to law professor Alan Dershowitz.\n\nBorn's early career at Arnold & Porter focused on international trade law, in which she represented a number of Swiss industries and the government of Switzerland. She developed a practice representing clients in numerous complex litigation and arbitration cases involving financial market transactions. Among her high-profile cases was the matter of the Hunt Brothers attempt to corner the market in silver in the 1970s. She made partner at Arnold & Porter, after moving to a three-day schedule to help raise her second child, and eventually rose to be the head of the firm's derivatives practice.\n\nBorn was among the first female attorneys to systematically address inequities regarding how the laws treated women. Born and another female lawyer, Marna Tucker, taught what is considered to have been the first \"Women and the Law\" course at Catholic University’s Columbus School of Law. The class exclusively concerned prejudicial treatment of women under the laws of the United States, past and present. Born and Tucker were surprised to discover that there was no textbook on the issue at the time. Born is also one of the co-founders of the National Women's Law Center. Born also helped rewrite the American Bar Association rules to make it possible for more women and minorities to sit on federal bench.\n\nDuring her long legal career, and into her retirement, Born did much pro bono and other types of volunteer work. She was active in the American Bar Association, the largest professional organization of lawyers in the United States. Initially Born was named a member of the governing council of the ABA's Individual Rights Section, eventually becoming chairperson. Born and Tucker founded the ABA Women's Caucus, the first organization of female lawyers in the ABA. She held several other senior positions in the ABA, including being named the first woman member of the ABA's Standing Committee on the Federal Judiciary. As a member of the Judiciary Committee, Born provided testimony and opinion on persons nominated for federal judgeships. In 1980 she was named chair of the committee. As chair of the committee, Born was invited to address the U.S. Congress regarding the nomination of Judge Sandra Day O'Connor to the U.S. Supreme Court.\n\nIn 1993, Born's name was floated as a possible candidate for Attorney General of the United States, but Janet Reno was nominated.\n\nIn July 2009, Nancy Pelosi appointed Brooksley Born as a commissioner to the Financial Crisis Inquiry Commission (FCIC).\n\nBorn and the OTC derivatives market\nBorn was appointed to the CFTC on April 15, 1994, by President Bill Clinton. Due to litigation against Bankers Trust Company by Procter and Gamble and other corporate clients, Born and her team at the CFTC sought comments on the regulation of over-the-counter derivatives, a first step in the process of writing CFTC regulations to supplement the existing regulations of the Federal Reserve System, the Options Clearing Corporation, and the National Association of Insurance Commissioners. Born was particularly concerned about swaps, financial instruments that are traded over the counter between banks, insurance companies or other funds or companies, and thus have no transparency except to the two counterparties and the counterparties' regulators, if any. CFTC regulation was strenuously opposed by Federal Reserve chairman Alan Greenspan, and by Treasury Secretaries Robert Rubin and Lawrence Summers. 
On May 7, 1998, former SEC Chairman Arthur Levitt joined Rubin and Greenspan in objecting to the issuance of the CFTC's concept release. Their response dismissed Born's analysis and focused on the hypothetical possibility that CFTC regulation of swaps and other OTC derivative instruments could create a \"legal uncertainty\" regarding such financial instruments, hypothetically reducing the value of the instruments. They argued that the imposition of regulatory costs would \"stifle financial innovation\" and encourage financial capital to transfer its transactions offshore. The disagreement between Born and the Executive Office's top economic policy advisors has been described not only as a classic Washington turf war, but also a war of ideologies, insofar as it is possible to argue that Born's actions were consistent with Keynesian and neoclassical economics while Greenspan, Rubin, Levitt, and Summers consistently espoused neoliberal, and neoconservative policies.\n\nIn 1998, a trillion-dollar hedge fund called Long Term Capital Management (LTCM) was near collapse. Using mathematical models to calculate debt risk, LTCM used derivatives to leverage $5 billion into more than $1 trillion, doing business with fifteen of Wall Street's largest financial institutions. The derivative transactions were not regulated, nor were investors able to evaluate LTCM's exposures. Born stated, \"I thought that LTCM was exactly what I had been worried about\". In the last weekend of September 1998, the President's working group was told that the entire American economy hung in the balance. After intervention by the Federal Reserve, the crisis was averted. In congressional hearings into the crisis, Greenspan acknowledged that language had been introduced into an agriculture bill that would prevent CFTC from regulating the derivatives which were at the center of the crisis that threatened the US economy. U.S. Representative Maurice Hinchey (D-NY) asked \"How many more failures do you think we'd have to have before some regulation in this area might be appropriate?\" In response, Greenspan brushed aside the substance of Born's warnings with the simple assertion that \"the degree of supervision of regulation of the over-the-counter derivatives market is quite adequate to maintain a degree of stability in the system\". Born's warning was that there wasn't any regulation of them. Born's chief of staff, Michael Greenberger summed up Greenspan's position this way: \"Greenspan didn't believe that fraud was something that needed to be enforced, and he assumed she probably did. And of course, she did.\" Under heavy pressure from the financial lobby, legislation prohibiting regulation of derivatives by Born's agency was passed by the Congress. Born resigned on June 1, 1999.\n\nThe derivatives market continued to grow yearly throughout both terms of George W. Bush's administration. On September 15, 2008, the bankruptcy of Lehman Brothers forced a broad recognition of a financial crisis in both the US and world capital markets. As Lehman Brothers' failure temporarily reduced financial capital's confidence, a number of newspaper articles and television programs suggested that the failure's possible causes included the conflict between the CFTC and the other regulators.Faiola, Anthony, Nakashima, Ellen and Drew, Jill. 
The Crash: Risk and Regulation - What Went Wrong, The Washington Post, October 15, 2008.\n\nBorn declined to publicly comment on the unfolding 2008 crisis until March 2009, when she said: \"The market grew so enormously, with so little oversight and regulation, that it made the financial crisis much deeper and more pervasive than it otherwise would have been.\" She also lamented the influence of Wall Street lobbyists on the process and the refusal of regulators to discuss even modest reforms.\n\nAn October 2009 Frontline documentary titled \"The Warning\" described Born's thwarted efforts to regulate and bring transparency to the derivatives market, and the continuing opposition thereto. The program concluded with an excerpted interview with Born sounding another warning: \"I think we will have continuing danger from these markets and that we will have repeats of the financial crisis -- may differ in details but there will be significant financial downturns and disasters attributed to this regulatory gap, over and over, until we learn from experience.\"\n\nIn 2009 Born, along with Sheila Bair of the FDIC, was awarded the John F. Kennedy Profiles in Courage Award in recognition of the \"political courage she demonstrated in sounding early warnings about conditions that contributed\" to the 2007-08 financial crisis. According to Caroline Kennedy, \"Brooksley Born recognized that the financial security of all Americans was being put at risk by the greed, negligence and opposition of powerful and well connected interests.... The catastrophic financial events of recent months have proved them [Born and Sheila Bair] right.\" One member of the President's working group had a change of heart about Brooksley Born. SEC Chairman Arthur Levitt stated \"I've come to know her as one of the most capable, dedicated, intelligent and committed public servants that I have ever come to know\", adding that \"I could have done much better. I could have made a difference\" in response to her warnings.\n\nIn 2010, a documentary film Inside Job further alleged that derivatives regulation was ineffective from the Clinton administration on. Along with fellow whistleblower, former IMF Chief Economist Raghuram Rajan, who was also scorned by the economic establishment, Brooksley Born was cited as one of the authorities arguing that financial derivatives increase economic risk.\n\n Personal life \nBorn is married to Alexander E. Bennett (also retired from Arnold & Porter). She has five adult children - two from a previous marriage to Jacob Landau and three stepchildren. Notably, Born was named a partner at Arnold & Porter while working part-time so she could raise her two young children. When both of her children were school-age, Born returned to practice full-time.\n\nReferences\n\nExternal links\nAttorney profile at Arnold & Porter\nBrooksley Born (2009 Winner) of the Profiles in Courage Award, with acceptance speech transcript and NECN video\n\nProfile at MarketsWiki\nSpeeches and statements\n\"Testimony Of Brooksley Born Chairperson of the CFTC Concerning The Over-The-Counter Derivatives Market\", before the House Committee On Banking And Financial Services, July 24, 1998.\n\"The Lessons of Long Term Capital Management L.P.\", Remarks of Brooksley Born, Chairperson of the CFTC, Chicago-Kent-IIT Commodities Law Institute, Chicago, Illinois, October 15, 1998.\n Interview: Brooksley Born for \"PBS Frontline: The Warning\", PBS, (streaming VIDEO 1 hour), October 20, 2009.\nArticles\nManuel Roig-Franzia. 
\"Credit Crisis Cassandra:Brooksley Born's Unheeded Warning Is a Rueful Echo 10 Years On\", The Washington Post, May 26, 2009\n Taibbi, Matt. \"The Great American Bubble Machine\", Rolling Stone'', July 9–23, 2009\n\n1940 births\nAmerican women lawyers\nArnold & Porter people\nClinton administration personnel\nColumbus School of Law faculty\nCommodity Futures Trading Commission personnel\nHeads of United States federal agencies\nLawyers from San Francisco\nLiving people\nStanford Law School alumni\n21st-century American women\nStanford University alumni.", "answers": ["Jacob C. Landau."], "length": 2085, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "470018af720bc15decf8f7a9643250c9a6548c8efeb394cd"} {"input": "What models were used for dialect identification?", "context": "Paper Info\n\nTitle: Two-stage Pipeline for Multilingual Dialect Detection\nPublish Date: Unkown\nAuthor List: Ankit Vaidya (from Pune Institute of Computer Technology), Aditya Kane (from Pune Institute of Computer Technology)\n\nFigure\n\nFigure 1: Class distribution of dialects\nFigure 2: System diagram for dialect classification.The LID classifies the input into one of 3 languages.The sample is then further classified into dialects by language specific models.\nFigure 3: Confusion matrix of 9-way classification.Note that rows are normalized according to the number of samples is that class.\nOur complete results for Track-1 using the two-stage dialect detection pipeline.Model-* denotes the language of the models used for the experiments.\nPerformance on Track-1 validation dataset of individual models used in the two-stage pipeline.\"Lg\" stands for language of the model used.\nComparative results of two-way classification using the finetuned (F.T.) predictions and predictions adapted from three-way classification models.\n\nabstract\n\nDialect Identification is a crucial task for localizing various Large Language Models. This paper outlines our approach to the VarDial 2023 DSL-TL shared task. Here we have to identify three or two dialects from three languages each which results in a 9-way classification for Track-1 and 6-way classification for Track-2 respectively.\nOur proposed approach consists of a two-stage system and outperforms other participants' systems and previous works in this domain. We achieve a score of 58.54% for Track-1 and 85.61% for Track-2. Our codebase is available publicly 1 .\n\nIntroduction\n\nLanguage has been the primary mode of communication for humans since the pre-historic ages. Studies have explored the evolution of language and outlined mathematical models that govern the intricacies of natural language . Inevitably, as humans established civilization in various parts of the world, this language was modified by, and for the group of people occupied by that particular geographical region.\nThis gave rise to multiple national dialects of the same language. The VarDial workshop (colocated with EACL 2023) explores various dialects and variations of the same language. We participated in the Discriminating Between Similar Languages -True Labels (DSL-TL) shared task. In this task, the participants were provided with data from three languages, with each language having three varieties.\nThis shared task consisted of two tracks -Track-1 featuring nine-way classification and Track-2 featuring six-way classification. The second track included two particular national dialects of each language (eg. 
American English and British English), and the first track had one general We ranked 1 st in both of the tracks.\nMoreover, we beat the next best submission by a margin of 4.5% in the first task and 5.6% in the second task.We were the only team to surpass the organizer baseline scores. We present our winning solution in this paper. We used an end-to-end deep learning pipeline which consisted of a language identification model and three language-specific models, one for each language.\nWe converged upon the best combination by doing an elaborate analysis of various models available. Furthermore, in this work we also analyze the performance of the pipeline as a whole and also provide an ablation study. Lastly, we provide some future directions in this area of research.\n\nRelated Work\n\nThe present literature encompasses various aspects of dialect identification. We study this from three perspectives: large language models, language identification and dialect classification problems.\n\nLarge Language Models\n\nThe success of transformers and BERT based models was inevitable since the initial boom of the transformer 2017) model. In recent years, many other architectures like RoBERTa and ELECTRA have further pushed the state-of-the-art in this domain. Moreover, autoregressive models like GPT and GPT-2 have also shown their prowess.\nMultilingual versions of RoBERTA, namely XLM-RoBERTa are also available. Lastly, language specific models like Spanish BERT (la Rosa y Eduardo G. Ponferrada y Manu Romero y Paulo Villegas y Pablo González de Prado Salas y María Grandury, 2022) and Portuguese BERT are available as well. Our winning solution makes use of these large language models trained on specific languages.\n\nLanguage Identification Models\n\nMany multilingual language identification models have been developed in order to classify the language of the input sentence beforehand. Even though the initial works used n-gram models and generative mixture models or even conditional random fields and other classical machine learning methods like naive bayes , modern methods have shifted to the use of deep learning for language identification .\nRecent works have mainly focused on deep learning based language identification, where handling codemixed data is a big challenge in the domain. For our experiments, we use a version of XLM-RoBERTa finetuned on a language identification dataset 2 . This model has near-perfect test accuracy of 99.6%.\n\nDialect Classification\n\nDialect classification has been previously solved using statistical methods like Gaussian Mixture Models and Frame Selection Decoding or Support Vector Machines (SVM) . It has been explored relatively sparsely, mostly in the case for local languages . Deep learning approaches have been explored in previous editions of the VarDial workshop shared tasks and otherwise .\nDialect classification was also explored previously as a part of other shared tasks . We want to stress that given the multilingual nature of the dataset, using the present methods directly was not an option. In our work, although we take inspiration from the previous works, we propose a novel system that surpasses the performance of the previous systems by a large margin.\n\nData\n\nThe dataset We observed that the class PT-BR had the most number of samples (2,724) and the class EN had the least number of samples (349), and thus the imbalance ratio was almost 1:8. We have illustrated the data distribution in Figure . 
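As a practical aside on the first stage described above: the language identification step can be reproduced with an off-the-shelf checkpoint through the HuggingFace library. The sketch below is illustrative only — the checkpoint name (papluca/xlm-roberta-base-language-detection, a public XLM-RoBERTa LID model covering 20 languages) is an assumption standing in for the unspecified model in the paper's footnote, and the sample sentences are invented.

```python
# Illustrative only: first-stage language identification (LID) with an
# off-the-shelf XLM-RoBERTa checkpoint from the Hugging Face Hub.
# The model name is a stand-in for the checkpoint referenced in the paper's footnote.
from transformers import pipeline

lid = pipeline("text-classification",
               model="papluca/xlm-roberta-base-language-detection")

def detect_language(sentence: str) -> str:
    # Returns an ISO-style code such as "en", "es" or "pt".
    return lid(sentence, truncation=True)[0]["label"]

# Group incoming samples by predicted language before dialect classification.
samples = ["I watched the game at the pub.", "Assisti ao jogo no bar."]
by_language = {}
for text in samples:
    by_language.setdefault(detect_language(text), []).append(text)
```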
We tried to mitigate this imbalance using over-sampling and weighted sampling methods.\nHowever, the improved data sampling method did not affect the performance.\n\nSystem Description\n\nThis was a problem of multi-class classification having 9 classes for Track-1 and 6 classes for Track-2. The samples were belonging to 3 languages having 3 varieties each, so the classification pipeline was made in 2 stages. The Language Identification (LID) model which is the first stage classifies the sentence into 3 languages: English (EN), Spanish (ES) and Portuguese (PT).\nThe LID is a pretrained XLM-RoBERTa that is fine-tuned for the task of language identification. It is able to classify the input sentence into 20 languages. We classify and separate the samples according to their language. The samples corresponding to the specific languages are then fed into the language specific models for dialect identification.\nFor dialect identification we have used models like BERT and RoBERTa with a linear layer connected to the pooler output of the models. Then fine-tuning is done on the models for dialect identification using the samples corresponding to the specific languages. For the task of dialect identification we experimented with several pretrained models like XLM-RoBERTa, BERT, ELECTRA, GPT-2 and RoBERTa.\nAll models were fine-tuned for 20 epochs with a learning rate of 1e-6 and weight decay 1e-6 with a batch size of 8. The best performing model checkpoint was chosen according to the epoch-wise validation macro-F1 score. 5 Experiments and Results\n\nExperiments using Large Language Models\n\nFor the task of Dialect Identification we have tried various language specific models like XLM-RoBERTa, BERT, ELECTRA, RoBERTa and GPT- 2. The base variant of all these models were used and all the models were used through the Hugging-Face library. The pooler output of these models was passed through a linear layer and the models were fine-tuned.\nFirst, we experimented with different models for Track-1. All the models were trained for 20 epochs with learning rate 1e-6, weight decay 1e-6 and a batch size of 8. We used XLM-RoBERTa as the baseline for all 3 languages. The best performing models for the English language were RoBERTa and BERT whereas GPT-2 was the worst performing.\nSimilarly the language specific versions of RoBERTa and BERT performed well for the Spanish and Portuguese respectively. Overall the worst performing model was GPT-2 across all 3 languages. The validation F1 scores are present in Table . The two best-performing models for every language were chosen for Track-2.\nThe same procedure as specified above was used and the F1 scores are present in Table . The train and validation F1 scores for 2-class classification are higher for all models as compared to the F1 score of the same models for 3-class classification. This was mainly due to the poor representation and accuracy of classification of the third class.\nWe observed symptoms of overfitting in all models after 12-15 epochs and the best validation F1 score was obtained in the range of 4-8 epochs.\n\nLID experiments\n\nThe pipeline for dialect identification is divided into two parts as the sentences in the dataset belong to different languages. The stages are described in Section 4. 
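For concreteness, the language-specific dialect classifiers described above — a pretrained encoder with a linear layer connected to the pooler output, fine-tuned with learning rate 1e-6, weight decay 1e-6 and batch size 8 for up to 20 epochs, keeping the checkpoint with the best validation macro-F1 — could look roughly like the following. This is a hedged reconstruction, not the authors' released code; the encoder name, label count and data loader are assumptions for the English branch.

```python
# Hedged sketch of one language-specific dialect classifier; not the authors' code.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class DialectClassifier(nn.Module):
    def __init__(self, encoder_name="roberta-base", num_dialects=3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        # Linear layer connected to the pooler output, as in the system description.
        self.head = nn.Linear(self.encoder.config.hidden_size, num_dialects)

    def forward(self, input_ids, attention_mask):
        pooled = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).pooler_output
        return self.head(pooled)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")   # assumed English encoder
model = DialectClassifier("roberta-base", num_dialects=3)   # e.g. EN / EN-GB / EN-US
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6, weight_decay=1e-6)
loss_fn = nn.CrossEntropyLoss()

def train_one_epoch(train_loader):
    # train_loader is assumed to yield (list_of_texts, label_tensor) batches of size 8.
    model.train()
    for texts, labels in train_loader:
        enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
        loss = loss_fn(model(enc["input_ids"], enc["attention_mask"]), labels)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Fine-tune for up to 20 epochs and keep the epoch with the best validation macro-F1.
```

At inference time, the same two-stage idea applies: route each sentence through the LID model first, then pass it to the classifier fine-tuned for that language.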
The XLM-RoBERTa we have used for language classification has a test accuracy of 99.6% meaning it correctly classifies all input sentences and hence, can be considered as a perfect classifier.\nFor the final pipeline we experimented using the two best performing models for each language in Track-1 and Track-2. For both the tracks we experimented with all 8 (2 3 ) possible combinations of models and calculated the validation F1 score for the combined validation dataset which had sentences belonging to all languages.\nThe validation scores for Track-1 and Track-2 are shown in Table and Table respectively. For both the tracks, the three pipelines with the best validation F1 scores were chosen for submission.\n\nUsing 3-way classifier as a 2-way classifier\n\nIn Track-1, participants are expected to train a classifier which classifies amongst 9 classes, and in Track-2, participants are expected to train a classifier which classifies amongst 6 classes. These 6 classes are a proper subset of the 9 classes from Track-1. Thus, an intuitive baseline for Track-2 is to use the model finetuned for Track-1, whilst considering only the relevant classes for the latter task.\nThe classes EN , ES and P T , i.e. the classes without any national dialect associated with them are not included in Track-2 as compared to Track-1. Thus, we calculate the predictions for the Track-2 validation dataset using the models for Track-1 and exclude the metrics for Track-1 specific classes to get the metrics for this \"adapted\" 2-way classification.\nWe show the results of this experiment in Table and observe that, as expected, the adapted 2-way classification performs worse compared to the explicitly finetuned variant.\n\nResults for Track-1 and Track-2\n\nWe now present our experiments and their performance for both tracks. Our experiments for Track-1 are described in Table and our experiments for Track-2 are described in Table . The participants were allowed three submissions for evaluation on the test set, so we submitted predictions using the three systems which performed the best on the validation set.\nAs mentioned in Section 5.2, we performed 2 3 , i.e. a total of 8 experiments using the two best models for each language. We observed that RoBERTa base on English, Spanish BERT base on Spanish and Portuguese BERT base performed the best on the testing set for Track-1. The same combination, with RoBERTa base for English, worked best for Track-2.\nAll of our submissions were the top submissions for each track, which surpassed the next best competitors by a margin of 4.5% and 5.6% for Track-1 and Track-2 respectively.\n\nAblation of best submissions\n\nWe hereby make some observations of our submissions and other experiments. To assist this, we plot the confusion matrices of our best submissions for Track-1 and Track-2 in Figures respectively. Note that these confusion matrices have their rows (i.e. true labels axes) normalized according to the number of samples in the class.\nHere are observations from our experiments: 1. BERT-based models outperform other models across all languages: We observe that BERT-based models outperform ELECTRA-based and GPT-2-based models, as shown in Table . We speculate this is because of the inherent architecture of BERT, which combines semantic learning with knowledge retention.\nThis combination of traits is particularly useful for this task. 2. 
Common labels perform the worst across all languages: We observe that the common labels EN , ES and P T perform the worst, both in the individual as well as the two-stage setup. We hypothesize this is because of the absence of dialect specific words, or words that are specific to the geographical origin of the national dialect (for example, \"Yankees\" for EN-US and \"Oxford\" for EN-GB).\n3. English models work better than models of other languages: It can be noted from Figures 4 and 3 that the English models have the best performance across all classes. This can be attributed to two reasons: absence of national dialect specific words and lesser pretraining data in the case of Portuguese.\n4. British English is most correctly classified class: We can observe that the Spanish or Portuguese models make equal number of mistakes in the case of either national dialect, in the case of Track-2 (see Figure ). However, in the case of English, the label EN − GB is correctly classified for more than 95% of the cases.\nWe speculate this is because British English involves slightly distinctive grammar and semantics, which help the model separate it from other classes. 5. The proposed 2-step method is scalable for multiple language dialect classification: We can strongly assert that the novel 2-step deep learning method for multilingual dialect classification is a scalable method for the task due to two specific reasons: firstly, the multilingual models (like XLM-RoBERTa) might not have the vocabulary as well as the learning capabilities to learn the minute differences between individual dialects.\nSecondly, this system can be quickly expanded for a new language by simply adding a language specific dialect classifier, provided the language identification model supports that particular language.\n\nConclusion\n\nIn this paper we propose a two-stage classification pipeline for dialect identification for multilingual corpora. We conduct thorough ablations on this setup and provide valuable insights. We foresee multiple future directions for this work. The first is to expand this work to many languages and dialects.\nSecondly, it is a worthwhile research direction to distill this multi-model setup into a single model with multiple prediction heads. The obvious limitation of this system is the excessive memory consumption due to the usage of language specific models. For low resource languages this system is difficult to train and scale.\nWe hope that these problems will be addressed by researchers in future works.", "answers": ["BERT, RoBERTa, ELECTRA, GPT-2, and XLM-RoBERTa."], "length": 2397, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "635ad5e3696e0d297f3ea8909d42975a5c1eb49a7f4a8466"} {"input": "Where is the club's headquarters located?", "context": "Football Club Urartu (, translated Futbolayin Akumb Urartu), commonly known as Urartu, is an Armenian professional football team based in the capital Yerevan that currently plays in the Armenian Premier League. The club won the Armenian Cup three times, in 1992, 2007 and 2016. In 2013–2014, they won the Armenian Premier League for the first time in their history.\n\nIn early 2016, the Russia-based Armenian businessman Dzhevan Cheloyants became a co-owner of the club after purchasing the major part of the club shares. 
The club was known as FC Banants until 1 August 2019, when it was officially renamed FC Urartu.\n\nHistory\n\nKotayk\nUrartu FC were founded as FC Banants by Sarkis Israelyan on 21 January 1992 in the village of Kotayk, representing the Kotayk Province. He named the club after his native village of Banants (currently known as Bayan). Between 1992 and 1995, the club was commonly referred to as Banants Kotayk. During the 1992 season, the club won the first Armenian Cup. At the end of the 1995 transitional season, Banants suffered a financial crisis. The club owners decided that it was better to merge the club with FC Kotayk of Abovyan, rather than disband it. In 2001, Banants demerged from FC Kotayk, and was moved from Abovyan to the capital Yerevan.\n\nYerevan\n\nFC Banants was relocated to Yerevan in 2001. At the beginning of 2003, Banants merged with FC Spartak Yerevan, but was able to limit the name of the new merger to FC Banants. Spartak became Banants's youth academy and later changed the name to Banants-2. Because of the merger, Banants acquired many players from Spartak Yerevan, including Samvel Melkonyan. After the merger, Banants took a more serious approach and have finished highly in the league table ever since. The club managed to lift the Armenian Cup in 2007.\nExperience is making way for youth for the 2008 and 2009 seasons. The departures of most of the experienced players have left the club's future to the youth. Along with two Ukrainian players, Ugandan international, Noah Kasule, has been signed.\n\nThe club headquarters are located on Jivani Street 2 of the Malatia-Sebastia District, Yerevan.\n\nDomestic\n\nEuropean\n\nStadium\n\nThe construction of the Banants Stadium was launched in 2006 in the Malatia-Sebastia District of Yerevan, with the assistance of the FIFA goal programme. It was officially opened in 2008 with a capacity of 3,600 seats. Further developments were implemented later in 2011, when the playing pitch was modernized and the capacity of the stadium was increased up to 4,860 seats (2,760 at the northern stand, 1,500 at the southern stand and 600 at the western stand).\n\nTraining centre/academy\nBanants Training Centre is the club's academy base located in the Malatia-Sebastia District of Yerevan. In addition to the main stadium, the centre houses 3 full-size training pitches, mini football pitches as well as an indoor facility. The current technical director of the academy is the former Russian footballer Ilshat Faizulin.\n\nFans\nThe most active group of fans is the South West Ultras fan club, mainly composed of residents from several neighbourhoods within the Malatia-Sebastia District of Yerevan, since the club is a de facto representer of the district. Members of the fan club benefit from events organized by the club and many facilities of the Banants training centre, such as the mini football pitch, the club store and other entertainments.\n\nAchievements\n Armenian Premier League\n Winner (1): 2013–14.\n Runner-up (5): 2003, 2006, 2007, 2010, 2018.\n\n Armenian Cup\n Winner (3): 1992, 2007, 2016.\n Runner-up (6): 2003, 2004, 2008, 2009, 2010, 2021–22\n\n Armenian Supercup\n Winner (1): 2014.\n Runner-up (5): 2004, 2007, 2009, 2010, 2016.\n\nCurrent squad\n\nOut on loan\n\nPersonnel\n\nTechnical staff\n\nManagement\n\nUrartu-2\n\nFC Banants' reserve squad play as FC Banants-2 in the Armenian First League. 
They play their home games at the training field with artificial turf of the Urartu Training Centre.\n\nManagerial history\n Varuzhan Sukiasyan (1992–94)\n Poghos Galstyan (July 1, 1996 – June 30, 1998)\n Oganes Zanazanyan (2001–05)\n Ashot Barseghyan (2005–06)\n Nikolay Kiselyov (2006–07)\n Jan Poštulka (2007)\n Nikolay Kostov (July 1, 2007 – April 8, 2008)\n Nedelcho Matushev (April 8, 2008 – June 30, 2008)\n Kim Splidsboel (2008)\n Armen Gyulbudaghyants (Jan 1, 2009 – Dec 1, 2009)\n Ashot Barseghyan (interim) (2009)\n Stevica Kuzmanovski (Jan 1, 2010 – Dec 31, 2010)\n Rafael Nazaryan (Jan 1, 2011 – Jan 15, 2012)\n Volodymyr Pyatenko (Jan 17, 2013 – June 30, 2013)\n Zsolt Hornyák (July 1, 2013 – May 30, 2015)\n Aram Voskanyan (July 1, 2015 – Oct 11, 2015)\n Tito Ramallo (Oct 12, 2015 – Oct 3, 2016)\n Artur Voskanyan (Oct 3, 2016 – Aug 11, 2018)\n Ilshat Faizulin (Aug 12, 2018 –Nov 24, 2019)\n Aleksandr Grigoryan (Nov 25, 2019 –Mar 10, 2021)\n Robert Arzumanyan (10 March 2021–24 June 2022)\n Dmitri Gunko (27 June 2022–)", "answers": ["The club's headquarters are located on Jivani Street 2 of the Malatia-Sebastia District, Yerevan."], "length": 812, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "d5422b0fdbed42c0f1d2213edb6c7637802452e4ca7287eb"} {"input": "Where was Margaret Way born and where did she die?", "context": "Margaret Way (b. Brisbane d. Cleveland, Queensland, Australia ) was an Australian writer of romance novels and women's fiction. A prolific author, Way wrote more than 120 novels since 1970, many through Mills & Boon, a romance imprint of British publisher Harlequin UK Ltd., owned by Harlequin Enterprises.\n\nBiography\nBefore her marriage, she was a well-known pianist, teacher, vocal coach and accompanist. She began writing when her son, Laurence Way, was born, a friend took a pile of Mills & Boon books to her, she read all and decided that she also could write these types of novels. She began to write and promote her country with her stories set in Australia. She sold her first novels in 1970. Margaret Way lives with her family in her native Brisbane. 
Beginning in 2013, Margaret began to self-publish, releasing her first \"e-book\" mid-July.\n\nMargaret died on the 10th of August 2022 in Cleveland, Queensland.\n\nBibliography\n\nSingle Novels\nKing Country (1970)\nBlaze of Silk (1970)\nThe Time of the Jacaranda (1970)\nBauhinia Junction (1971)\nMan from Bahl Bahla (1971)\nSummer Magic (1971)\nReturn to Belle Amber (1971)\nRing of Jade (1972)\nCopper Moon (1972)\nRainbow Bird (1972)\nMan Like Daintree (1972)\nNoonfire (1972)\nStorm Over Mandargi (1973)\nWind River (1973)\nLove Theme (1974)\nMcCabe's Kingdom (1974)\nSweet Sundown (1974)\nReeds of Honey (1975)\nStorm Flower (1975)\nLesson in Loving (1975)\nFlight into Yesterday (1976)\nRed Cliffs of Malpara (1976)\nMan on Half-moon (1976)\nSwan's Reach (1976)\nMutiny in Paradise (1977)\nOne Way Ticket (1977)\nPortrait of Jaime (1977)\nBlack Ingo (1977)\nAwakening Flame (1978)\nWild Swan (1978)\nRing of Fire (1978)\nWake the Sleeping Tiger (1978)\nValley of the Moon (1979)\nWhite Magnolia (1979)\nWinds of Heaven (1979)\nBlue Lotus (1979)\nButterfly and the Baron (1979)\nGolden Puma (1980)\nTemple of Fire (1980)\nLord of the High Valley (1980)\nFlamingo Park (1980)\nNorth of Capricorn (1981)\nSeason for Change (1981)\nShadow Dance (1981)\nMcIvor Affair (1981)\nHome to Morning Star (1981)\nBroken Rhapsody (1982)\nThe Silver Veil (1982)\nSpellbound (1982)\nHunter's Moon (1982)\nGirl at Cobalt Creek (1983)\nNo Alternative (1983)\nHouse of Memories (1983)\nAlmost a Stranger (1984)\nA place called Rambulara (1984)\nFallen Idol (1984)\nHunt the Sun (1985)\nEagle's Ridge (1985)\nThe Tiger's Cage (1986)\nInnocent in Eden (1986)\nDiamond Valley (1986)\nMorning Glory (1988)\nDevil Moon (1988)\nMowana Magic (1988)\nHungry Heart (1988)\nRise of an Eagle (1988)\nOne Fateful Summer (1993)\nThe Carradine Brand (1994)\nHolding on to Alex (1997)\nThe Australian Heiress (1997)\nClaiming His Child (1999)\nThe Cattleman's Bride (2000)\nThe Cattle Baron (2001)\nThe Husbands of the Outback (2001)\nSecrets of the Outback (2002)\nWith This Ring (2003)\nInnocent Mistress (2004)\nCattle Rancher, Convenient Wife (2007)\nOutback Marriages (2007)\nPromoted: Nanny to Wife (2007)\nCattle Rancher, Secret Son (2007)\nGenni's Dilemma (2008)\nBride At Briar Ridge (2009)\nOutback Heiress, Surprise Proposal (2009)\nCattle Baron, Nanny Needed (2009)\n\nLegends of the Outback Series\nMail Order Marriage (1999)\nThe Bridesmaid's Wedding (2000)\nThe English Bride (2000)\nA Wife at Kimbara (2000)\n\nKoomera Crossing Series\nSarah's Baby (2003)\nRunaway Wife (2003)\nOutback Bridegroom (2003)\nOutback Surrender (2003)\nHome to Eden (2004)\n\nMcIvor Sisters Series\nThe Outback Engagement (2005)\nMarriage at Murraree (2005)\n\nMen Of The Outback Series\nThe Cattleman (2006)\nThe Cattle Baron's Bride (2006)\nHer Outback Protector (2006)\nThe Horseman (2006)\n\nOutback Marriages Series\nOutback Man Seeks Wife (2007)\nCattle Rancher, Convenient Wife (2007)\n\nBarons of the Outback Series Multi-Author\nWedding At Wangaree Valley (2008)\nBride At Briar's Ridge (2008)\n\nFamily Ties Multi-Author\nOnce Burned (1995)\n\nHitched! Multi-Author\nA Faulkner Possession (1996)\n\nSimply the Best Multi-Author\nGeorgia and the Tycoon (1997)\n\nThe Big Event Multi-Author\nBeresford's Bride (1998)\n\nGuardian Angels Multi-Author\nGabriel's Mission (1998)\n\nAustralians Series Multi-Author\n7. Her Outback Man (1998)\n17. Master of Maramba (2001)\n19. Outback Fire (2001)\n22. Mistaken Mistress (2002)\n24. Outback Angel (2002)\n33. 
The Australian Tycoon's Proposal (2004)\n35. His Heiress Wife (2004)\n\nMarrying the Boss Series Multi-Author\nBoardroom Proposal (1999)\n\nContract Brides Series Multi-Author\nStrategy for Marriage (2002)\n\nEverlasting Love Series Multi-Author\nHidden Legacy (2008)\n\nDiamond Brides Series Multi-Author\nThe Australian's Society Bride (2008)\n\nCollections\nSummer Magic / Ring of Jade / Noonfire (1981)\nWife at Kimbara / Bridesmaid's Wedding (2005)\n\nOmnibus in Collaboration\nPretty Witch / Without Any Amazement / Storm Over Mandargi (1977) (with Lucy Gillen and Margaret Malcolm)\nDear Caliban / Heart of the Eagle / Swans' Reach (1978) (with Jane Donnelly and Elizabeth Graham)\nThe Bonds of Matrimony / Dragon Island / Reeds of Honey (1979) (with Elizabeth Hunter and Henrietta Reid)\nThe Man Outside / Castles in Spain / McCabe's Kingdom (1979) (with Jane Donnelly and Rebecca Stratton)\nWinds From The Sea / Island of Darkness / Wind River (1979) (with Margaret Pargeter and Rebecca Stratton)\nMoorland Magic / Tree of Idleness / Sweet Sundown (1980) (with Elizabeth Ashton and Elizabeth Hunter)\nThe Shifting Sands / Portrait of Jaime / Touched by Fire (1982) (with Jane Donnelly and Kay Thorpe)\nHead of Chancery / Wild Heart / One-Way Ticket (1986) (with Betty Beaty and Doris Smith)\nHeart of the Scorpion / The Winds of Heaven / Sweet Compulsion (1987) (with Janice Gray and Victoria Woolf)\nOne Brief Sweet Hour / Once More With Feeling / Blue Lotus (1990) (with Jane Arbor and Natalie Sparks)\nMarry Me Cowboy (1995) (with Janet Dailey, Susan Fox and Anne McAllister)\nHusbands on Horseback (1996) (with Diana Palmer)\nWedlocked (1999) (with Day Leclaire and Anne McAllister)\nMistletoe Magic (1999) (with Betty Neels and Rebecca Winters)\nThe Australians (2000) (with Helen Bianchin and Miranda Lee)\nWeddings Down Under (2001) (with Helen Bianchin and Jessica Hart)\nOutback Husbands (2002) (with Marion Lennox)\nThe Mother's Day Collection (2002) (with Helen Dickson and Kate Hoffmann)\nAustralian Nights (2003) (with Miranda Lee)\nOutback Weddings (2003) (with Barbara Hannay)\nAustralian Playboys (2003) (with Helen Bianchin and Marion Lennox)\nAustralian Tycoons (2004) (with Emma Darcy and Marion Lennox)\nA Mother's Day Gift (2004) (with Anne Ashley and Lucy Monroe)\nWhite Wedding (2004) (with Judy Christenberry and Jessica Steele)\nA Christmas Engagement (2004) (with Sara Craven and Jessica Matthews)\nA Very Special Mother's Day (2005) (with Anne Herries)\nAll I Want for Christmas... 
(2005) (with Betty Neels and Jessica Steele)\nThe Mills and Boon Collection (2006) (with Caroline Anderson and Penny Jordan)\nOutback Desire (2006) (with Emma Darcy and Carol Marinelli)\nTo Mum, with Love (2006) (with Rebecca Winters)\nAustralian Heroes (2007) (with Marion Lennox and Fiona McArthur)\nTall, Dark and Sexy (2008) (with Caroline Anderson and Helen Bianchin)\nThe Boss's Proposal (2008) (with Jessica Steele and Patricia Thayer)\nIsland Heat / Outback Man Seeks Wife / Prince's Forbidden Virgin / One Night Before Marriage / Their Lost-and-found Family / Single Dad's Marriage Wish (2008) (with Robyn Donald, Marion Lennox, Carol Marinelli, Sarah Mayberry and Anne Oliver)\nAustralian Billionaires (2009) (with Jennie Adams and Amy Andrews)\nCattle Baron : Nanny Needed / Bachelor Dad on Her Doorstep (2009) (with Michelle Douglas)\n\nExternal links\nMargaret Way at Harlequin Enterprises Ltd\n\nAustralian romantic fiction writers\nAustralian women novelists\nLiving people\nYear of birth missing (living people)\nWomen romantic fiction writers", "answers": ["Margaret Way was born in Brisbane and died in Cleveland, Queensland, Australia."], "length": 1203, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "934162f30867844eb8d74c9d62c1e2aba3fca790b5b1d53e"} {"input": "What was the club known as before being officially renamed FC Urartu?", "context": "Football Club Urartu (, translated Futbolayin Akumb Urartu), commonly known as Urartu, is an Armenian professional football team based in the capital Yerevan that currently plays in the Armenian Premier League. The club won the Armenian Cup three times, in 1992, 2007 and 2016. In 2013–2014, they won the Armenian Premier League for the first time in their history.\n\nIn early 2016, the Russia-based Armenian businessman Dzhevan Cheloyants became a co-owner of the club after purchasing the major part of the club shares. The club was known as FC Banants until 1 August 2019, when it was officially renamed FC Urartu.\n\nHistory\n\nKotayk\nUrartu FC were founded as FC Banants by Sarkis Israelyan on 21 January 1992 in the village of Kotayk, representing the Kotayk Province. He named the club after his native village of Banants (currently known as Bayan). Between 1992 and 1995, the club was commonly referred to as Banants Kotayk. During the 1992 season, the club won the first Armenian Cup. At the end of the 1995 transitional season, Banants suffered a financial crisis. The club owners decided that it was better to merge the club with FC Kotayk of Abovyan, rather than disband it. In 2001, Banants demerged from FC Kotayk, and was moved from Abovyan to the capital Yerevan.\n\nYerevan\n\nFC Banants was relocated to Yerevan in 2001. At the beginning of 2003, Banants merged with FC Spartak Yerevan, but was able to limit the name of the new merger to FC Banants. Spartak became Banants's youth academy and later changed the name to Banants-2. Because of the merger, Banants acquired many players from Spartak Yerevan, including Samvel Melkonyan. After the merger, Banants took a more serious approach and have finished highly in the league table ever since. The club managed to lift the Armenian Cup in 2007.\nExperience is making way for youth for the 2008 and 2009 seasons. The departures of most of the experienced players have left the club's future to the youth. 
Along with two Ukrainian players, Ugandan international, Noah Kasule, has been signed.\n\nThe club headquarters are located on Jivani Street 2 of the Malatia-Sebastia District, Yerevan.\n\nDomestic\n\nEuropean\n\nStadium\n\nThe construction of the Banants Stadium was launched in 2006 in the Malatia-Sebastia District of Yerevan, with the assistance of the FIFA goal programme. It was officially opened in 2008 with a capacity of 3,600 seats. Further developments were implemented later in 2011, when the playing pitch was modernized and the capacity of the stadium was increased up to 4,860 seats (2,760 at the northern stand, 1,500 at the southern stand and 600 at the western stand).\n\nTraining centre/academy\nBanants Training Centre is the club's academy base located in the Malatia-Sebastia District of Yerevan. In addition to the main stadium, the centre houses 3 full-size training pitches, mini football pitches as well as an indoor facility. The current technical director of the academy is the former Russian footballer Ilshat Faizulin.\n\nFans\nThe most active group of fans is the South West Ultras fan club, mainly composed of residents from several neighbourhoods within the Malatia-Sebastia District of Yerevan, since the club is a de facto representer of the district. Members of the fan club benefit from events organized by the club and many facilities of the Banants training centre, such as the mini football pitch, the club store and other entertainments.\n\nAchievements\n Armenian Premier League\n Winner (1): 2013–14.\n Runner-up (5): 2003, 2006, 2007, 2010, 2018.\n\n Armenian Cup\n Winner (3): 1992, 2007, 2016.\n Runner-up (6): 2003, 2004, 2008, 2009, 2010, 2021–22\n\n Armenian Supercup\n Winner (1): 2014.\n Runner-up (5): 2004, 2007, 2009, 2010, 2016.\n\nCurrent squad\n\nOut on loan\n\nPersonnel\n\nTechnical staff\n\nManagement\n\nUrartu-2\n\nFC Banants' reserve squad play as FC Banants-2 in the Armenian First League. 
They play their home games at the training field with artificial turf of the Urartu Training Centre.\n\nManagerial history\n Varuzhan Sukiasyan (1992–94)\n Poghos Galstyan (July 1, 1996 – June 30, 1998)\n Oganes Zanazanyan (2001–05)\n Ashot Barseghyan (2005–06)\n Nikolay Kiselyov (2006–07)\n Jan Poštulka (2007)\n Nikolay Kostov (July 1, 2007 – April 8, 2008)\n Nedelcho Matushev (April 8, 2008 – June 30, 2008)\n Kim Splidsboel (2008)\n Armen Gyulbudaghyants (Jan 1, 2009 – Dec 1, 2009)\n Ashot Barseghyan (interim) (2009)\n Stevica Kuzmanovski (Jan 1, 2010 – Dec 31, 2010)\n Rafael Nazaryan (Jan 1, 2011 – Jan 15, 2012)\n Volodymyr Pyatenko (Jan 17, 2013 – June 30, 2013)\n Zsolt Hornyák (July 1, 2013 – May 30, 2015)\n Aram Voskanyan (July 1, 2015 – Oct 11, 2015)\n Tito Ramallo (Oct 12, 2015 – Oct 3, 2016)\n Artur Voskanyan (Oct 3, 2016 – Aug 11, 2018)\n Ilshat Faizulin (Aug 12, 2018 –Nov 24, 2019)\n Aleksandr Grigoryan (Nov 25, 2019 –Mar 10, 2021)\n Robert Arzumanyan (10 March 2021–24 June 2022)\n Dmitri Gunko (27 June 2022–)\n\nReferences\n\nExternal links\n Official website \n Banants at Weltfussball.de \n\n \nUrartu\nUrartu\nUrartu\nUrartu", "answers": ["FC Banants."], "length": 818, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "c4f2dfb06f56a185067d2f147c4d846aa0b895f1968eda12"} {"input": "When did the club win the Armenian Premier League for the first time?", "context": "Football Club Urartu (, translated Futbolayin Akumb Urartu), commonly known as Urartu, is an Armenian professional football team based in the capital Yerevan that currently plays in the Armenian Premier League. The club won the Armenian Cup three times, in 1992, 2007 and 2016. In 2013–2014, they won the Armenian Premier League for the first time in their history.\n\nIn early 2016, the Russia-based Armenian businessman Dzhevan Cheloyants became a co-owner of the club after purchasing the major part of the club shares. The club was known as FC Banants until 1 August 2019, when it was officially renamed FC Urartu.\n\nHistory\n\nKotayk\nUrartu FC were founded as FC Banants by Sarkis Israelyan on 21 January 1992 in the village of Kotayk, representing the Kotayk Province. He named the club after his native village of Banants (currently known as Bayan). Between 1992 and 1995, the club was commonly referred to as Banants Kotayk. During the 1992 season, the club won the first Armenian Cup. At the end of the 1995 transitional season, Banants suffered a financial crisis. The club owners decided that it was better to merge the club with FC Kotayk of Abovyan, rather than disband it. In 2001, Banants demerged from FC Kotayk, and was moved from Abovyan to the capital Yerevan.\n\nYerevan\n\nFC Banants was relocated to Yerevan in 2001. At the beginning of 2003, Banants merged with FC Spartak Yerevan, but was able to limit the name of the new merger to FC Banants. Spartak became Banants's youth academy and later changed the name to Banants-2. Because of the merger, Banants acquired many players from Spartak Yerevan, including Samvel Melkonyan. After the merger, Banants took a more serious approach and have finished highly in the league table ever since. The club managed to lift the Armenian Cup in 2007.\nExperience is making way for youth for the 2008 and 2009 seasons. The departures of most of the experienced players have left the club's future to the youth. 
Along with two Ukrainian players, Ugandan international, Noah Kasule, has been signed.\n\nThe club headquarters are located on Jivani Street 2 of the Malatia-Sebastia District, Yerevan.\n\nDomestic\n\nEuropean\n\nStadium\n\nThe construction of the Banants Stadium was launched in 2006 in the Malatia-Sebastia District of Yerevan, with the assistance of the FIFA goal programme. It was officially opened in 2008 with a capacity of 3,600 seats. Further developments were implemented later in 2011, when the playing pitch was modernized and the capacity of the stadium was increased up to 4,860 seats (2,760 at the northern stand, 1,500 at the southern stand and 600 at the western stand).\n\nTraining centre/academy\nBanants Training Centre is the club's academy base located in the Malatia-Sebastia District of Yerevan. In addition to the main stadium, the centre houses 3 full-size training pitches, mini football pitches as well as an indoor facility. The current technical director of the academy is the former Russian footballer Ilshat Faizulin.\n\nFans\nThe most active group of fans is the South West Ultras fan club, mainly composed of residents from several neighbourhoods within the Malatia-Sebastia District of Yerevan, since the club is a de facto representer of the district. Members of the fan club benefit from events organized by the club and many facilities of the Banants training centre, such as the mini football pitch, the club store and other entertainments.\n\nAchievements\n Armenian Premier League\n Winner (1): 2013–14.\n Runner-up (5): 2003, 2006, 2007, 2010, 2018.\n\n Armenian Cup\n Winner (3): 1992, 2007, 2016.\n Runner-up (6): 2003, 2004, 2008, 2009, 2010, 2021–22\n\n Armenian Supercup\n Winner (1): 2014.\n Runner-up (5): 2004, 2007, 2009, 2010, 2016.\n\nCurrent squad\n\nOut on loan\n\nPersonnel\n\nTechnical staff\n\nManagement\n\nUrartu-2\n\nFC Banants' reserve squad play as FC Banants-2 in the Armenian First League. They play their home games at the training field with artificial turf of the Urartu Training Centre.\n\nManagerial history\n Varuzhan Sukiasyan (1992–94)\n Poghos Galstyan (July 1, 1996 – June 30, 1998)\n Oganes Zanazanyan (2001–05)\n Ashot Barseghyan (2005–06)\n Nikolay Kiselyov (2006–07)\n Jan Poštulka (2007)\n Nikolay Kostov (July 1, 2007 – April 8, 2008)\n Nedelcho Matushev (April 8, 2008 – June 30, 2008)\n Kim Splidsboel (2008)\n Armen Gyulbudaghyants (Jan 1, 2009 – Dec 1, 2009)\n Ashot Barseghyan (interim) (2009)\n Stevica Kuzmanovski (Jan 1, 2010 – Dec 31, 2010)\n Rafael Nazaryan (Jan 1, 2011 – Jan 15, 2012)\n Volodymyr Pyatenko (Jan 17, 2013 – June 30, 2013)\n Zsolt Hornyák (July 1, 2013 – May 30, 2015)\n Aram Voskanyan (July 1, 2015 – Oct 11, 2015)\n Tito Ramallo (Oct 12, 2015 – Oct 3, 2016)\n Artur Voskanyan (Oct 3, 2016 – Aug 11, 2018)\n Ilshat Faizulin (Aug 12, 2018 –Nov 24, 2019)\n Aleksandr Grigoryan (Nov 25, 2019 –Mar 10, 2021)\n Robert Arzumanyan (10 March 2021–24 June 2022)\n Dmitri Gunko (27 June 2022–)\n\nReferences\n\nExternal links\n Official website \n Banants at Weltfussball.de \n\n \nUrartu\nUrartu\nUrartu\nUrartu", "answers": ["In the 2013-2014 season."], "length": 821, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "8e79ff617e6cfd373020194ce4cb84531a6aec6ab7e6cfc2"} {"input": "What was the population of McPherson County according to the 2020 census?", "context": "McPherson County (standard abbreviation: MP) is a county located in the U.S. state of Kansas. As of the 2020 census, the county population was 30,223. 
The largest city and county seat is McPherson. The county is named for Civil War General James B. McPherson.\n\nHistory\n\nEarly history\n\nFor many millennia, the Great Plains of North America was inhabited by nomadic Native Americans. From the 16th century to 18th century, the Kingdom of France claimed ownership of large parts of North America. In 1762, after the French and Indian War, France secretly ceded New France to Spain, per the Treaty of Fontainebleau. In 1802, Spain returned most of the land to France, but keeping title to about 7,500 square miles.\n\nIn 1803, most of the land for modern day Kansas was acquired by the United States from France as part of the 828,000 square mile Louisiana Purchase for 2.83 cents per acre. In 1848, after the Mexican–American War, the Treaty of Guadalupe Hidalgo with Spain brought into the United States all or part of land for ten future states, including southwest Kansas. In 1854, the Kansas Territory was organized, then in 1861 Kansas became the 34th U.S. state.\n\n19th century\n\nFrom the 1820s to 1870s, the Santa Fe Trail passed through, what is now McPherson County. The trail entered the county, east of Canton, then south of Galva, then north of Inman, and west towards Lyons. In 1855, Charles O. Fuller established a ranch adjacent to the Running Turkey Creek Crossing about two miles south and one mile east of Galva. Fuller's Ranch provided accommodations for travelers on the Santa Fe Trail and was probably the first white settlement in McPherson County.\n\nPeketon County was established in 1860, by the passage of a bill by S. N. Wood: An act to establish Peketon County. Section 1. - That all that territory west of the sixth principal meridian and south of Township 16, in Kansas Territory, be and the same is hereby erected into a county, to be known by the name of Peketon County. On February 17, 1865, Peketon County was abolished, and McPherson County was made a part of Marion County, which extended from the west line of Chase County to the present western boundary of Kansas.\n\nIn 1868, Solomon Stephens and L. N. Holmberg were appointed Justices of the Peace—the first officers in what is now McPherson County. The next year (1869) occurred the first election for the township, now the county of McPherson. McPherson was regularly organized as a county in the spring of 1870, a mass meeting being held at Sweadal. Sweadal, the county seat thus selected, was located about one mile and a half southwest of the present site of Lindsborg. In September, however, the County Commissioners resolved to meet at the latter place, McPherson which had already been located some two years.\n\nIn April, 1873, a petition was filed for the county seat re-location. It was signed by 483 voters, and a special election was accordingly ordered for June 10. Upon that day, McPherson received 605 votes, New Gottland 325, King City 3 and Lindsborg 1; McPherson's majority over all, 276. In May the McPherson Town Company had offered, as an inducement for the location of the county seat at this point, the free use of rooms for ten years, and the donation of two squares of land on the town site. The offer was accepted the next month, the County Commissioners selecting blocks 56 and 65. Thus the county seat was established at McPherson and has remained since.\n\nAs early as 1875, city leaders of Marion held a meeting to consider a branch railroad from Florence. 
In 1878, Atchison, Topeka and Santa Fe Railway and parties from Marion County and McPherson County chartered the Marion and McPherson Railway Company. In 1879, a branch line was built from Florence to McPherson, in 1880 it was extended to Lyons, in 1881 it was extended to Ellinwood. The line was leased and operated by the Atchison, Topeka and Santa Fe Railway. The line from Florence to Marion, was abandoned in 1968. In 1992, the line from Marion to McPherson was sold to Central Kansas Railway. In 1993, after heavy flood damage, the line from Marion to McPherson was abandoned. The original branch line connected Florence, Marion, Canada, Hillsboro, Lehigh, Canton, Galva, McPherson, Conway, Windom, Little River, Mitchell, Lyons, Chase, then connected with the original AT&SF main line at Ellinwood.\n\nIn 1887, the Chicago, Kansas and Nebraska Railway extended its main line from Herington to Pratt. This main line connected Herington, Ramona, Tampa, Durham, Waldeck, Canton, Galva, McPherson, Groveland, Inman, Medora, Hutchinson, Whiteside, Partridge, Arlington, Langdon, Turon, Preston, Natrona, Pratt. In 1888, this main line was extended to Liberal. Later, this line was extended to Tucumcari, New Mexico and Santa Rosa, New Mexico, where it made a connection with the Southern Pacific from El Paso, Texas. The Chicago, Kansas and Nebraska Railway was absorbed by the Chicago, Rock Island and Pacific Railway. This line is also called the \"Golden State Route\".\n\n20th century\nThe National Old Trails Road, also known as the Ocean-to-Ocean Highway, was established in 1912, and was routed through Windom, Conway, McPherson.\n\nGeography\n\nAccording to the U.S. Census Bureau, the county has a total area of , of which is land and (0.3%) is water.\n\nAdjacent counties\n Saline County (north)\n Dickinson County (northeast)\n Marion County (east)\n Harvey County (southeast)\n Reno County (southwest)\n Rice County (west)\n Ellsworth County (northwest)\n\nMajor highways\n Interstate 135\n U.S. Route 56\n U.S. Route 81\n K-4\n K-61\n K-153\n\nDemographics\n\nThe McPherson Micropolitan Statistical Area includes all of McPherson County.\n\n2000 census\nAs of the census of 2000, there were 29,554 people, 11,205 households, and 7,966 families residing in the county. The population density was 33 people per square mile (13/km2). There were 11,830 housing units at an average density of 13 per square mile (5/km2). The racial makeup of the county was 96.53% White, 0.81% Black or African American, 0.34% Native American, 0.32% Asian, 0.06% Pacific Islander, 0.79% from other races, and 1.16% from two or more races. 1.94% of the population were Hispanic or Latino of any race. 37.1% were of German, 12.9% Swedish, 12.1% American, 6.7% English and 6.3% Irish ancestry according to Census 2000.\n\nThere were 11,205 households, out of which 33.00% had children under the age of 18 living with them, 62.50% were married couples living together, 6.00% had a female householder with no husband present, and 28.90% were non-families. 25.50% of all households were made up of individuals, and 11.80% had someone living alone who was 65 years of age or older. The average household size was 2.49 and the average family size was 2.99.\n\nIn the county, the population was spread out, with 25.40% under the age of 18, 10.30% from 18 to 24, 25.20% from 25 to 44, 21.80% from 45 to 64, and 17.30% who were 65 years of age or older. The median age was 38 years. For every 100 females there were 95.90 males. 
For every 100 females age 18 and over, there were 92.90 males.\n\nThe median income for a household in the county was $41,138, and the median income for a family was $48,243. Males had a median income of $33,530 versus $21,175 for females. The per capita income for the county was $18,921. About 4.20% of families and 6.60% of the population were below the poverty line, including 5.20% of those under age 18 and 8.10% of those age 65 or over.\n\nGovernment\n\nPresidential elections\nMcPherson county is often carried by Republican candidates. The last time a Democratic candidate has carried this county was in 1964 by Lyndon B. Johnson.\n\nLaws\nFollowing amendment to the Kansas Constitution in 1986, the county remained a prohibition, or \"dry\", county until 1996, when voters approved the sale of alcoholic liquor by the individual drink with a 30 percent food sales requirement.\n\nEducation\n\nColleges\n McPherson College in McPherson\n Bethany College in Lindsborg\n Central Christian College in McPherson\n\nUnified school districts\n Smoky Valley USD 400\n McPherson USD 418\n Canton-Galva USD 419\n Moundridge USD 423\n Inman USD 448\n\nSchool district office in neighboring county\n Goessel USD 411\n Little River-Windom USD 444\n\nMuseums\n Birger Sandzén Memorial Gallery in Lindsborg\n McCormick-Deering Days Museum in Inman\n McPherson Museum in McPherson\n Lindsborg Old Mill & Swedish Heritage Museum in Lindsborg\n Kansas Motorcycle Museum in Marquette\n\nCommunities\n\nCities\n\n Canton\n Galva\n Inman\n Lindsborg\n Marquette\n McPherson (county seat) \n Moundridge\n Windom\n\nUnincorporated communities\n† means a Census-Designated Place (CDP) by the United States Census Bureau.\n Conway\n Elyria†\n Groveland\n Johnstown\n New Gottland\n Roxbury†\n\nGhost towns\n Alta Mills\n Battle Hill\n Christian\n Doles Park\n Elivon\n King City\n Sweadal\n\nTownships\nMcPherson County is divided into twenty-five townships. The cities of Lindsborg and McPherson are considered governmentally independent and are excluded from the census figures for the townships. In the following table, the population center is the largest city (or cities) included in that township's population total, if it is of a significant size.\n\nSee also\n List of people from McPherson County, Kansas\n National Register of Historic Places listings in McPherson County, Kansas\n McPherson Valley Wetlands\n Maxwell Wildlife Refuge\n\nReferences\n\nNotes\n\nFurther reading\n\n Wheeler, Wayne Leland. \"An Analysis of Social Change in a Swedish-Immigrant Community: The Case of Lindsborg, Kansas.\" (PhD dissertation, University of Missouri-Columbia; ProQuest Dissertations Publishing, 1959. 5905657).\n\nCounty\n Through the Years: A Pictorial History of McPherson County; McPherson Sentinel' Heritage House Publishing Co; 1992.\n McPherson County First Courthouse Built About 1869 or 1870; Lindsborg News-Record; March 30, 1959.\n Pioneer Life and Lore of McPherson County, Kansas; Edna Nyquist; Democratic-Opinion Press; 1932.\n A History of the Church of the Brethren in Kansas (includes McPherson College history); Elmer LeRoy Craik; McPherson Daily; Republican Press; 397 pages; 1922.\n Portrait and Biographical Record of Dickinson, Saline, McPherson, and Marion Counties, Kansas; Chapman Bros; 614 pages; 1893.\n Standard Atlas of McPherson County, Kansas; Geo. A. Ogle & Co; 82 pages; 1921.\n Plat Book of McPherson County, Kansas; North West Publishing Co; 50 pages; 1903.\n Edwards' Atlas of McPherson County, Kansas; John P. 
Edwards; 51 pages; 1884.\n\nTrails\n The Story of the Marking of the Santa Fe Trail by the Daughters of the American Revolution in Kansas and the State of Kansas; Almira Cordry; Crane Co; 164 pages; 1915. (Download 4MB PDF eBook)\n The National Old Trails Road To Southern California, Part 1 (LA to KC); Automobile Club Of Southern California; 64 pages; 1916. (Download 6.8MB PDF eBook)\n\nMennonite Settlements\n Impact of Mennonite settlement on the cultural landscape of Kansas; Brenda Martin; Kansas State University; 1985/1988. \n Mennonite settlement : the relationship between the physical and cultural environment; Susan Movle; University of Utah; 1975/1886.\n Status of Mennonite women in Kansas in their church and home relationships; Eva Harshbarger; Bluffton College; 1925/1945.\n\nExternal links\n\nCounty\n \n McPherson County - Directory of Public Officials\nHistorical\n , from Hatteberg's People'' on KAKE TV news\nMaps\n McPherson County Maps: Current, Historic, KDOT\n Kansas Highway Maps: Current, Historic, KDOT\n Kansas Railroad Maps: Current, 1996, 1915, KDOT and Kansas Historical Society\n\n \nKansas counties\n1867 establishments in Kansas\nPopulated places established in 1867", "answers": ["30,223."], "length": 1856, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "420ce59c7a0c084e938dd69dfacf59f61ac3ebbd237780c8"} {"input": "How many brother does Njoroge have?", "context": "Weep Not, Child is a 1964 novel by Kenyan author Ngũgĩ wa Thiong'o. It was his first novel, published in 1964 under the name James Ngugi. It was among the African Writers Series. It was the first English language|English novel to be published by an East African. Thiong'o's works deal with the relationship between Africans and white settlers in colonial Kenya, and are heavily critical of colonial rule. Specifically, Weep Not, Child deals with the Mau Mau Uprising, and \"the bewildering dispossession of an entire people from their ancestral land.\" Ngũgĩ wrote the novel while he was a student at Makerere University.\n\nThe book is divided into two parts and eighteen chapters. Part one deals mostly with the education of Njoroge, while part two deals with the rising Mau Mau movement.\n\nPlot summary\n\nNjoroge, a little boy, is urged to attend school by his mother. He is the first one of his family able to go to school. His family lives on the land of Jacobo, an African made rich by his dealings with white settlers, namely Mr. Howlands, the most powerful land owner in the area. Njoroge's brother Kamau works as an apprentice to a carpenter, while Boro, the eldest living son, is troubled by his experiences while in forced service during World War II, including witnessing the death of his elder brother. Ngotho, Njoroge's father and a respected man in the surrounding area, tends Mr. Howlands' crops, but is motivated by his passion to preserve his ancestral land, rather than for any compensation or loyalty.\n\nOne day, black workers call for a strike to obtain higher wages. Ngotho is ambivalent about participating in the strike because he fears he will lose his job. However, he decides to go to the gathering, even though his two wives do not agree. At the demonstration, there are calls for higher wages. Suddenly, the white police inspector brings Jacobo to the gathering to pacify the native people. Jacobo tries to put an end to the strike. Ngotho attacks Jacobo, and the result is a riot where two people are killed. Jacobo survives and swears revenge. 
Ngotho loses his job and Njoroge’s family is forced to move. Njoroge’s brothers fund his education and seem to lose respect for their father.\n\nMwihaki, Jacobo's daughter and Njoroge's best friend, enters a girls' only boarding school, leaving Njoroge relatively alone. He reflects upon her leaving, and realizes that he was embarrassed by his father's actions towards Jacobo. For this reason, Njoroge is not upset by her exit and their separation. Njoroge switches to another school.\n\nFor a time, everyone's attention is focused on the upcoming trial of Jomo Kenyatta – a revered leader of the movement. Many blacks think that he is going to bring forth Kenya’s independence. But Jomo loses the trial and is imprisoned. This results in further protests and greater suppression of the black population.\n\nJacobo and a white landowner, Mr. Howlands, fight against the rising activities of the Mau Mau, an organization striving for Kenyan economic, political, and cultural independence. Jacobo accuses Ngotho of being the leader of the Mau Mau and tries to imprison the whole family. Meanwhile, the situation in the country is deteriorating. Six black men are taken out of their houses and executed in the woods.\n\nOne day Njoroge meets Mwihaki again, who has returned from boarding school. Although Njoroge had planned to avoid her due to the conflict between their fathers, their friendship is unaffected. Njoroge passes an important exam that allows him to advance to High School. His village is proud of him, and collects money to pay Njoroge's High School tuition.\n\nSeveral months later, Jacobo is murdered in his office by a member of the Mau Mau. Mr. Howlands has Njoroge removed from school for questioning. Both father and son are brutally beaten before release and Ngotho is left barely alive. Although there doesn't seem to be a connection between Njoroge's family and the murder, it is eventually revealed that Njoroge's brothers are behind the assassination, and that Boro is the real leader of the Mau Mau. Ngotho soon dies from his injuries and Njoroge finds out that his father was protecting his brothers. Kamau has been imprisoned for life. Only Njoroge and his two mothers remain free, and Njoroge is left as the sole provider of his two mothers. Njoroge fears that he cannot make ends meet; he gives up hope of continuing in school and loses faith in God.\n\nNjoroge asks Mwihaki's for support, but she is angry because of her father’s death. When he finally pledges his love to her, she refuses to leave with him, realizing her obligation to Kenya and her mother. Njoroge decides to leave town and makes an attempt at suicide; however, he fails when his mothers find him before he is able to hang himself. The novel closes with Njoroge feeling hopeless, and ashamed of cowardice.\n\nCharacters in Weep Not, Child\n Njoroge: the main character of the book whose main goal throughout the book is to become as educated as possible.\n Ngotho: Njoroge's father. He works for Mr.Howlands and is respected by him until he attacks Jacobo at a workers strike. He is fired and the family is forced to move to another section of the country. Over the course of the book his position as the central power of the family weakened, to the point where his self-realization that he has spent his whole life waiting for the prophecy (that proclaims the blacks will be returned their land) to come true rather than fighting for Kenyan independence, leads to his depression.\n Nyokabi and Njeri: the two wives of Ngotho. 
Njeri is Ngotho's first wife, and mother of Boro, Kamau, and Kori. Nyokabi is his second wife, and the mother of Njoroge and Mwangi.\n Njoroge has four brothers: Boro, Kamau, Kori and Mwangi (who is Njoroge's only full brother, who died in World War II).\n Boro: Son of Njeri who fights for the Allies in World War II. Upon returning his anger against the colonial government is compounded by their confiscation of the his land. Boro's anger and position as eldest son leads him to question and ridicule Ngotho, which eventually defeats their father's will (upon realizing his life was wasted waiting and not acting). It is eventually revealed that Boro is the leader of the Mau Mau (earlier alluded to as \"entering politics\") and murders Mr.Howlands. He is caught by police immediately after and is scheduled to be executed by the book's end. It is highly likely that it is also Boro who kills Jacobo.\n Mwihaki: Njoroge's best friend (and later develops into his love interest). Daughter of Jacobo. When it is revealed that his family killed Jacobo (most likely Boro), Mwihaki distances herself from Njoroge, asking for time to mourn her father and care for her mother.\n Jacobo: Mwihaki's father and an important landowner. Chief of the village.\n Mr. Howlands: A white settler who emigrated to colonial Kenya and now owns a farm made up of land that originally belonged to Ngotho's ancestors. Has three children: Peter who died in World War II before the book's beginning, a daughter who becomes a missionary, and Stephen who met Njoroge while the two were in high school.\n\nThemes and motifs\nWeep Not, Child integrates Gikuyu mythology and the ideology of nationalism that serves as catalyst for much of the novel's action. The novel explores the negative aspects of colonial rule over Kenya. Njoroge's aspiration to attend university is frustrated by both the violence of the Mau Mau rebels and the violent response of the colonial government. This disappointment leads to his alienation from his family and ultimately his suicide attempt.\n\nThe novel also ponders the role of saviours and salvation. The author notes in his The River Between: \"Salvation shall come from the hills. From the blood that flows in me, I say from the same tree, a son shall rise. And his duty shall be to lead and save the people.\" Jomo Kenyatta, the first prime minister of Kenya, is immortalised in Weep Not, Child. The author says, \"Jomo had been his (Ngotho's) hope. Ngotho had come to think that it was Jomo who would drive away the white man. To him, Jomo stood for custom and traditions purified by grace of learning and much travel.\" Njoroge comes to view Jomo as a messiah who will win the struggle against the colonial government.", "answers": ["Four."], "length": 1414, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "69e69439e349a539ca4cff96ae35aa8499ca61886801d488"} {"input": "When did Goodwin become a Naval aviator?", "context": "Hugh Hilton Goodwin (December 21, 1900 – February 25, 1980) was a decorated officer in the United States Navy with the rank of Vice Admiral. A veteran of both World Wars, he commanded escort carrier during the Mariana Islands campaign. 
Goodwin then served consecutively as Chief of Staff, Carrier Strike Group 6 and as Air Officer, Philippine Sea Frontier and participated in the Philippines campaign in the later part of the War.\n\nFollowing the War, he remained in the Navy and rose to the flag rank and held several important commands including Vice Commander, Military Air Transport Service, Commander, Carrier Division Two and Commander, Naval Air Forces, Continental Air Defense Command.\n\nEarly life and career\n\nHugh H. Goodwin was born on December 21, 1900, in Monroe, Louisiana and attended Monroe High School there (now Neville High School). Following the United States' entry into World War I in April 1917, Goodwin left the school without receiving the diploma in order to see some combat and enlisted the United States Navy on May 7, 1917. He completed basic training and was assigned to the battleship . Goodwin participated in the training of armed guard crews and engine room personnel as the Atlantic Fleet prepared to go to war and in November 1917, he sailed with the rest of Battleship Division 9, bound for Britain to reinforce the Grand Fleet in the North Sea.\n\nAlthough he did not complete the last year of high school, Goodwin was able to earn an appointment to the United States Naval Academy at Annapolis, Maryland in June 1918. While at the academy, he earned a nickname \"Huge\" and among his classmates were several future admirals and generals including: Hyman G. Rickover, Milton E. Miles, Robert E. Blick Jr., Herbert S. Duckworth, Clayton C. Jerome, James P. Riseley, James A. Stuart, Frank Peak Akers, Sherman Clark, Raymond P. Coffman, Delbert S. Cornwell, Frederick J. Eckhoff, Ralph B. DeWitt, John Higgins, Vernon Huber, Albert K. Morehouse, Harold F. Pullen, Michael J. Malanaphy, William S. Parsons, Harold R. Stevens, John P. Whitney, Lyman G. Miller and George J. O'Shea.\n\nGoodwin graduated with Bachelor of Science degree on June 3, 1922, and was commissioned Ensign in the United States Navy. He was subsequently assigned to the battleship and took part in the voyage to Rio de Janeiro, Brazil, before he was ordered to the Naval Torpedo Station at Newport, Rhode Island for submarine instruction in June 1923. Goodwin completed the training several weeks later and was attached to the submarine . He then continued his further training aboard submarine and following his promotion to Lieutenant (junior grade) on June 3, 1925, he qualified as submariner.\n\nHe then served aboard submarine off the coast of California, before he was ordered for the recruiting duty to San Francisco in September 1927. While in this capacity, Goodwin applied for naval aviation training which was ultimately approved and he was ordered to the Naval Air Station Pensacola, Florida in August 1928. Toward the end of the training, he was promoted to lieutenant on December 11, 1928, and upon the completion of the training in January 1929, he was designated Naval aviator.\n\nGoodwin was subsequently attached to the Observation Squadron aboard the aircraft carrier and participated in the Fleet exercises in the Caribbean. He was transferred to the Bureau of Aeronautics in Washington, D.C. in August 1931 and served consecutively under the architect of naval aviation William A. Moffett and future Chief of Naval Operations Ernest J. King.\n\nIn June 1933, Goodwin was ordered to the Naval War College at Newport, Rhode Island, where he completed junior course in May of the following year. 
He subsequently joined the crew of aircraft carrier and served under Captain Arthur B. Cook and took part in the Fleet exercises in the Caribbean and off the East Coast of the United States.\n\nHe was ordered back to the Naval Air Station Pensacola, Florida in June 1936 and was attached to the staff of the Base Commandant, then-Captain Charles A. Blakely. When Blakely was succeeded by William F. Halsey in June 1937, Goodwin remained in Halsey's staff and was promoted to Lieutenant Commander on December 1, 1937. He also completed correspondence course in International law at the Naval War College.\n\nGoodwin was appointed Commanding officer of the Observation Squadron 1 in June 1938 and attached to the battleship he took part in the patrolling of the Pacific and \nWest Coast of the United States until September 1938, when he assumed command of the Observation Squadron 2 attached to the battleship .\n\nWhen his old superior from Lexington, now Rear Admiral Arthur B. Cook, was appointed Commander Aircraft, Scouting Force in June 1939, he requested Goodwin as his Aide and Flag Secretary. He became Admiral Cook's protégé and after year and half of service in the Pacific, he continued as his Aide and Flag Secretary, when Cook was appointed Commander Aircraft, Atlantic Fleet in November 1940.\n\nWorld War II\n\nFollowing the United States' entry into World War II, Goodwin was promoted to the temporary rank of Commander on January 1, 1942, and assumed duty as advisor to the Argentine Navy. His promotion was made permanent two months later and he returned to the United States in early 1943 for duty as assistant director of Planning in the Bureau of Aeronautics under Rear admiral John S. McCain. While still in Argentina, Goodwin was promoted to the temporary rank of Captain on June 21, 1942.\n\nBy the end of December 1943, Goodwin was ordered to Astoria, Oregon, where he assumed command of newly commissioned escort carrier USS Gambier Bay. He was responsible for the initial training of the crew and was known as a strict disciplinarian, but the crew appreciated the skills he taught them that prepared them for combat. Goodwin insisted that everyone aboard has to do every job right every time and made us fight our ship at her best.\n\nDuring the first half of 1944, Gambier Bay was tasked with ferrying aircraft for repairs and qualified carrier pilots from San Diego to Pearl Harbor, Hawaii, before departed on May 1, 1944, to join Rear admiral Harold B. Sallada's Carrier Support Group 2, staging in the Marshalls for the invasion of the Marianas.\n\nThe air unit, VC-10 Squadron, under Goodwin's command gave close air support to the initial landings of Marines on Saipan on June 15, 1944, destroying enemy gun emplacements, troops, tanks, and trucks. On the 17th, her combat air patrol (CAP) shot down or turned back all but a handful of 47 enemy planes headed for her task group and her gunners shot down two of the three planes that did break through to attack her.\n\nGoodwin's carrier continued in providing of close ground support operations at Tinian during the end of July 1944, then turned her attention to Guam, where she gave identical aid to invading troops until mid-August that year. For his service during the Mariana Islands campaign, Goodwin was decorated with Bronze Star Medal with Combat \"V\".\n\nHe was succeeded by Captain Walter V. R. Vieweg on August 18, 1944, and appointed Chief of Staff, Carrier Division Six under Rear admiral Arthur W. Radford. 
The Gambier Bay was sunk in the Battle off Samar on October 25, 1944, during the Battle of Leyte Gulf after helping turn back a much larger attacking Japanese surface force.\n\nGoodwin served with Carrier Division Six during the Bonin Islands raids, the naval operations at Palau and took part in the Battle of Leyte Gulf and operations supporting Leyte landings in late 1944. He was later appointed Air Officer of the Philippine Sea Frontier under Rear admiral James L. Kauffman and remained with that command until the end of hostilities. For his service in the later part of World War II, Goodwin was decorated with Legion of Merit with Combat \"V\". He was also entitled to wear two Navy Presidential Unit Citations and Navy Unit Commendation.\n\nPostwar service\n\nFollowing the surrender of Japan, Goodwin assumed command of Light aircraft carrier on August 24, 1945. The ship was tasked with air missions over Japan became mercy flights over Allied prisoner-of-war camps, dropping food and medicine until the men could be rescued. She was also present at Tokyo Bay for the Japanese surrender on September 2, 1945.\n\nGoodwin returned with San Jacinto to the United States in mid-September 1945 and he was detached in January 1946. He subsequently served in the office of the Chief of Naval Operations until May that year, when he entered the instruction at National War College. Goodwin graduated in June 1947 and served on Secretary's committee for Research on Reorganization. Upon promotion to Rear admiral on April 1, 1949, Goodwin was appointed Chief of Staff and Aide to Commander-in-Chief, Atlantic Fleet under Admiral William H. P. Blandy.\n\nRevolt of the Admirals\n\nIn April 1949, the budget's cuts and proposed reorganization of the United States Armed Forces by the Secretary of Defense Louis A. Johnson launched the wave of discontent between senior commanders in the United States Navy. Johnson proposed the merging of the Marine Corps into the Army, and reduce the Navy to a convoy-escort force.\n\nGoodwin's superior officer, Admiral Blandy was call to testify before the House Committee on Armed Services and his harsh statements for the defense of the Navy, costed him his career. Goodwin shared his views and openly criticized Secretary Johnson for having power concentrated in a single civilian executive, who is an appointee of the Government and not an elected representative of the people. He also criticized aspects of defense unification which permitted the Joint Chiefs of Staff to vote on arms policies of individual services, and thus \"rob\" the branches of autonomy.\n\nThe outbreak of the Korean War in summer 1950 proved the proposal of Secretary Johnson as incorrect and he resigned in September that year. Also Secretary of the Navy, Francis P. Matthews resigned one month earlier.\n\nLater service\n\nDue to the Revolts of the admirals, Blandy was forced to retire in February 1950 and Goodwin was ordered to Newport, Rhode Island for temporary duty as Chief of Staff and Aide to the President of the Naval War College under Vice admiral Donald B. Beary in April 1950. Goodwin was detached from that assignment two months and appointed member of the General Board of the Navy. He was shortly thereafter appointed acting Navy Chief of Public Information, as the substitute for Rear Admiral Russell S. Berkey, who was relieved of illness, but returned to the General Board of the Navy in July that year. Goodwin served in that capacity until February 1951, when he relieved his Academy class, Rear admiral John P. 
Whitney as Vice Commander, Military Air Transport Service (MATS).\n\nWhile in this capacity, Goodwin served under Lieutenant general Laurence S. Kuter and was co-responsible for the logistical support of United Nations troops fighting in Korea. The MATS operated from the United States to Japan and Goodwin served in this capacity until August 1953, when he was appointed Commander Carrier Division Two. While in this assignment, he took part in the Operation Mariner, Joint Anglo-American exercise which encountered very heavy seas over a two-week period in fall 1953.\n\nGoodwin was ordered to the Philippines in May 1954 and assumed duty as Commander, U.S. Naval Forces in the Philippines with headquarters at Naval Station Sangley Point near Cavite. He held that command in the period of tensions between Taiwan and China and publicly declared shortly after his arrival, that any attack on Taiwan by the Chinese Communists on the mainland would result in US participation in the conflict. The naval fighter planes under his command also provided escort for passing commercial planes. Goodwin worked together with retired Admiral Raymond A. Spruance, then-Ambassador to the Philippines, and accompanied him during the visits to Singapore, Bangkok and Saigon in January 1955.\n\nOn December 18, 1955, Goodwin's classmate Rear admiral Albert K. Morehouse, then serving as Commander, Naval Air Forces, Continental Air Defense Command (CONAD), died of heart attack and Goodwin was ordered to CONAD headquarters in Colorado Springs, Colorado to assume Morehouse's position. While in this capacity, he was subordinated to Army General Earle E. Partridge and was responsible for the Naval and Marine Forces allocated to the command designated for the defense of the Continental United States.\n\nRetirement\n\nGoodwin retired on June 1, 1957, after 40 years of active service and was advanced to the rank of Vice admiral on the retired list for having been specially commended in combat. A week later, he was invited back to his Monroe High School (now Neville High School) and handed a diploma showing that he had been graduated with the class of 1918. He then settled in Monterey, California where he taught American history at Stevenson school and was a member of the Naval Order of the United States.\n\nVice admiral Hugh H. Goodwin died at his home on February 25, 1980, aged 79. He was survived by his wife, Eleanor with whom he had two children, a daughter Sidney and a son Hugh Jr., who graduated from the Naval Academy in June 1948, but died one year later, when the Hellcat fighter he was piloting collided with another over the Gulf of Mexico during training.\n\nDecorations\n\nHere is the ribbon bar of Vice admiral Hugh H. 
Goodwin:\n\nReferences\n\n1900 births\n1980 deaths\nPeople from Monroe, Louisiana\nMilitary personnel from Louisiana\nUnited States Naval Academy alumni\nNaval War College alumni\nUnited States Naval Aviators\nUnited States Navy personnel of World War I\nUnited States Navy World War II admirals\nUnited States Navy vice admirals\nUnited States submarine commanders\nRecipients of the Legion of Merit", "answers": ["Goodwin became a Naval aviator in January 1929."], "length": 2294, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "0e2b2bc81542b95fb86a7fe76cf11c52316937c9aac9123e"} {"input": "What is the score achieved by the authors for Track-2?", "context": "Paper Info\n\nTitle: Two-stage Pipeline for Multilingual Dialect Detection\nPublish Date: Unknown\nAuthor List: Ankit Vaidya (from Pune Institute of Computer Technology), Aditya Kane (from Pune Institute of Computer Technology)\n\nFigure\n\nFigure 1: Class distribution of dialects\nFigure 2: System diagram for dialect classification. The LID classifies the input into one of 3 languages. The sample is then further classified into dialects by language specific models.\nFigure 3: Confusion matrix of 9-way classification. Note that rows are normalized according to the number of samples in that class.\nOur complete results for Track-1 using the two-stage dialect detection pipeline. Model-* denotes the language of the models used for the experiments.\nPerformance on Track-1 validation dataset of individual models used in the two-stage pipeline. \"Lg\" stands for the language of the model used.\nComparative results of two-way classification using the finetuned (F.T.) predictions and predictions adapted from three-way classification models.\n\nAbstract\n\nDialect Identification is a crucial task for localizing various Large Language Models. This paper outlines our approach to the VarDial 2023 DSL-TL shared task. Here we have to identify three or two dialects from three languages each, which results in a 9-way classification for Track-1 and a 6-way classification for Track-2 respectively.\nOur proposed approach consists of a two-stage system and outperforms other participants' systems and previous works in this domain. We achieve a score of 58.54% for Track-1 and 85.61% for Track-2. Our codebase is available publicly 1 .\n\nIntroduction\n\nLanguage has been the primary mode of communication for humans since the pre-historic ages. Studies have explored the evolution of language and outlined mathematical models that govern the intricacies of natural language. Inevitably, as humans established civilization in various parts of the world, this language was modified by, and for, the group of people occupying that particular geographical region.\nThis gave rise to multiple national dialects of the same language. The VarDial workshop (colocated with EACL 2023) explores various dialects and variations of the same language. We participated in the Discriminating Between Similar Languages - True Labels (DSL-TL) shared task. In this task, the participants were provided with data from three languages, with each language having three varieties.\nThis shared task consisted of two tracks - Track-1 featuring nine-way classification and Track-2 featuring six-way classification. The second track included two particular national dialects of each language (e.g., 
American English and British English), and the first track had one general We ranked 1 st in both of the tracks.\nMoreover, we beat the next best submission by a margin of 4.5% in the first task and 5.6% in the second task.We were the only team to surpass the organizer baseline scores. We present our winning solution in this paper. We used an end-to-end deep learning pipeline which consisted of a language identification model and three language-specific models, one for each language.\nWe converged upon the best combination by doing an elaborate analysis of various models available. Furthermore, in this work we also analyze the performance of the pipeline as a whole and also provide an ablation study. Lastly, we provide some future directions in this area of research.\n\nRelated Work\n\nThe present literature encompasses various aspects of dialect identification. We study this from three perspectives: large language models, language identification and dialect classification problems.\n\nLarge Language Models\n\nThe success of transformers and BERT based models was inevitable since the initial boom of the transformer 2017) model. In recent years, many other architectures like RoBERTa and ELECTRA have further pushed the state-of-the-art in this domain. Moreover, autoregressive models like GPT and GPT-2 have also shown their prowess.\nMultilingual versions of RoBERTA, namely XLM-RoBERTa are also available. Lastly, language specific models like Spanish BERT (la Rosa y Eduardo G. Ponferrada y Manu Romero y Paulo Villegas y Pablo González de Prado Salas y María Grandury, 2022) and Portuguese BERT are available as well. Our winning solution makes use of these large language models trained on specific languages.\n\nLanguage Identification Models\n\nMany multilingual language identification models have been developed in order to classify the language of the input sentence beforehand. Even though the initial works used n-gram models and generative mixture models or even conditional random fields and other classical machine learning methods like naive bayes , modern methods have shifted to the use of deep learning for language identification .\nRecent works have mainly focused on deep learning based language identification, where handling codemixed data is a big challenge in the domain. For our experiments, we use a version of XLM-RoBERTa finetuned on a language identification dataset 2 . This model has near-perfect test accuracy of 99.6%.\n\nDialect Classification\n\nDialect classification has been previously solved using statistical methods like Gaussian Mixture Models and Frame Selection Decoding or Support Vector Machines (SVM) . It has been explored relatively sparsely, mostly in the case for local languages . Deep learning approaches have been explored in previous editions of the VarDial workshop shared tasks and otherwise .\nDialect classification was also explored previously as a part of other shared tasks . We want to stress that given the multilingual nature of the dataset, using the present methods directly was not an option. In our work, although we take inspiration from the previous works, we propose a novel system that surpasses the performance of the previous systems by a large margin.\n\nData\n\nThe dataset We observed that the class PT-BR had the most number of samples (2,724) and the class EN had the least number of samples (349), and thus the imbalance ratio was almost 1:8. We have illustrated the data distribution in Figure . 
We tried to mitigate this imbalance using over-sampling and weighted sampling methods.\nHowever, the improved data sampling method did not affect the performance.\n\nSystem Description\n\nThis was a problem of multi-class classification having 9 classes for Track-1 and 6 classes for Track-2. The samples were belonging to 3 languages having 3 varieties each, so the classification pipeline was made in 2 stages. The Language Identification (LID) model which is the first stage classifies the sentence into 3 languages: English (EN), Spanish (ES) and Portuguese (PT).\nThe LID is a pretrained XLM-RoBERTa that is fine-tuned for the task of language identification. It is able to classify the input sentence into 20 languages. We classify and separate the samples according to their language. The samples corresponding to the specific languages are then fed into the language specific models for dialect identification.\nFor dialect identification we have used models like BERT and RoBERTa with a linear layer connected to the pooler output of the models. Then fine-tuning is done on the models for dialect identification using the samples corresponding to the specific languages. For the task of dialect identification we experimented with several pretrained models like XLM-RoBERTa, BERT, ELECTRA, GPT-2 and RoBERTa.\nAll models were fine-tuned for 20 epochs with a learning rate of 1e-6 and weight decay 1e-6 with a batch size of 8. The best performing model checkpoint was chosen according to the epoch-wise validation macro-F1 score. 5 Experiments and Results\n\nExperiments using Large Language Models\n\nFor the task of Dialect Identification we have tried various language specific models like XLM-RoBERTa, BERT, ELECTRA, RoBERTa and GPT- 2. The base variant of all these models were used and all the models were used through the Hugging-Face library. The pooler output of these models was passed through a linear layer and the models were fine-tuned.\nFirst, we experimented with different models for Track-1. All the models were trained for 20 epochs with learning rate 1e-6, weight decay 1e-6 and a batch size of 8. We used XLM-RoBERTa as the baseline for all 3 languages. The best performing models for the English language were RoBERTa and BERT whereas GPT-2 was the worst performing.\nSimilarly the language specific versions of RoBERTa and BERT performed well for the Spanish and Portuguese respectively. Overall the worst performing model was GPT-2 across all 3 languages. The validation F1 scores are present in Table . The two best-performing models for every language were chosen for Track-2.\nThe same procedure as specified above was used and the F1 scores are present in Table . The train and validation F1 scores for 2-class classification are higher for all models as compared to the F1 score of the same models for 3-class classification. This was mainly due to the poor representation and accuracy of classification of the third class.\nWe observed symptoms of overfitting in all models after 12-15 epochs and the best validation F1 score was obtained in the range of 4-8 epochs.\n\nLID experiments\n\nThe pipeline for dialect identification is divided into two parts as the sentences in the dataset belong to different languages. The stages are described in Section 4. 
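For concreteness, the two-stage inference described above can be sketched with the HuggingFace `pipeline` API. This is a minimal illustration only: the LID checkpoint named below is an assumed publicly available XLM-RoBERTa language-identification model, and the per-language dialect classifier paths are hypothetical placeholders standing in for the fine-tuned models described in this section.

```python
# Minimal sketch of the two-stage dialect detection pipeline (not the authors' code).
from transformers import pipeline

# Stage 1: language identification. An off-the-shelf XLM-RoBERTa LID model is assumed;
# the paper only states that a fine-tuned XLM-RoBERTa with ~99.6% test accuracy is used.
lid = pipeline("text-classification",
               model="papluca/xlm-roberta-base-language-detection")

# Stage 2: one fine-tuned dialect classifier per language (local paths are placeholders).
dialect_models = {
    "en": pipeline("text-classification", model="./roberta-base-english-dialects"),
    "es": pipeline("text-classification", model="./bert-base-spanish-dialects"),
    "pt": pipeline("text-classification", model="./bert-base-portuguese-dialects"),
}

def classify_dialect(sentence: str) -> str:
    lang = lid(sentence)[0]["label"]        # e.g. "en", "es", "pt"
    if lang not in dialect_models:          # language outside the shared task
        return "OTHER"
    return dialect_models[lang](sentence)[0]["label"]   # e.g. "EN-GB", "PT-BR"
```

Routing every sentence through a single LID call before the language-specific heads is what keeps this design scalable: supporting a new language only requires adding one entry to `dialect_models`.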
The XLM-RoBERTa we have used for language classification has a test accuracy of 99.6% meaning it correctly classifies all input sentences and hence, can be considered as a perfect classifier.\nFor the final pipeline we experimented using the two best performing models for each language in Track-1 and Track-2. For both the tracks we experimented with all 8 (2 3 ) possible combinations of models and calculated the validation F1 score for the combined validation dataset which had sentences belonging to all languages.\nThe validation scores for Track-1 and Track-2 are shown in Table and Table respectively. For both the tracks, the three pipelines with the best validation F1 scores were chosen for submission.\n\nUsing 3-way classifier as a 2-way classifier\n\nIn Track-1, participants are expected to train a classifier which classifies amongst 9 classes, and in Track-2, participants are expected to train a classifier which classifies amongst 6 classes. These 6 classes are a proper subset of the 9 classes from Track-1. Thus, an intuitive baseline for Track-2 is to use the model finetuned for Track-1, whilst considering only the relevant classes for the latter task.\nThe classes EN , ES and P T , i.e. the classes without any national dialect associated with them are not included in Track-2 as compared to Track-1. Thus, we calculate the predictions for the Track-2 validation dataset using the models for Track-1 and exclude the metrics for Track-1 specific classes to get the metrics for this \"adapted\" 2-way classification.\nWe show the results of this experiment in Table and observe that, as expected, the adapted 2-way classification performs worse compared to the explicitly finetuned variant.\n\nResults for Track-1 and Track-2\n\nWe now present our experiments and their performance for both tracks. Our experiments for Track-1 are described in Table and our experiments for Track-2 are described in Table . The participants were allowed three submissions for evaluation on the test set, so we submitted predictions using the three systems which performed the best on the validation set.\nAs mentioned in Section 5.2, we performed 2 3 , i.e. a total of 8 experiments using the two best models for each language. We observed that RoBERTa base on English, Spanish BERT base on Spanish and Portuguese BERT base performed the best on the testing set for Track-1. The same combination, with RoBERTa base for English, worked best for Track-2.\nAll of our submissions were the top submissions for each track, which surpassed the next best competitors by a margin of 4.5% and 5.6% for Track-1 and Track-2 respectively.\n\nAblation of best submissions\n\nWe hereby make some observations of our submissions and other experiments. To assist this, we plot the confusion matrices of our best submissions for Track-1 and Track-2 in Figures respectively. Note that these confusion matrices have their rows (i.e. true labels axes) normalized according to the number of samples in the class.\nHere are observations from our experiments: 1. BERT-based models outperform other models across all languages: We observe that BERT-based models outperform ELECTRA-based and GPT-2-based models, as shown in Table . We speculate this is because of the inherent architecture of BERT, which combines semantic learning with knowledge retention.\nThis combination of traits is particularly useful for this task. 2. 
Common labels perform the worst across all languages: We observe that the common labels EN , ES and P T perform the worst, both in the individual as well as the two-stage setup. We hypothesize this is because of the absence of dialect specific words, or words that are specific to the geographical origin of the national dialect (for example, \"Yankees\" for EN-US and \"Oxford\" for EN-GB).\n3. English models work better than models of other languages: It can be noted from Figures 4 and 3 that the English models have the best performance across all classes. This can be attributed to two reasons: absence of national dialect specific words and lesser pretraining data in the case of Portuguese.\n4. British English is most correctly classified class: We can observe that the Spanish or Portuguese models make equal number of mistakes in the case of either national dialect, in the case of Track-2 (see Figure ). However, in the case of English, the label EN − GB is correctly classified for more than 95% of the cases.\nWe speculate this is because British English involves slightly distinctive grammar and semantics, which help the model separate it from other classes. 5. The proposed 2-step method is scalable for multiple language dialect classification: We can strongly assert that the novel 2-step deep learning method for multilingual dialect classification is a scalable method for the task due to two specific reasons: firstly, the multilingual models (like XLM-RoBERTa) might not have the vocabulary as well as the learning capabilities to learn the minute differences between individual dialects.\nSecondly, this system can be quickly expanded for a new language by simply adding a language specific dialect classifier, provided the language identification model supports that particular language.\n\nConclusion\n\nIn this paper we propose a two-stage classification pipeline for dialect identification for multilingual corpora. We conduct thorough ablations on this setup and provide valuable insights. We foresee multiple future directions for this work. The first is to expand this work to many languages and dialects.\nSecondly, it is a worthwhile research direction to distill this multi-model setup into a single model with multiple prediction heads. The obvious limitation of this system is the excessive memory consumption due to the usage of language specific models. For low resource languages this system is difficult to train and scale.\nWe hope that these problems will be addressed by researchers in future works.", "answers": ["85.61%."], "length": 2395, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "c6d0de0391867185e00e945075301d33fc1c5c05d2a7d082"} {"input": "What are the titles of one of Kam W. Leong's publications in Journal of Controlled Release?", "context": "Publications of Kam W. Leong\nPublications of Kam W. Leong :chronological alphabetical combined bibtex listing:\nK.W. Leong, Synthetic mast-cell granules as adjuvants to promote and polarize immunity in lymph nodes (2013) [PDF]\nK.W. Leong, Tuning Physical Properties of Nanocomplexes through Microfluidics-Assisted Confinement (2013) [PDF]\nK.W. Leong, Nucleic acid scavengers inhibit thrombosis without increasing bleeding (2013) [PDF]\nK.W. Leong, Nanotopography as modulator of human mesenchymal stem cell function (2013) [PDF]\nK.W. Leong, Efficacy of engineered FVIII-producing skeletal muscle enhanced by growth factor-releasing co-axial electrospun fibers (2013) [PDF]\nZhao, F. and Veldhuis, J. J. and Duan, Y. J. 
and Yang, Y. and Christoforou, N. and Ma, T. and Leong, K. W., Low Oxygen Tension and Synthetic Nanogratings Improve the Uniformity and Stemness of Human Mesenchymal Stem Cell Layer, Molecular Therapy, vol. 18 no. 5 (2010), pp. 1010-1018 [abs]\nKadiyala, I. and Loo, Y. H. and Roy, K. and Rice, J. and Leong, K. W., Transport of chitosan-DNA nanoparticles in human intestinal M-cell model versus normal intestinal enterocytes, European Journal of Pharmaceutical Sciences, vol. 39 no. 1-3 (2010), pp. 103-109 [abs]\nWang, Y. and Quek, C. H. and Leong, K.W. and Fang, J., Synthesis and Cytotoxity of Luminescent InP Quantum Dots, MRS Symposium Proceeding, vol. 1241E (2010)\nJiang, X. and Zheng, Y. and Chen, H. H. and Leong, K. W. and Wang, T. H. and Mao, H. Q., Dual-Sensitive Micellar Nanoparticles Regulate DNA Unpacking and Enhance Gene-Delivery Efficiency, Adv Mater (2010)\nHo, Y. P. and Leong, K. W., Quantum dot-based theranostics, Nanoscale, vol. 2 no. 1 (2010), pp. 60-68 [PDF] [abs]\nPhua, K. and Leong, K. W., Microscale oral delivery devices incorporating nanoparticles, Nanomedicine, vol. 5 no. 2 (2010), pp. 161-163\nGrigsby, C. L. and Leong, K. W., Balancing protection and release of DNA: tools to address a bottleneck of non-viral gene delivery, Journal of the Royal Society Interface, vol. 7 (2010), pp. S67-S82 [abs]\nChalut, K. J. and Kulangara, K. and Giacomelli, M. G. and Wax, A. and Leong, K. W., Deformation of stem cell nuclei by nanotopographical cues, Soft Matter, vol. 6 no. 8 (2010), pp. 1675-1681 [abs]\nChen, S. and Jones, J. A. and Xu, Y. and Low, H. Y. and Anderson, J. M. and Leong, K. W., Characterization of topographical effects on macrophage behavior in a foreign body response model, Biomaterials, vol. 31 no. 13 (2010), pp. 3479-91 [PDF] [abs]\nYim, E. K. F. and Darling, E. M. and Kulangara, K. and Guilak, F. and Leong, K. W., Nanotopography-induced changes in focal adhesions, cytoskeletal organization, and mechanical properties of human mesenchymal stem cells, Biomaterials, vol. 31 no. 6 (2010), pp. 1299-1306 [PDF] [abs]\nYow, S. Z. and Quek, C. H. and Yim, E. K. F. and Lim, C. T. and Leong, K. W., Collagen-based fibrous scaffold for spatial organization of encapsulated and seeded human mesenchymal stem cells, Biomaterials, vol. 30 no. 6 (2009), pp. 1133-1142 [abs]\nKunder, C. A. and John, A. L. S. and Li, G. J. and Leong, K. W. and Berwin, B. and Staats, H. F. and Abraham, S. N., Mast cell-derived particles deliver peripheral signals to remote lymph nodes, Journal of Experimental Medicine, vol. 206 no. 11 (2009), pp. 2455-2467 [abs]\nHo, Y.P. and Chen, H.H. and Leong, K.W. and Wang, T.H., Combining QD-FRET and microfluidics to monitor DNA nanocomplex self-assembly in real-time, J Vis Exp (2009), pp. 1432\nKulangara, K. and Leong, K. W., Substrate topography shapes cell function, Soft Matter, vol. 5 no. 21 (2009), pp. 4072-4076 [abs]\nChakraborty, S. and Liao, I. C. and Adler, A. and Leong, K. W., Electrohydrodynamics: A facile technique to fabricate drug delivery systems, Advanced Drug Delivery Reviews, vol. 61 no. 12 (2009), pp. 1043-1054 [abs]\nOney, S. and Lam, R. T. S. and Bompiani, K. M. and Blake, C. M. and Quick, G. and Heidel, J. D. and Liu, J. Y. C. and Mack, B. C. and Davis, M. E. and Leong, K. W. and Sullenger, B. A., Development of universal antidotes to control aptamer activity, Nature Medicine, vol. 15 no. 10 (2009), pp. 1224-1228 [PDF] [abs]\nChen, H. H. and Ho, Y. P. and Jiang, X. and Mao, H. Q. and Wang, T. H. and Leong, K. 
W., Simultaneous non-invasive analysis of DNA condensation and stability by two-step QD-FRET, Nano Today, vol. 4 no. 2 (2009), pp. 125-134 [PDF] [abs]\nHo, Y. P. and Chen, H. H. and Leong, K. W. and Wang, T. H., The convergence of quantum-dot-mediated fluorescence resonance energy transfer and microfluidics for monitoring DNA polyplex self-assembly in real time, Nanotechnology, vol. 20 no. 9 (2009), pp. - [abs]\nLiao, I. C. and Chen, S. L. and Liu, J. B. and Leong, K. W., Sustained viral gene delivery through core-shell fibers, Journal of Controlled Release, vol. 139 no. 1 (2009), pp. 48-55 [abs]\nLou, Y. L. and Peng, Y. S. and Chen, B. H. and Wang, L. F. and Leong, K. W., Poly(ethylene imine)-g-chitosan using EX-810 as a spacer for nonviral gene delivery vectors, Journal of Biomedical Materials Research Part A, vol. 88A no. 4 (2009), pp. 1058-1068 [abs]\nChew, S. Y. and Mi, R. and Hoke, A. and Leong, K. W., The effect of the alignment of electrospun fibrous scaffolds on Schwann cell maturation, Biomaterials, vol. 29 no. 6 (2008), pp. 653-61 [abs]\nChen, H. H. and Ho, Y. P. and Jiang, X. and Mao, H. Q. and Wang, T. H. and Leong, K. W., Quantitative comparison of intracellular unpacking kinetics of polyplexes by a model constructed from quantum Dot-FRET, Molecular Therapy, vol. 16 no. 2 (2008), pp. 324-332 [abs]\nChan, B. P. and Leong, K. W., Scaffolding in tissue engineering: general approaches and tissue-specific considerations, European Spine Journal, vol. 17 (2008), pp. S467-S479 [abs]\nTsurushima, H. and Yuan, X. and Dillehay, L. E. and Leong, K. W., Radiation-inducible caspase-8 gene therapy for malignant brain tumors, International Journal of Radiation Oncology Biology Physics, vol. 71 no. 2 (2008), pp. 517-525 [abs]\nBowman, K. and Sarkar, R. and Raut, S. and Leong, K. W., Gene transfer to hemophilia A mice via oral delivery of FVIII-chitosan nanoparticles, Journal of Controlled Release, vol. 132 no. 3 (2008), pp. 252-259 [abs]\nChoi, J. S. and Leong, K. W. and Yoo, H. S., In vivo wound healing of diabetic ulcers using electrospun nanofibers immobilized with human epidermal growth factor (EGF), Biomaterials, vol. 29 no. 5 (2008), pp. 587-96 [abs]\nLiao, I. C. and Liu, J. B. and Bursac, N. and Leong, K. W., Effect of Electromechanical Stimulation on the Maturation of Myotubes on Aligned Electrospun Fibers, Cellular and Molecular Bioengineering, vol. 1 no. 2-3 (2008), pp. 133-145 [abs]\nProw, T. W. and Bhutto, I. and Kim, S. Y. and Grebe, R. and Merges, C. and McLeod, D. S. and Uno, K. and Mennon, M. and Rodriguez, L. and Leong, K. and Lutty, G. A., Ocular nanoparticle toxicity and transfection of the retina and retinal pigment epithelium, Nanomedicine-Nanotechnology Biology and Medicine, vol. 4 no. 4 (2008), pp. 340-349 [abs]\nTan, S. C. W. and Pan, W. X. and Ma, G. and Cai, N. and Leong, K. W. and Liao, K., Viscoelastic behaviour of human mesenchymal stem cells, Bmc Cell Biology, vol. 9 (2008), pp. - [abs]\nChalut, K. J. and Chen, S. and Finan, J. D. and Giacomelli, M. G. and Guilak, F. and Leong, K. W. and Wax, A., Label-free, high-throughput measurements of dynamic changes in cell nuclei using angle-resolved low coherence interferometry, Biophysical Journal, vol. 94 no. 12 (2008), pp. 4948-4956 [abs]\nHaider, M. and Cappello, J. and Ghandehari, H. and Leong, K. W., In vitro chondrogenesis of mesenchymal stem cells in recombinant silk-elastinlike hydrogels, Pharmaceutical Research, vol. 25 no. 3 (2008), pp. 692-699 [abs]\nN. Bursac and Y. H. Loo and K. Leong and L. 
Tung, Novel anisotropic engineered cardiac tissues: Studies of electrical propagation, Biochemical And Biophysical Research Communications, vol. 361 no. 4 (October, 2007), pp. 847 -- 853, ISSN 0006-291X [abs]\nChen, Beiyi and Dang, Jiyoung and Tan, Tuan Lin and Fang, Ning and Chen, Wei Ning and Leong, Kam W. and Chan, Vincent, Dynamics of smooth muscle cell deadhesion from thermosensitive hydroxybutyl chitosan, Biomaterials, vol. 28 no. 8 (2007), pp. 1503 - 1514 [027] [abs]\nChen, B. and Dang, J. and Tan, T. L. and Fang, N. and Chen, W. N. and Leong, K. W. and Chan, V., Dynamics of smooth muscle cell deadhesion from thermosensitive hydroxybutyl chitosan, Biomaterials, vol. 28 no. 8 (2007), pp. 1503-14 [abs]\nPark, D. J. and Choi, J. H. and Leong, K. W. and Kwon, J. W. and Eun, H. S., Tissue-engineered bone formation with gene transfer and mesenchymal stem cells in a minimally invasive technique, Laryngoscope, vol. 117 no. 7 (2007), pp. 1267-71 [abs]\nTsurushima, H. and Yuan, X. and Dillehay, L. E. and Leong, K. W., Radioresponsive tumor necrosis factor-related apoptosisinducing ligand (TRAIL) gene therapy for malignant brain tumors, Cancer Gene Therapy, vol. 14 no. 8 (2007), pp. 706-716 [abs]\nChai, C. and Leong, K. W., Biomaterials approach to expand and direct differentiation of stem cells, Molecular Therapy, vol. 15 no. 3 (2007), pp. 467-480 [abs]\nZhang, Y. and Chai, C. and Jiang, X. S. and Teoh, S. H. and Leong, K. W., Fibronectin immobilized by covalent conjugation or physical adsorption shows different bioactivity on aminated-PET, Materials Science & Engineering C-Biomimetic and Supramolecular Systems, vol. 27 no. 2 (2007), pp. 213-219 [abs]\nSong, R. J. and Liu, S. Q. and Leong, K. W., Effects of MIP-1 alpha, MIP-3 alpha, and MIP-3 beta on the induction of HIV Gag-specific immune response with DNA vaccines, Molecular Therapy, vol. 15 no. 5 (2007), pp. 1007-1015 [abs]\nYim, E. K. F. and Liao, I. C. and Leong, K. W., Tissue compatibility of interfacial polyelectrolyte complexation fibrous scaffold: Evaluation of blood compatibility and biocompatibility, Tissue Engineering, vol. 13 no. 2 (2007), pp. 423-433 [abs]\nSharma, B. and Williams, C. G. and Kim, T. K. and Sun, D. N. and Malik, A. and Khan, M. and Leong, K. and Elisseeff, J. H., Designing zonal organization into tissue-engineered cartilage, Tissue Engineering, vol. 13 no. 2 (2007), pp. 405-414 [abs]\nChua, K. N. and Tang, Y. N. and Quek, C. H. and Ramakrishna, S. and Leong, K. W. and Mao, H. Q., A dual-functional fibrous scaffold enhances P450 activity of cultured primary rat hepatocytes, Acta Biomaterialia, vol. 3 no. 5 (2007), pp. 643-650 [abs]\nChua, K. N. and Chai, C. and Lee, P. C. and Ramakrishna, S. and Leong, K. W. and Mao, H. Q., Functional nanofiber scaffolds with different spacers modulate adhesion and expansion of cryopreserved umbilical cord blood hematopoietic stem/progenitor cells, Experimental Hematology, vol. 35 no. 5 (2007), pp. 771-781 [abs]\nYim, E. K. F. and Pang, S. W. and Leong, K. W., Synthetic nanostructures inducing differentiation of human mesenchymal stem cells into neuronal lineage, Experimental Cell Research, vol. 313 no. 9 (2007), pp. 1820-1829 [abs]\nChew, S. Y. and Mi, R. F. and Hoke, A. and Leong, K. W., Aligned protein-polymer composite fibers enhance nerve regeneration: A potential tissue-engineering platform, Advanced Functional Materials, vol. 17 no. 8 (2007), pp. 1288-1296 [abs]\nTsurushima, H. and Yuan, X. and Dillehay, L. E. and Leong, K. 
W., Radio-responsive gene therapy for malignant glioma cells without the radiosensitive promoter: Caspase-3 gene therapy combined with radiation, Cancer Letters, vol. 246 no. 1-2 (2007), pp. 318-323 [abs]\nDang, J.M. and Leong, K. W., Myogenic induction of aligned mesenchymal stem cell sheets by culture on thermally responsive electrospun nanofibers, Advanced Materials, vol. 19 no. 19 (2007), pp. 2775-2779\nDai, H. and Jiang, X. and Tan, G. C. and Chen, Y. and Torbenson, M. and Leong, K. W. and Mao, H. Q., Chitosan-DNA nanoparticles delivered by intrabiliary infusion enhance liver-targeted gene delivery, International Journal of Nanomedicine, vol. 1 no. 4 (2006), pp. 507-522 [abs]\nLe Visage, C. and Kim, S. W. and Tateno, K. and Sieber, A. N. and Kostuik, J. P. and Leong, K. W., Interaction of human mesenchymal stem cells with disc cells - Changes in extracellular matrix biosynthesis, Spine, vol. 31 no. 18 (2006), pp. 2036-2042\nOng, S. Y. and Dai, H. and Leong, K. W., Inducing hepatic differentiation of human mesenchymal stem cells in pellet culture, Biomaterials, vol. 27 no. 22 (2006), pp. 4087-4097\nBright, C. and Park, Y. S. and Sieber, A. N. and Kostuik, J. P. and Leong, K. W., In vivo evaluation of plasmid DNA encoding OP-1 protein for spine fusion, Spine, vol. 31 no. 19 (2006), pp. 2163-2172\nYim, E. K. and Wan, A. C. and Le Visage, C. and Liao, I. C. and Leong, K. W., Proliferation and differentiation of human mesenchymal stem cell encapsulated in polyelectrolyte complexation fibrous scaffold, Biomaterials, vol. 27 no. 36 (2006), pp. 6111-22 [abs]\nLuong-Van, E. and Grondahl, L. and Chua, K. N. and Leong, K. W. and Nurcombe, V. and Cool, S. M., Controlled release of heparin from poly(epsilon-caprolactone) electrospun fibers, Biomaterials, vol. 27 no. 9 (2006), pp. 2042-2050\nDang, J. M. and Leong, K. W., Natural polymers for gene delivery and tissue engineering, Advanced Drug Delivery Reviews, vol. 58 no. 4 (2006), pp. 487-499\nLi, J. and Li, X. and Ni, X. P. and Wang, X. and Li, H. Z. and Leong, K. W., Self-assembled supramolecular hydrogels formed by biodegradable PEO-PHB-PEO triblock copolymers and alpha-cyclodextrin for controlled drug delivery, Biomaterials, vol. 27 no. 22 (2006), pp. 4132-4140\nYim, E. K. F. and Wen, J. and Leong, K. W., Enhanced extracellular matrix production and differentiation of human embryonic germ cell derivatives in biodegradable poly(epsilon-caprolactone-co-ethyl ethylene phosphate) scaffold, Acta Biomaterialia, vol. 2 no. 4 (2006), pp. 365-376\nChew, S. Y. and Hufnagel, T. C. and Lim, C. T. and Leong, K. W., Mechanical properties of single electrospun drug-encapsulated nanofibres, Nanotechnology, vol. 17 no. 15 (2006), pp. 3880-3891\nZhang, Y. and Chai, C. and Jiang, X. S. and Teoh, S. H. and Leong, K. W., Co-culture of umbilical cord blood CD34(+) cells with human mesenchymal stem cells, Tissue Engineering, vol. 12 no. 8", "answers": ["Sustained viral gene delivery through core-shell fibers and Gene transfer to hemophilia A mice via oral delivery of FVIII-chitosan nanoparticles."], "length": 2345, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "edbbdb9727c3a51310d24895d08c5a90673cb9d514770878"} {"input": "What was the Buckeyes' record in their first game of the season?", "context": "The 1951 Ohio State Buckeyes baseball team represented the Ohio State University in the 1951 NCAA baseball season. 
The head coach was Marty Karow, serving his 1st year.\n\nThe Buckeyes lost in the College World Series, defeated by the Texas A&M Aggies.\n\nRoster\n\nSchedule \n\n! style=\"\" | Regular Season\n|- valign=\"top\" \n\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 1 || March 16 || at || Unknown • San Antonio, Texas || 15–3 || 1–0 || 0–0\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 2 || March 17 || at B. A. M. C. || Unknown • San Antonio, Texas || 7–8 || 1–1 || 0–0\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 3 || March 19 || at || Clark Field • Austin, Texas || 0–8 || 1–2 || 0–0\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 4 || March 20 || at Texas || Clark Field • Austin, Texas || 3–4 || 1–3 || 0–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 5 || March 21 || at || Unknown • Houston, Texas || 14–6 || 2–3 || 0–0\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 6 || March 22 || at Rice || Unknown • Houston, Texas || 2–3 || 2–4 || 0–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 7 || March 23 || at || Unknown • Fort Worth, Texas || 4–2 || 3–4 || 0–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 8 || March 24 || at TCU || Unknown • Fort Worth, Texas || 7–3 || 4–4 || 0–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 9 || March 24 || at || Unknown • St. Louis, Missouri || 10–4 || 5–4 || 0–0\n|-\n\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 10 || April 6 || || Varsity Diamond • Columbus, Ohio || 2–0 || 6–4 || 0–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 11 || April 7 || || Varsity Diamond • Columbus, Ohio || 15–1 || 7–4 || 0–0\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 12 || April 14 || || Varsity Diamond • Columbus, Ohio || 0–1 || 7–5 || 0–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 13 || April 20 || || Varsity Diamond • Columbus, Ohio || 10–9 || 8–5 || 1–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 14 || April 21 || Minnesota || Varsity Diamond • Columbus, Ohio || 7–0 || 9–5 || 2–0\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 15 || April 24 || at || Unknown • Oxford, Ohio || 3–4 || 9–6 || 2–0\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 16 || April 27 || at || Hyames Field • Kalamazoo, Michigan || 2–3 || 9–7 || 2–0\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 17 || April 28 || at Western Michigan || Hyames Field • Kalamazoo, Michigan || 5–7 || 9–8 || 2–0\n|-\n\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 18 || May 1 || at || Unknown • Athens, Ohio || 7–6 || 10–8 || 2–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 19 || May 4 || || Varsity Diamond • Columbus, Ohio || 12–6 || 11–8 || 3–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 20 || May 5 || Purdue || Varsity Diamond • Columbus, Ohio || 14–4 || 12–8 || 4–0\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 21 || May 8 || || Varsity Diamond • Columbus, Ohio || 6–8 || 12–9 || 4–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 22 || May 9 || at Dayton || Unknown • Dayton, Ohio || 11–2 || 13–9 || 4–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 23 || May 12 || || Varsity Diamond • Columbus, Ohio || 6–5 || 14–9 || 5–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 24 || May 12 || Indiana || Varsity Diamond • Columbus, Ohio || 5–2 || 15–9 || 6–0\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 25 || May 15 || Ohio || Varsity Diamond • Columbus, Ohio || 6–0 || 16–9 || 6–0\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 26 || May 18 || at || Northwestern Park • Evanston, Illinois || 1–3 || 16–10 || 6–1\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 27 || May 19 || at Northwestern || Northwestern Park • Evanston, Illinois || 10–3 || 17–10 || 7–1\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 28 
|| May 22 || at Cincinnati || Carson Field • Cincinnati, Ohio || 8–4 || 18–10 || 7–1\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 29 || May 25 || || Varsity Diamond • Columbus, Ohio || 4–1 || 19–10 || 8–1\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 30 || May 25 || Michigan || Varsity Diamond • Columbus, Ohio || 3–6 || 19–11 || 8–2\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 31 || May 30 || Miami (OH) || Varsity Diamond • Columbus, Ohio || 3–4 || 19–12 || 8–2\n|-\n\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 32 || June 1 || at || Old College Field • East Lansing, Michigan || 8–0 || 20–12 || 9–2\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 33 || June 2 || at Michigan State || Old College Field • East Lansing, Michigan || 9–8 || 21–12 || 10–2\n|-\n\n|-\n|-\n! style=\"\" | Postseason\n|- valign=\"top\"\n\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 34 || June 8 || Western Michigan || Varsity Diamond • Columbus, Ohio || 1–0 || 22–12 || 10–2\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 35 || June 8 || Western Michigan || Varsity Diamond • Columbus, Ohio || 2–4 || 22–13 || 10–2\n|- align=\"center\" bgcolor=\"#ccffcc\"\n| 36 || June 9 || Western Michigan || Varsity Diamond • Columbus, Ohio || 3–2 || 23–13 || 10–2\n|-\n\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 37 || June 13 || Oklahoma || Omaha Municipal Stadium • Omaha, Nebraska || 8–9 || 23–14 || 10–2\n|- align=\"center\" bgcolor=\"#ffcccc\"\n| 38 || June 13 || Texas A&M || Omaha Municipal Stadium • Omaha, Nebraska || 2–3 || 23–15 || 10–2\n|-\n\nAwards and honors \nDick Hauck\n First Team All-Big Ten\n\nStewart Hein\n First Team All-Big Ten\n\nReferences \n\nOhio State Buckeyes baseball seasons\nOhio State Buckeyes baseball\nBig Ten Conference baseball champion seasons\nOhio State\nCollege World Series seasons", "answers": ["They won their first game with a score of 15-3."], "length": 972, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "ed61bdde19a3446389e989c06ab4209f464f9484d42dbd1c"} {"input": "When did Simon English become the leader of the National Party?", "context": "Sir Simon William English (born 30 December 1961) is a New Zealand former National Party politician who served as the 39th prime minister of New Zealand from 2016 to 2017. He had previously served as the 17th deputy prime minister of New Zealand and minister of finance from 2008 to 2016 under John Key and the Fifth National Government.\n\nA farmer and public servant before entering politics, English was elected to the New Zealand Parliament in as the National Party's candidate in the Wallace electorate. He was elevated to Cabinet in 1996 and in 1999 was made minister of finance, although he served for less than a year due to his party's loss at the 1999 general election. In October 2001, English replaced Jenny Shipley as the leader of the National Party (and consequently as Leader of the Opposition). He led the party to its worst defeat at the 2002 general election, and as a consequence, in October 2003 he was replaced as leader by Don Brash.\n\nIn November 2006, after Brash's resignation, English became deputy leader under John Key. After National's victory at the 2008 general election, he became deputy prime minister and was also made minister of finance for the second time. Under English's direction New Zealand's economy maintained steady growth during National's three terms of government. 
He became a list-only MP after stepping down as an electorate MP at the 2014 general election.\n\nJohn Key resigned as leader of the National Party and prime minister in December 2016. English won the resulting leadership election unopposed and was sworn in as prime minister on 12 December 2016. His tenure was only ten months, and included a three-month election campaign. In the 2017 general election, National won the largest number of seats but fell short of a majority. The parties holding the balance of power declined to support the existing government, and English was subsequently replaced as prime minister by Jacinda Ardern, leader of the Labour Party. English initially continued on as Leader of the Opposition, but resigned as leader of the National Party on 27 February 2018 and left parliament two weeks later.\n\nEarly life\nEnglish was born on 30 December 1961 at Lumsden Maternity Centre in Lumsden. He is the eleventh of twelve children of Mervyn English and Norah (née O'Brien) English. His parents purchased Rosedale, a mixed sheep and cropping farm in Dipton, Southland from Mervyn's uncle, Vincent English, a bachelor, in 1944. English was born in the maternity unit at Lumsden.\n\nEnglish attended St Thomas's School in Winton, then boarded at St. Patrick's College in Upper Hutt, where he became head boy. He played in the first XV of the school's rugby team. English went on to study commerce at the University of Otago, where he was a resident at Selwyn College, and then completed an honours degree in English literature at Victoria University of Wellington.\n\nAfter finishing his studies, English returned to Dipton and farmed for a few years. From 1987 to 1989, he worked in Wellington as a policy analyst for the New Zealand Treasury, at a time when the free market policies favoured by Labour's finance minister Roger Douglas (known collectively as \"Rogernomics\") were being implemented.\n\nEnglish joined the National Party in 1980, while at Victoria University. He served for a period as chairman of the Southland branch of the Young Nationals, and became a member of the Wallace electorate committee. After moving to Wellington, he served for periods on the Island Bay and Miramar electorate committees, respectively.\n\nFourth National Government (1990–1999)\n\nAt the 1990 general election, English stood as the National candidate in Wallace, replacing the retiring Derek Angus, and was elected with a large majority. He would hold this seat, renamed Clutha-Southland in 1996, until 2014. He and three other newly elected National MPs (Tony Ryall, Nick Smith, and Roger Sowry) were soon identified as rising stars in New Zealand politics, and at various points were dubbed the \"brat pack\", the \"gang of four\", and the \"young Turks\". In his first term in parliament, English chaired a select committee into social services. He was made a parliamentary under-secretary in 1993, serving under the Minister of Health.\n\nFirst period in cabinet (1996–1999)\nIn early 1996, English was elevated to cabinet by Prime Minister Jim Bolger, becoming the Minister for Crown Health Enterprises and Associate Minister of Education (to Wyatt Creech). He was 34 at the time, becoming the cabinet's youngest member. After the 1996 general election, the National Party was forced into a coalition with New Zealand First to retain government. In the resulting cabinet reshuffle, English emerged as Minister of Health. 
However, as a condition of the coalition agreement, NZ First's Neil Kirton (a first-term MP) was made Associate Minister of Health, effectively becoming English's deputy. This arrangement was described in the press as a \"shotgun marriage\", and there were frequent differences of opinion between the two ministers. After their relationship became unworkable, Kirton was sacked from the role in August 1997, with the agreement of NZ First leader Winston Peters.\n\nAs Minister of Health, English was responsible for continuing the reforms to the public health system that National had begun after the 1990 general election. The reforms were unpopular, and health was perceived as one of the government's weaknesses, with the health portfolio consequently being viewed as a challenge. English believed that the unpopularity of the reforms was in part due to a failure in messaging, and encouraged his National colleagues to avoid bureaucratic and money-focused language (such as references to \"balance sheets\" and \"user charges\") and instead talk about the improvements to services the government's reforms would bring. He also rejected the idea that public hospitals could be run as commercial enterprises, a view which some of his colleagues had previously promoted.\n\nBy early 1997, as dissatisfaction with Bolger's leadership began to grow, English was being touted as a potential successor, along with Jenny Shipley and Doug Graham. His age (35) was viewed as the main impediment to a successful leadership run. National's leadership troubles were resolved in December 1997, when Bolger resigned and Shipley was elected to the leadership unopposed. English had been a supporter of Bolger as leader, but Shipley reappointed him Minister of Health in her new cabinet.\n\nEnglish was promoted to Minister of Finance in a reshuffle in January 1999, a position which was at the time subordinate to the Treasurer, Bill Birch. After a few months, the pair switched positions as part of Birch's transition to retirement, with English assuming the senior portfolio. In early interviews, he emphasised his wish to be seen as a pragmatist rather than an ideologue, and said that the initiatives of some of his predecessors (Roger Douglas's \"Rogernomics\" and Ruth Richardson's \"Ruthanasia\") had focused on \"fruitless, theoretical debates\" when \"people just want to see problems solved\".\n\nOpposition (1999–2008)\n\nAfter the National Party lost the 1999 election to Helen Clark's Labour Party, English continued on in the shadow cabinet as National's spokesperson for finance. He was elected deputy leader of the party in February 2001, following the resignation of Wyatt Creech, with Gerry Brownlee being his unsuccessful opponent.\n\nLeader of the Opposition\nIn October 2001, after months of speculation, Jenny Shipley resigned as leader of the National Party after being told she no longer had the support of the party caucus. English was elected as her replacement unopposed (with Roger Sowry as his deputy), and consequently became Leader of the Opposition. However, he did not openly organise against Shipley, and according to The Southland Times \"there was almost an element of 'aw, shucks, I'll do it then' about Mr English's ascension\".\n\nAged 39 when he was elected, English became the second-youngest leader in the National Party's history, after Jim McLay (who was 38 when elected in 1984). He also became only the third Southlander to lead a major New Zealand political party, after Joseph Ward and Adam Hamilton. 
However, English failed to improve the party's performance. In the 2002 election, National suffered its worst electoral defeat ever, gaining barely more than twenty percent of the vote. English described it as \"the worst day of my political life\". Both party insiders and the general public were split as to how much to blame him for the loss, but most of the party believed that English would be able to rebuild National's support.\n\nBy late 2003, however, National's performance in opinion polls remained poor. The party had briefly increased its popularity in the year following the election, but by October its support had fallen to levels only slightly better than what it achieved in the last ballot. English also appeared in a boxing match for a charity against entertainer Ted Clarke. This did not boost his polling or that of the National party either, with suggestions that it devalued his image as a serious politician. Don Brash, former governor of the Reserve Bank and a relative newcomer to politics, began to build up support to replace English. On 28 October, Brash gained sufficient backing in Caucus to defeat English in a leadership contest.\n\nShadow cabinet roles and deputy leader\nOn 2 November 2003, when Brash changed responsibilities for certain MPs, English became National's spokesman for education, ranked at fifth place in the party's parliamentary hierarchy. He remained in parliament after the 2005 election. In his new shadow education portfolio, English performed strongly, and remained a party favourite despite his election defeat as leader in 2002, eventually being returned to the finance portfolio in August 2004 as deputy spokesman (while still retaining responsibility for education).\n\nIn November 2006, Brash resigned as leader. English was considered as a potential replacement leader (running against John Key) or deputy leader (against incumbent Gerry Brownlee) in the ensuing leadership election. However, a contest was avoided when the MPs agreed a Key/English ticket would run unopposed in a display of party unity. English took over the deputy leadership and the finance portfolio in the Key shadow cabinet.\n\nFifth National Government (2008–2017)\n\nDeputy Prime Minister and Minister of Finance (2008–2016)\n\nAt the 2008 election, English was re-elected by his electorate, winning by a margin of about 15,500 votes. He became Deputy Prime Minister of New Zealand and Minister of Finance in the fifth National Government, being sworn into office on 19 November 2008 and continued to serve in those roles until becoming Prime Minister on 12 December 2014. He was also made Minister of Infrastructure in National's first term of government and Minister responsible for Housing New Zealand Corporation and minister responsible for the New Zealand flag consideration process in its third.\n\nHe was comfortably re-elected in Clutha-Southland in the 2011 election but opted to run as a party-list candidate in 2014. \n\nThe pairing of John Key as leader of the National Party and English as his deputy has been compared to that of Bob Hawke and Paul Keating (in Australia) and Tony Blair and Gordon Brown (in the UK).\n\nEnglish acceded to the role of Finance Minister in the continuing wake of the financial crisis. In response to New Zealand's rising debt, English made budget deficit-reduction his main priority. 
His first budget outlined three focuses in New Zealand's financial recovery: \"improving the business environment and removing roadblocks to growth; investment in productive infrastructure; and improving the way government works\". One of his first acts was creating the National Infrastructure Unit, charged with formulating a plan for infrastructure projects and investments. He commissioned a government-wide spending review, with an aim to reducing government expenditure—with the exceptions of a two-year stimulus package and long-term increases on infrastructure spending.\n\nIn April 2011, the Opposition criticised English for suggesting that New Zealand businesses could use New Zealand's low wages to help it compete with Australia. The National Government campaigned for re-election in 2011 on its economic record. The Government boasted growth for five consecutive quarters up to mid-2010, totalling 1.6% of real GDP.\n\nStrong growth resulted in a surplus of $473 million for the 2015/16 financial year, projected to rise to $8.5 billion by 2020/21. In his 2016 Economic and Fiscal Update address, English stated that reducing debt and tackling the costs of the 2016 Kaikōura earthquake were higher priorities than reducing rates of tax.\n\nAllowances issue\nIn 2009, the media, including TVNZ and TV3 revealed that English was receiving about NZ$900 a week as part of a living allowance for ministers, to live in his own NZ$1.2 million Wellington home. At the time, English also received $276,200 in his annual salary as Deputy Prime Minister. It was also revealed other ministers with homes in the capital city were also claiming accommodation allowances. On 3 August 2009, Prime Minister John Key started a review of the housing allowances claimed by cabinet ministers. English subsequently paid back $12,000 and only claimed about $24,000 a year in living allowances. The Auditor-General's office said in September 2009 that they were making \"preliminary enquiries\" into parliamentary housing expenses in response to a letter of complaint from Progressive party leader Jim Anderton. Two days later English stated that he would no longer take up any housing allowance and had paid back all the allowance he had received since the November 2008 election.\n\nPrime Minister (2016–2017)\n\nJohn Key resigned on 12 December, and endorsed English as his successor in the resulting leadership election. Following the drop-out of both Judith Collins and Jonathan Coleman from the leadership election, English was sworn in as the 39th Prime Minister of New Zealand on 12 December 2016.\n\nEnglish appointed his first cabinet on 18 December. In a reshuffle, he appointed Steven Joyce to succeed him as Finance Minister, while most ministerial portfolios remained the same.\n\nIn February 2017, English did not attend Waitangi Day commemorations at the historic treaty grounds, reportedly in response to the Ngāpuhi iwi's decision to stop the Prime Minister from speaking at the marae. Ngāpuhi have protested the Government's negotiation of the Trans Pacific Partnership Agreement (TPPA), which the iwi believe infringes upon Māori sovereignty, and thus does not adhere to the Treaty of Waitangi. English had been invited to attend in an official capacity; his non-attendance was criticised by a Ngāpuhi elder and Opposition leader Andrew Little.\n\nIn his first overseas trip as Prime Minister, English travelled to Europe to discuss trade ties, including a prospective New Zealand–European Union free trade agreement. 
He first travelled to London on 13 January 2017 to meet British Prime Minister Theresa May. Discussing trade relations, English said the two nations were \"natural partners\" and would \"continue to forge ties\" after the UK's withdrawal from the EU. He also arranged to meet with London Mayor Sadiq Khan, Belgian Prime Minister Charles Michel and German Chancellor Angela Merkel. In a meeting with Merkel, English received crucial backing from Germany for a trade deal with the EU. On 16 January, English stated that his government would continue to promote TPPA, despite the United States' decision to withdraw from the agreement. He explained that Southeast Asian countries would now be treated as a priority in negotiations—he also asserted that the United States was ceding influence to China by its rejection of the trade pact.\n\nAt a press conference at the Beehive on 1 February 2017, English announced that the 2017 general election would be held on 23 September. The Prime Minister later confirmed that his party would approach ACT, United Future and the Māori Party if confidence and supply agreements were required to form a government following the election. In his second cabinet reshuffle on 24 April, English appointed Gerry Brownlee as his new Foreign Affairs Minister; he also promoted Nikki Kaye to the portfolio of Education Minister, and moved Mark Mitchell into the cabinet to become Defence Minister. The reshuffle was perceived as an election preparation.\n\nOn 13 February 2017, English welcomed Australian Prime Minister Malcolm Turnbull to Wellington. The two leaders reaffirmed their shared trade agenda, and discussed changes to the Australian citizenship pathway which will affect permanent residents originating from New Zealand.\n\nOn 19 June, it was reported that Todd Barclay, who succeeded English as MP for Clutha-Southland, had clandestinely recorded one of his employee's conversations the previous year, and that John Key's leaders' budget was used to pay a confidential settlement after the employee resigned. English admitted that he had been aware of the illegal recording and the settlement, and thus implicated in the scandal.\n\nDuring the 2017 National campaign launch, English introduced a $379 million social investment package including digital learning academies for high school students, more resources for mathematics, and boosting support for teaching second languages in schools, and maintaining National Standards in the school curriculum. Prime Minister English also sought to defend National's financial management and economic track record and claimed that the opposition Labour Party would raise taxes. Early opinion polling had forecast a poor showing in the election for the Labour Party, but in early August 37-year-old Jacinda Ardern took over as Labour leader and seemingly energised younger voters.\n\nAt the 2017 general election, National won the largest share of the party vote (44.4%) and the largest number of seats (56) in the House Representatives. However, National lacked enough seats to govern alone due to two of the party's support partners, the Māori Party and United Future, losing their parliamentary seats. In response, English stated that the party would be entering into talks to form a coalition with New Zealand First. Following talks with the two largest parties, New Zealand First entered a coalition arrangement with the Labour Party. 
English was succeeded as prime minister by Jacinda Ardern on 26 October.\n\nOpposition (2017–2018)\n\nLeader of the Opposition\nEnglish was re-elected as National Party leader on 24 October 2017. At the time of his re-election, English announced his intention to stay on as leader until the next general election. On 13 February 2018, however, he stood down as National Party leader due to personal reasons, and instructed the party to put into motion the processes to elect a new leader. He also retired from Parliament. English's resignation followed weeks of speculation that he would step aside for a new leader. On 27 February, he was succeeded as party leader by Simon Bridges as the result of the leadership election held that day.\n\nPost-premiership \nIn 2018, English joined the board of Australian conglomerate, Wesfarmers. English serves in Chairmanships of Mount Cook Alpine Salmon, Impact Lab Ltd and Manawanui Support Ltd. He is also a director of The Instillery, Centre for Independent Studies and The Todd Corporation Limited, and is a member of the Impact Advisory Group of Macquarie Infrastructure and Real Assets.\n\nPolitical and social views\n\nEnglish is regarded as more socially conservative than his predecessor, John Key. He has stated his opposition to voluntary euthanasia and physician-assisted suicide, same-sex civil unions, and the decriminalisation of prostitution. As Prime Minister he opposed any \"liberalisation\" of abortion law.\n\nIn 2004, English voted against a bill to establish civil unions for both same-sex and opposite-sex couples. In 2005, he voted for the Marriage (Gender Clarification) Amendment Bill, which would have amended the Marriage Act to define marriage as only between a man and a woman. English voted against the Marriage (Definition of Marriage) Amendment Bill, a bill that legalised same-sex marriage in New Zealand. However, in December 2016 he stated, \"I'd probably vote differently now on the gay marriage issue. I don't think that gay marriage is a threat to anyone else's marriage\".\n\nIn 2009, English voted against the Misuse of Drugs (Medicinal Cannabis) Amendment Bill, a bill aimed at amending the Misuse of Drugs Act so that cannabis could be used for medical purposes.\n\nPersonal life \nEnglish met his future wife, Mary Scanlon, at university. She was studying medicine at the time, and became a general practitioner. Both her parents were immigrants, her father being Samoan and her mother Italian, born on the island of Stromboli. They have six children: a daughter and five sons.\n\nEnglish is a practising Roman Catholic, but has stated that he considers his religious beliefs personal and thus separate from politics.\n\nIn June 2002, English took part in TV3's Fight For Life, a celebrity boxing fundraiser to raise money for the Yellow Ribbon anti-youth-suicide campaign, influenced by the death of a teenage nephew in 1997. 
He lost a split decision to former university colleague Ted Clarke.\n\nHonours\nIn the 2018 Queen's Birthday Honours, English was appointed a Knight Companion of the New Zealand Order of Merit, for services of over 27 years to the State.\n\nSee also\n\nList of New Zealand governments\nPolitics of New Zealand\n\nReferences\n\nExternal links\n\nProfile at National Party \nProfile on Parliament.nz\nReleases and speeches at Beehive.govt.nz\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n1961 births\n21st-century New Zealand politicians\nCandidates in the 2017 New Zealand general election\nDeputy Prime Ministers of New Zealand\nLeaders of the Opposition (New Zealand)\nLiving people\nMembers of the Cabinet of New Zealand\nMembers of the New Zealand House of Representatives\nNew Zealand farmers\nNew Zealand finance ministers\nNew Zealand list MPs\nNew Zealand MPs for South Island electorates\nNew Zealand National Party MPs\nNew Zealand National Party leaders\nNew Zealand Roman Catholics\nNew Zealand people of Irish descent\nPeople educated at St. Patrick's College, Silverstream\nPeople from Dipton, New Zealand\nPeople from Lumsden, New Zealand\nPrime Ministers of New Zealand\nUniversity of Otago alumni\nVictoria University of Wellington alumni\nKnights Companion of the New Zealand Order of Merit\n", "answers": ["October 2001."], "length": 3590, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "67dd297408f3d52f22ede88df0b62644dac86da77f009e71"} {"input": "What algorithm is engaged in the PLMS-PPIC method?", "context": "\\section{Introduction}\\label{S1}\n\nThe multiple access interferences (MAI) is the root of user\nlimitation in CDMA systems \\cite{R1,R3}. The parallel least mean\nsquare-partial parallel interference cancelation (PLMS-PPIC) method\nis a multiuser detector for code division multiple access (CDMA)\nreceivers which reduces the effect of MAI in bit detection. In this\nmethod and similar to its former versions like LMS-PPIC \\cite{R5}\n(see also \\cite{RR5}), a weighted value of the MAI of other users is\nsubtracted before making the decision for a specific user in\ndifferent stages \\cite{cohpaper}. In both of these methods, the\nnormalized least mean square (NLMS) algorithm is engaged\n\\cite{Haykin96}. The $m^{\\rm th}$ element of the weight vector in\neach stage is the true transmitted binary value of the $m^{\\rm th}$\nuser divided by its hard estimate value from the previous stage. The\nmagnitude of all weight elements in all stages are equal to unity.\nUnlike the LMS-PPIC, the PLMS-PPIC method tries to keep this\nproperty in each iteration by using a set of NLMS algorithms with\ndifferent step-sizes instead of one NLMS algorithm used in LMS-PPIC.\nIn each iteration, the parameter estimate of the NLMS algorithm is\nchosen whose element magnitudes of cancelation weight estimate have\nthe best match with unity. In PLMS-PPIC implementation it is assumed\nthat the receiver knows the phases of all user channels. However in\npractice, these phases are not known and should be estimated. In\nthis paper we improve the PLMS-PPIC procedure \\cite{cohpaper} in\nsuch a way that when there is only a partial information of the\nchannel phases, this modified version simultaneously estimates the\nphases and the cancelation weights. 
The partial information is the\nquarter of each channel phase in $(0,2\\pi)$.\n\nThe rest of the paper is organized as follows: In section \\ref{S4}\nthe modified version of PLMS-PPIC with capability of channel phase\nestimation is introduced. In section \\ref{S5} some simulation\nexamples illustrate the results of the proposed method. Finally the\npaper is concluded in section \\ref{S6}.\n\n\\section{Multistage Parallel Interference Cancelation: Modified PLMS-PPIC Method}\\label{S4}\n\nWe assume $M$ users synchronously send their symbols\n$\\alpha_1,\\alpha_2,\\cdots,\\alpha_M$ via a base-band CDMA\ntransmission system where $\\alpha_m\\in\\{-1,1\\}$. The $m^{th}$ user\nhas its own code $p_m(.)$ of length $N$, where $p_m(n)\\in \\{-1,1\\}$,\nfor all $n$. It means that for each symbol $N$ bits are transmitted\nby each user and the processing gain is equal to $N$. At the\nreceiver we assume that perfect power control scheme is applied.\nWithout loss of generality, we also assume that the power gains of\nall channels are equal to unity and users' channels do not change\nduring each symbol transmission (it can change from one symbol\ntransmission to the next one) and the channel phase $\\phi_m$ of\n$m^{th}$ user is unknown for all $m=1,2,\\cdots,M$ (see\n\\cite{cohpaper} for coherent transmission). According to the above\nassumptions the received signal is\n\\begin{equation}\n\\label{e1} r(n)=\\sum\\limits_{m=1}^{M}\\alpha_m\ne^{j\\phi_m}p_m(n)+v(n),~~~~n=1,2,\\cdots,N,\n\\end{equation}\nwhere $v(n)$ is the additive white Gaussian noise with zero mean and\nvariance $\\sigma^2$. Multistage parallel interference cancelation\nmethod uses $\\alpha^{s-1}_1,\\alpha^{s-1}_2,\\cdots,\\alpha^{s-1}_M$,\nthe bit estimates outputs of the previous stage, $s-1$, to estimate\nthe related MAI of each user. It then subtracts it from the received\nsignal $r(n)$ and makes a new decision on each user variable\nindividually to make a new variable set\n$\\alpha^{s}_1,\\alpha^{s}_2,\\cdots,\\alpha^{s}_M$ for the current\nstage $s$. Usually the variable set of the first stage (stage $0$)\nis the output of a conventional detector. The output of the last\nstage is considered as the final estimate of transmitted bits. In\nthe following we explain the structure of a modified version of the\nPLMS-PIC method \\cite{cohpaper} with simultaneous capability of\nestimating the cancelation weights and the channel phases.\n\nAssume $\\alpha_m^{(s-1)}\\in\\{-1,1\\}$ is a given estimate of\n$\\alpha_m$ from stage $s-1$. Define\n\\begin{equation}\n\\label{e6} w^s_{m}=\\frac{\\alpha_m}{\\alpha_m^{(s-1)}}e^{j\\phi_m}.\n\\end{equation}\nFrom (\\ref{e1}) and (\\ref{e6}) we have\n\\begin{equation}\n\\label{e7} r(n)=\\sum\\limits_{m=1}^{M}w^s_m\\alpha^{(s-1)}_m\np_m(n)+v(n).\n\\end{equation}\nDefine\n\\begin{subequations}\n\\begin{eqnarray}\n\\label{e8} W^s&=&[w^s_{1},w^s_{2},\\cdots,w^s_{M}]^T,\\\\\n\\label{e9}\n\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!\\!X^{s}(n)\\!\\!\\!&=&\\!\\!\\![\\alpha^{(s-1)}_1p_1(n),\\alpha^{(s-1)}_2p_2(n),\\cdots,\\alpha^{(s-1)}_Mp_M(n)]^T.\n\\end{eqnarray}\n\\end{subequations}\nwhere $T$ stands for transposition. 
From equations (\\ref{e7}),\n(\\ref{e8}) and (\\ref{e9}), we have\n\\begin{equation}\n\\label{e10} r(n)=W^{s^T}X^{s}(n)+v(n).\n\\end{equation}\nGiven the observations $\\{r(n),X^{s}(n)\\}^{N}_{n=1}$, in modified\nPLMS-PPIC, like the PLMS-PPIC \\cite{cohpaper}, a set of NLMS\nadaptive algorithm are used to compute\n\\begin{equation}\n\\label{te1} W^{s}(N)=[w^{s}_1(N),w^{s}_2(N),\\cdots,w^{s}_M(N)]^T,\n\\end{equation}\nwhich is an estimate of $W^s$ after iteration $N$. To do so, from\n(\\ref{e6}), we have\n\\begin{equation}\n\\label{e13} |w^s_{m}|=1 ~~~m=1,2,\\cdots,M,\n\\end{equation}\nwhich is equivalent to\n\\begin{equation}\n\\label{e14} \\sum\\limits_{m=1}^{M}||w^s_{m}|-1|=0.\n\\end{equation}\nWe divide $\\Psi=\\left(0,1-\\sqrt{\\frac{M-1}{M}}\\right]$, a sharp\nrange for $\\mu$ (the step-size of the NLMS algorithm) given in\n\\cite{sg2005}, into $L$ subintervals and consider $L$ individual\nstep-sizes $\\Theta=\\{\\mu_1,\\mu_2,\\cdots,\\mu_L\\}$, where\n$\\mu_1=\\frac{1-\\sqrt{\\frac{M-1}{M}}}{L}, \\mu_2=2\\mu_1,\\cdots$, and\n$\\mu_L=L\\mu_1$. In each stage, $L$ individual NLMS algorithms are\nexecuted ($\\mu_l$ is the step-size of the $l^{th}$ algorithm). In\nstage $s$ and at iteration $n$, if\n$W^{s}_k(n)=[w^s_{1,k},\\cdots,w^s_{M,k}]^T$, the parameter estimate\nof the $k^{\\rm th}$ algorithm, minimizes our criteria, then it is\nconsidered as the parameter estimate at time iteration $n$. In other\nwords if the next equation holds\n\\begin{equation}\n\\label{e17} W^s_k(n)=\\arg\\min\\limits_{W^s_l(n)\\in I_{W^s}\n}\\left\\{\\sum\\limits_{m=1}^{M}||w^s_{m,l}(n)|-1|\\right\\},\n\\end{equation}\nwhere $W^{s}_l(n)=W^{s}(n-1)+\\mu_l \\frac{X^s(n)}{\\|X^s(n)\\|^2}e(n),\n~~~ l=1,2,\\cdots,k,\\cdots,L-1,L$ and\n$I_{W^s}=\\{W^s_1(n),\\cdots,W^s_L(n)\\}$, then we have\n$W^s(n)=W^s_k(n)$, and therefore all other algorithms replace their\nweight estimate by $W^{s}_k(n)$. At time instant $n=N$, this\nprocedure gives $W^s(N)$, the final estimate of $W^s$, as the true\nparameter of stage $s$.\n\nNow consider $R=(0,2\\pi)$ and divide it into four equal parts\n$R_1=(0,\\frac{\\pi}{2})$, $R_2=(\\frac{\\pi}{2},\\pi)$,\n$R_3=(\\pi,\\frac{3\\pi}{2})$ and $R_4=(\\frac{3\\pi}{2},2\\pi)$. The\npartial information of channel phases (given by the receiver) is in\na way that it shows each $\\phi_m$ ($m=1,2,\\cdots,M$) belongs to\nwhich one of the four quarters $R_i,~i=1,2,3,4$. Assume\n$W^{s}(N)=[w^{s}_1(N),w^{s}_2(N),\\cdots,w^{s}_M(N)]^T$ is the weight\nestimate of the modified algorithm PLMS-PPIC at time instant $N$ of\nthe stage $s$. From equation (\\ref{e6}) we have\n\\begin{equation}\n\\label{tt3}\n\\phi_m=\\angle({\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}w^s_m}).\n\\end{equation}\nWe estimate $\\phi_m$ by $\\hat{\\phi}^s_m$, where\n\\begin{equation}\n\\label{ee3}\n\\hat{\\phi}^s_m=\\angle{(\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}w^s_m(N))}.\n\\end{equation}\nBecause $\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}=1$ or $-1$, we have\n\\begin{eqnarray}\n\\hat{\\phi}^s_m=\\left\\{\\begin{array}{ll} \\angle{w^s_m(N)} &\n\\mbox{if}~\n\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}=1\\\\\n\\pm\\pi+\\angle{w^s_m(N)} & \\mbox{if}~\n\\frac{\\alpha^{(s-1)}_m}{\\alpha_m}=-1\\end{array}\\right.\n\\end{eqnarray}\nHence $\\hat{\\phi}^s_m\\in P^s=\\{\\angle{w^s_m(N)},\n\\angle{w^s_m(N)+\\pi, \\angle{w^s_m(N)}-\\pi}\\}$. If $w^s_m(N)$\nsufficiently converges to its true value $w^s_m$, the same region\nfor $\\hat{\\phi}^s_m$ and $\\phi_m$ is expected. In this case only one\nof the three members of $P^s$ has the same region as $\\phi_m$. 
For\nexample if $\\phi_m \\in (0,\\frac{\\pi}{2})$, then $\\hat{\\phi}^s_m \\in\n(0,\\frac{\\pi}{2})$ and therefore only $\\angle{w^s_m(N)}$ or\n$\\angle{w^s_m(N)}+\\pi$ or $\\angle{w^s_m(N)}-\\pi$ belongs to\n$(0,\\frac{\\pi}{2})$. If, for example, $\\angle{w^s_m(N)}+\\pi$ is such\na member between all three members of $P^s$, it is the best\ncandidate for phase estimation. In other words,\n\\[\\phi_m\\approx\\hat{\\phi}^s_m=\\angle{w^s_m(N)}+\\pi.\\]\nWe admit that when there is a member of $P^s$ in the quarter of\n$\\phi_m$, then $w^s_m(N)$ converges. What would happen when non of\nthe members of $P^s$ has the same quarter as $\\phi_m$? This\nsituation will happen when the absolute difference between $\\angle\nw^s_m(N)$ and $\\phi_m$ is greater than $\\pi$. It means that\n$w^s_m(N)$ has not converged yet. In this case where we can not\ncount on $w^s_m(N)$, the expected value is the optimum choice for\nthe channel phase estimation, e.g. if $\\phi_m \\in (0,\\frac{\\pi}{2})$\nthen $\\frac{\\pi}{4}$ is the estimation of the channel phase\n$\\phi_m$, or if $\\phi_m \\in (\\frac{\\pi}{2},\\pi)$ then\n$\\frac{3\\pi}{4}$ is the estimation of the channel phase $\\phi_m$.\nThe results of the above discussion are summarized in the next\nequation\n\\begin{eqnarray}\n\\nonumber \\hat{\\phi}^s_m = \\left\\{\\begin{array}{llll} \\angle\n{w^s_m(N)} & \\mbox{if}~\n\\angle{w^s_m(N)}, \\phi_m\\in R_i,~~i=1,2,3,4\\\\\n\\angle{w^s_m(N)}+\\pi & \\mbox{if}~ \\angle{w^s_m(N)}+\\pi, \\phi_m\\in\nR_i,~~i=1,2,3,4\\\\\n\\angle{w^n_m(N)}-\\pi & \\mbox{if}~ \\angle{w^s_m(N)}-\\pi, \\phi_m\\in\nR_i,~~i=1,2,3,4\\\\\n\\frac{(i-1)\\pi+i\\pi}{4} & \\mbox{if}~ \\phi_m\\in\nR_i,~~\\angle{w^s_m(N)},\\angle\n{w^s_m(N)}\\pm\\pi\\notin R_i,~~i=1,2,3,4.\\\\\n\\end{array}\\right.\n\\end{eqnarray}\nHaving an estimation of the channel phases, the rest of the proposed\nmethod is given by estimating $\\alpha^{s}_m$ as follows:\n\\begin{equation}\n\\label{tt4}\n\\alpha^{s}_m=\\mbox{sign}\\left\\{\\mbox{real}\\left\\{\\sum\\limits_{n=1}^{N}\nq^s_m(n)e^{-j\\hat{\\phi}^s_m}p_m(n)\\right\\}\\right\\},\n\\end{equation}\nwhere\n\\begin{equation} \\label{tt5}\nq^{s}_{m}(n)=r(n)-\\sum\\limits_{m^{'}=1,m^{'}\\ne\nm}^{M}w^{s}_{m^{'}}(N)\\alpha^{(s-1)}_{m^{'}} p_{m^{'}}(n).\n\\end{equation}\nThe inputs of the first stage $\\{\\alpha^{0}_m\\}_{m=1}^M$ (needed for\ncomputing $X^1(n)$) are given by\n\\begin{equation}\n\\label{qte5}\n\\alpha^{0}_m=\\mbox{sign}\\left\\{\\mbox{real}\\left\\{\\sum\\limits_{n=1}^{N}\nr(n)e^{-j\\hat{\\phi}^0_m}p_m(n)\\right\\}\\right\\}.\n\\end{equation}\nAssuming $\\phi_m\\in R_i$, then\n\\begin{equation}\n\\label{qqpp} \\hat{\\phi}^0_m =\\frac{(i-1)\\pi+i\\pi}{4}.\n\\end{equation}\nTable \\ref{tab4} shows the structure of the modified PLMS-PPIC\nmethod. It is to be notified that\n\\begin{itemize}\n\\item Equation (\\ref{qte5}) shows the conventional bit detection\nmethod when the receiver only knows the quarter of channel phase in\n$(0,2\\pi)$. \\item With $L=1$ (i.e. only one NLMS algorithm), the\nmodified PLMS-PPIC can be thought as a modified version of the\nLMS-PPIC method.\n\\end{itemize}\n\nIn the following section some examples are given to illustrate the\neffectiveness of the proposed method.\n\n\\section{Simulations}\\label{S5}\n\nIn this section we have considered some simulation examples.\nExamples \\ref{ex2}-\\ref{ex4} compare the conventional, the modified\nLMS-PPIC and the modified PLMS-PPIC methods in three cases: balanced\nchannels, unbalanced channels and time varying channels. 
In all\nexamples, the receivers have only the quarter of each channel phase.\nExample \\ref{ex2} is given to compare the modified LMS-PPIC and the\nPLMS-PPIC in the case of balanced channels.\n\n\\begin{example}{\\it Balanced channels}:\n\\label{ex2}\n\\begin{table}\n\\caption{Channel phase estimate of the first user (example\n\\ref{ex2})} \\label{tabex5} \\centerline{{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{6}{*}{\\rotatebox{90}{$\\phi_m=\\frac{3\\pi}{8},M=15~~$}} & N(Iteration) & Stage Number& NLMS & PNLMS \\\\\n&&&&\\\\\n\\cline{2-5} & \\multirow{2}{*}{64}& s = 2 & $\\hat{\\phi}^s_m=\\frac{3.24\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{3.18\\pi}{8}$ \\\\\n\\cline{3-5} & & s = 3 & $\\hat{\\phi}^s_m=\\frac{3.24\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{3.18\\pi}{8}$ \\\\\n\\cline{2-5} & \\multirow{2}{*}{256}& s = 2 & $\\hat{\\phi}^s_m=\\frac{2.85\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.88\\pi}{8}$ \\\\\n\\cline{3-5} & & s = 3 & $\\hat{\\phi}^s_m=\\frac{2.85\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.88\\pi}{8}$ \\\\\n\\cline{2-5} \\hline\n\\end{tabular} }}\n\\end{table}\nConsider the system model (\\ref{e7}) in which $M$ users\nsynchronously send their bits to the receiver through their\nchannels. It is assumed that each user's information consists of\ncodes of length $N$. It is also assumd that the signal to noise\nratio (SNR) is 0dB. In this example there is no power-unbalanced or\nchannel loss is assumed. The step-size of the NLMS algorithm in\nmodified LMS-PPIC method is $\\mu=0.1(1-\\sqrt{\\frac{M-1}{M}})$ and\nthe set of step-sizes of the parallel NLMS algorithms in modified\nPLMS-PPIC method are\n$\\Theta=\\{0.01,0.05,0.1,0.2,\\cdots,1\\}(1-\\sqrt{\\frac{M-1}{M}})$,\ni.e. $\\mu_1=0.01(1-\\sqrt{\\frac{M-1}{M}}),\\cdots,\n\\mu_4=0.2(1-\\sqrt{\\frac{M-1}{M}}),\\cdots,\n\\mu_{12}=(1-\\sqrt{\\frac{M-1}{M}})$. Figure~\\ref{Figexp1NonCoh}\nillustrates the bit error rate (BER) for the case of two stages and\nfor $N=64$ and $N=256$. Simulations also show that there is no\nremarkable difference between results in two stage and three stage\nscenarios. 
Table~\\ref{tabex5} compares the average channel phase\nestimate of the first user in each stage and over $10$ runs of\nmodified LMS-PPIC and PLMS-PPIC, when the the number of users is\n$M=15$.\n\\end{example}\n\nAlthough LMS-PPIC and PLMS-PPIC, as well as their modified versions,\nare structured based on the assumption of no near-far problem\n(examples \\ref{ex3} and \\ref{ex4}), these methods and especially the\nsecond one have remarkable performance in the cases of unbalanced\nand/or time varying channels.\n\n\\begin{example}{\\it Unbalanced channels}:\n\\label{ex3}\n\\begin{table}\n\\caption{Channel phase estimate of the first user (example\n\\ref{ex3})} \\label{tabex6} \\centerline{{\n\\begin{tabular}{|c|c|c|c|c|}\n\\hline\n\\multirow{6}{*}{\\rotatebox{90}{$\\phi_m=\\frac{3\\pi}{8},M=15~~$}} & N(Iteration) & Stage Number& NLMS & PNLMS \\\\\n&&&&\\\\\n\\cline{2-5} & \\multirow{2}{*}{64}& s=2 & $\\hat{\\phi}^s_m=\\frac{2.45\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.36\\pi}{8}$ \\\\\n\\cline{3-5} & & s=3 & $\\hat{\\phi}^s_m=\\frac{2.71\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.80\\pi}{8}$ \\\\\n\\cline{2-5} & \\multirow{2}{*}{256}& s=2 & $\\hat{\\phi}^s_m=\\frac{3.09\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{2.86\\pi}{8}$ \\\\\n\\cline{3-5} & & s=3 & $\\hat{\\phi}^s_m=\\frac{2.93\\pi}{8}$ & $\\hat{\\phi}^s_m=\\frac{3.01\\pi}{8}$ \\\\\n\\cline{2-5} \\hline\n\\end{tabular} }}\n\\end{table}\nConsider example \\ref{ex2} with power unbalanced and/or channel loss\nin transmission system, i.e. the true model at stage $s$ is\n\\begin{equation}\n\\label{ve7} r(n)=\\sum\\limits_{m=1}^{M}\\beta_m\nw^s_m\\alpha^{(s-1)}_m c_m(n)+v(n),\n\\end{equation}\nwhere $0<\\beta_m\\leq 1$ for all $1\\leq m \\leq M$. Both the LMS-PPIC\nand the PLMS-PPIC methods assume the model (\\ref{e7}), and their\nestimations are based on observations $\\{r(n),X^s(n)\\}$, instead of\n$\\{r(n),\\mathbf{G}X^s(n)\\}$, where the channel gain matrix is\n$\\mathbf{G}=\\mbox{diag}(\\beta_1,\\beta_2,\\cdots,\\beta_m)$. In this\ncase we repeat example \\ref{ex2}. We randomly get each element of\n$G$ from $[0,0.3]$. Figure~\\ref{Figexp2NonCoh} illustrates the BER\nversus the number of users. Table~\\ref{tabex6} compares the channel\nphase estimate of the first user in each stage and over $10$ runs of\nmodified LMS-PPIC and modified PLMS-PPIC for $M=15$.\n\\end{example}\n\n\\begin{example}\n\\label{ex4} {\\it Time varying channels}: Consider example \\ref{ex2}\nwith time varying Rayleigh fading channels. In this case we assume\nthe maximum Doppler shift of $40$HZ, the three-tap\nfrequency-selective channel with delay vector of $\\{2\\times\n10^{-6},2.5\\times 10^{-6},3\\times 10^{-6}\\}$sec and gain vector of\n$\\{-5,-3,-10\\}$dB. Figure~\\ref{Figexp3NonCoh} shows the average BER\nover all users versus $M$ and using two stages.\n\\end{example}\n\n\n\\section{Conclusion}\\label{S6}\n\nIn this paper, parallel interference cancelation using adaptive\nmultistage structure and employing a set of NLMS algorithms with\ndifferent step-sizes is proposed, when just the quarter of the\nchannel phase of each user is known. In fact, the algorithm has been\nproposed for coherent transmission with full information on channel\nphases in \\cite{cohpaper}. This paper is a modification on the\npreviously proposed algorithm. 
Simulation results show that the new\nmethod has a remarkable performance for different scenarios\nincluding Rayleigh fading channels even if the channel is\nunbalanced.\n\n", "answers": ["The normalized least mean square (NLMS) algorithm."], "length": 2008, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "49d0334d26929f712de2cba070a74752f415313c08166f80"} {"input": "What is the scaling form for the alternative order parameter O?", "context": "\\section*{Dynamical Behaviour of $O$ in Lattice Gases}\n\nThe dynamical behaviour of the anisotropic order parameter $m$ [see Eq.~\\eqref{eq:def-m} in the Letter] following a quench to the critical point is well described by\nthe Gaussian theory for all the three lattice gas models studied, $i.e.,$ driven lattice gas with either constant (IDLG) or random (RDLG) infinite drive and equilibrium lattice gas (LG). In other words, in the short-time regime, $m \\sim t^{1/2}$ [see Eq. \\eqref{eq:mt}] and the Binder cumulant $g$ of the lowest transverse mode [defined in Eq. \\eqref{eq:binder}] is zero in this regime. The alternative order parameter $O,$ however, distinguishes between the driven (IDLG, RDLG) and the equilibrium (LG) lattice gases. \n\nIn order to understand this, we first write the phenomenological scaling form for $O$, analogous to Eq. \\eqref{eq:scalingass} in the Letter,\n\\begin{eqnarray}\nO (t, L_{\\parallel} ; S_\\Delta) = L_{\\parallel}^{-\\beta/[\\nu(1+\\Delta)]} \\tilde f_O (t/L_{\\parallel}^{z/(1+\\Delta)} ; S_\\Delta).\\quad\n\\label{eq:Oscalingass}\n\\end{eqnarray}\nWe already remarked that, in the LG, this scaling form is not compatible with the prediction $O \\sim t^{1/8} L_{\\parallel}^{-1/2}$ of the Gaussian theory. However, following Ref. \\cite{AS2002}, it can be argued that, at short times, the only dependence of $O$ on the system size $L_{\\parallel}$ is of the form $O \\sim L_\\parallel^{-1/2}$ which is very well confirmed by numerical simulations. Accordingly, the generic behaviour of $O$ can be assumed to be\n\\begin{eqnarray}\nO \\sim t^{\\alpha} L_\\parallel^{-1/2}, \\label{eq:O}\n\\end{eqnarray}\nwhere $\\alpha$ is a phenomenological exponent to be determined. This, along with Eq. \\eqref{eq:Oscalingass}, implies $\\tilde f_O(x) \\sim x^{\\alpha}.$ Comparing the finite-size behaviour in Eq.~\\eqref{eq:O} with Eq.~\\eqref{eq:Oscalingass} one actually infers,\n\\begin{eqnarray}\n\\alpha &=& \\frac{1+ \\Delta -2 \\beta/\\nu}{2 \\, (4- \\eta)}. \\label{eq:alpha}\n\\end{eqnarray}\nThis equation, together with the hyperscaling relation $\\Delta - 2 \\beta/\\nu= - \\eta$ in two spatial dimensions, shows that the prediction $\\alpha = 1/8$ of the Gaussian theory [see Eq. \\eqref{eq:Ot}] can be obtained only when $\\eta=0,$ which is the case for the IDLG (exactly) and the RDLG (approximately) but not for the LG. \n\nOn the other hand, Eq.~\\eqref{eq:alpha} predicts $\\alpha = 1/10$ upon substituting the values of the critical exponents corresponding to the Ising universality class (LG). This is consistent with the numerical simulation results presented in the main text, see Fig. \\ref{fig:ising}(b) therein.\n\n\\begin{figure}[th]\n\\vspace*{0.2 cm}\n \\centering\n \\includegraphics[width=10 cm]{./compare_binder.pdf}\n\n\\caption{Comparison between the temporal evolution of the Binder cumulants $g$ corresponding to the $12^{th}$ transverse mode, $i.e.,$ with $n_\\perp =12,$ in the LG (lowest curve), IDLG and RDLG (two upper curves) on a $32 \\times 32$ lattice. 
\\label{fig:b}}\n \\label{fig:binder}\n\\end{figure}\n\n\nThe emergence of this new value $1/10$ of the exponent $\\alpha$ must be traced back to the non-Gaussian nature of higher fluctuating modes in the LG. In fact, even though the lowest mode behaves identically in all the three models we considered, characterized by the same behaviour of $m$, higher modes show a significant difference in the non-driven case. \n\n\nTo illustrate this, we measured the Binder cumulants of higher modes which is defined analogously to Eq.~(11), using transverse modes other than the first, i.e., with $\\mu=\\tilde \\sigma(0,2 \\pi n_\\bot/L_\\bot)$ and $n_\\bot>1.$ \n Figure \\ref{fig:b} compares the same for all the three lattice gases for the mode with $n_\\perp =12$ on a $32 \\times 32$ lattice. Clearly, the curve corresponding to the LG (lowest, blue) departs from Gaussian behaviour $g=0$ (in practice, $e.g.,$ $|g| \\lesssim 0.005,$ corresponding to the shaded gray area) much earlier than it does for the IDLG or RDLG (two upper curves, red and green respectively).\n\nAccordingly, the different dynamical behaviour of $O$, which involves a sum over all modes, can be attributed to the non-Gaussian nature of the higher modes in the LG. \nSuch a departure is not entirely surprising. In fact, for higher modes, mesoscopic descriptions such as the ones in Eqs. \\eqref{eq:L-DLG} or \\eqref{eq:g_evol} are not expected to hold, while the anisotropy at the microscopic level could be the mechanism leading to the Gaussianity of higher modes in the driven models.\n\n", "answers": ["O(t, L_{\\parallel}; S_\\Delta) = L_{\\parallel}^{-\\beta/[\\nu(1+\\Delta)]} \\tilde f_O(t/L_{\\parallel}^{z/(1+\\Delta)}; S_\\Delta)."], "length": 663, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "22034e095a602824678c4028e6f605919ce520270dc06089"} {"input": "Which air unit did Goodwin command during the initial landings of Marines on Saipan?", "context": "Hugh Hilton Goodwin (December 21, 1900 – February 25, 1980) was a decorated officer in the United States Navy with the rank of Vice Admiral. A veteran of both World Wars, he commanded escort carrier during the Mariana Islands campaign. Goodwin then served consecutively as Chief of Staff, Carrier Strike Group 6 and as Air Officer, Philippine Sea Frontier and participated in the Philippines campaign in the later part of the War.\n\nFollowing the War, he remained in the Navy and rose to the flag rank and held several important commands including Vice Commander, Military Air Transport Service, Commander, Carrier Division Two and Commander, Naval Air Forces, Continental Air Defense Command.\n\nEarly life and career\n\nHugh H. Goodwin was born on December 21, 1900, in Monroe, Louisiana and attended Monroe High School there (now Neville High School). Following the United States' entry into World War I in April 1917, Goodwin left the school without receiving the diploma in order to see some combat and enlisted the United States Navy on May 7, 1917. He completed basic training and was assigned to the battleship . Goodwin participated in the training of armed guard crews and engine room personnel as the Atlantic Fleet prepared to go to war and in November 1917, he sailed with the rest of Battleship Division 9, bound for Britain to reinforce the Grand Fleet in the North Sea.\n\nAlthough he did not complete the last year of high school, Goodwin was able to earn an appointment to the United States Naval Academy at Annapolis, Maryland in June 1918. 
While at the academy, he earned a nickname \"Huge\" and among his classmates were several future admirals and generals including: Hyman G. Rickover, Milton E. Miles, Robert E. Blick Jr., Herbert S. Duckworth, Clayton C. Jerome, James P. Riseley, James A. Stuart, Frank Peak Akers, Sherman Clark, Raymond P. Coffman, Delbert S. Cornwell, Frederick J. Eckhoff, Ralph B. DeWitt, John Higgins, Vernon Huber, Albert K. Morehouse, Harold F. Pullen, Michael J. Malanaphy, William S. Parsons, Harold R. Stevens, John P. Whitney, Lyman G. Miller and George J. O'Shea.\n\nGoodwin graduated with Bachelor of Science degree on June 3, 1922, and was commissioned Ensign in the United States Navy. He was subsequently assigned to the battleship and took part in the voyage to Rio de Janeiro, Brazil, before he was ordered to the Naval Torpedo Station at Newport, Rhode Island for submarine instruction in June 1923. Goodwin completed the training several weeks later and was attached to the submarine . He then continued his further training aboard submarine and following his promotion to Lieutenant (junior grade) on June 3, 1925, he qualified as submariner.\n\nHe then served aboard submarine off the coast of California, before he was ordered for the recruiting duty to San Francisco in September 1927. While in this capacity, Goodwin applied for naval aviation training which was ultimately approved and he was ordered to the Naval Air Station Pensacola, Florida in August 1928. Toward the end of the training, he was promoted to lieutenant on December 11, 1928, and upon the completion of the training in January 1929, he was designated Naval aviator.\n\nGoodwin was subsequently attached to the Observation Squadron aboard the aircraft carrier and participated in the Fleet exercises in the Caribbean. He was transferred to the Bureau of Aeronautics in Washington, D.C. in August 1931 and served consecutively under the architect of naval aviation William A. Moffett and future Chief of Naval Operations Ernest J. King.\n\nIn June 1933, Goodwin was ordered to the Naval War College at Newport, Rhode Island, where he completed junior course in May of the following year. He subsequently joined the crew of aircraft carrier and served under Captain Arthur B. Cook and took part in the Fleet exercises in the Caribbean and off the East Coast of the United States.\n\nHe was ordered back to the Naval Air Station Pensacola, Florida in June 1936 and was attached to the staff of the Base Commandant, then-Captain Charles A. Blakely. When Blakely was succeeded by William F. Halsey in June 1937, Goodwin remained in Halsey's staff and was promoted to Lieutenant Commander on December 1, 1937. He also completed correspondence course in International law at the Naval War College.\n\nGoodwin was appointed Commanding officer of the Observation Squadron 1 in June 1938 and attached to the battleship he took part in the patrolling of the Pacific and \nWest Coast of the United States until September 1938, when he assumed command of the Observation Squadron 2 attached to the battleship .\n\nWhen his old superior from Lexington, now Rear Admiral Arthur B. Cook, was appointed Commander Aircraft, Scouting Force in June 1939, he requested Goodwin as his Aide and Flag Secretary. 
He became Admiral Cook's protégé and after year and half of service in the Pacific, he continued as his Aide and Flag Secretary, when Cook was appointed Commander Aircraft, Atlantic Fleet in November 1940.\n\nWorld War II\n\nFollowing the United States' entry into World War II, Goodwin was promoted to the temporary rank of Commander on January 1, 1942, and assumed duty as advisor to the Argentine Navy. His promotion was made permanent two months later and he returned to the United States in early 1943 for duty as assistant director of Planning in the Bureau of Aeronautics under Rear admiral John S. McCain. While still in Argentina, Goodwin was promoted to the temporary rank of Captain on June 21, 1942.\n\nBy the end of December 1943, Goodwin was ordered to Astoria, Oregon, where he assumed command of newly commissioned escort carrier USS Gambier Bay. He was responsible for the initial training of the crew and was known as a strict disciplinarian, but the crew appreciated the skills he taught them that prepared them for combat. Goodwin insisted that everyone aboard has to do every job right every time and made us fight our ship at her best.\n\nDuring the first half of 1944, Gambier Bay was tasked with ferrying aircraft for repairs and qualified carrier pilots from San Diego to Pearl Harbor, Hawaii, before departed on May 1, 1944, to join Rear admiral Harold B. Sallada's Carrier Support Group 2, staging in the Marshalls for the invasion of the Marianas.\n\nThe air unit, VC-10 Squadron, under Goodwin's command gave close air support to the initial landings of Marines on Saipan on June 15, 1944, destroying enemy gun emplacements, troops, tanks, and trucks. On the 17th, her combat air patrol (CAP) shot down or turned back all but a handful of 47 enemy planes headed for her task group and her gunners shot down two of the three planes that did break through to attack her.\n\nGoodwin's carrier continued in providing of close ground support operations at Tinian during the end of July 1944, then turned her attention to Guam, where she gave identical aid to invading troops until mid-August that year. For his service during the Mariana Islands campaign, Goodwin was decorated with Bronze Star Medal with Combat \"V\".\n\nHe was succeeded by Captain Walter V. R. Vieweg on August 18, 1944, and appointed Chief of Staff, Carrier Division Six under Rear admiral Arthur W. Radford. The Gambier Bay was sunk in the Battle off Samar on October 25, 1944, during the Battle of Leyte Gulf after helping turn back a much larger attacking Japanese surface force.\n\nGoodwin served with Carrier Division Six during the Bonin Islands raids, the naval operations at Palau and took part in the Battle of Leyte Gulf and operations supporting Leyte landings in late 1944. He was later appointed Air Officer of the Philippine Sea Frontier under Rear admiral James L. Kauffman and remained with that command until the end of hostilities. For his service in the later part of World War II, Goodwin was decorated with Legion of Merit with Combat \"V\". He was also entitled to wear two Navy Presidential Unit Citations and Navy Unit Commendation.\n\nPostwar service\n\nFollowing the surrender of Japan, Goodwin assumed command of Light aircraft carrier on August 24, 1945. The ship was tasked with air missions over Japan became mercy flights over Allied prisoner-of-war camps, dropping food and medicine until the men could be rescued. 
She was also present at Tokyo Bay for the Japanese surrender on September 2, 1945.\n\nGoodwin returned with San Jacinto to the United States in mid-September 1945 and he was detached in January 1946. He subsequently served in the office of the Chief of Naval Operations until May that year, when he entered the instruction at National War College. Goodwin graduated in June 1947 and served on Secretary's committee for Research on Reorganization. Upon promotion to Rear admiral on April 1, 1949, Goodwin was appointed Chief of Staff and Aide to Commander-in-Chief, Atlantic Fleet under Admiral William H. P. Blandy.\n\nRevolt of the Admirals\n\nIn April 1949, the budget's cuts and proposed reorganization of the United States Armed Forces by the Secretary of Defense Louis A. Johnson launched the wave of discontent between senior commanders in the United States Navy. Johnson proposed the merging of the Marine Corps into the Army, and reduce the Navy to a convoy-escort force.\n\nGoodwin's superior officer, Admiral Blandy was call to testify before the House Committee on Armed Services and his harsh statements for the defense of the Navy, costed him his career. Goodwin shared his views and openly criticized Secretary Johnson for having power concentrated in a single civilian executive, who is an appointee of the Government and not an elected representative of the people. He also criticized aspects of defense unification which permitted the Joint Chiefs of Staff to vote on arms policies of individual services, and thus \"rob\" the branches of autonomy.\n\nThe outbreak of the Korean War in summer 1950 proved the proposal of Secretary Johnson as incorrect and he resigned in September that year. Also Secretary of the Navy, Francis P. Matthews resigned one month earlier.\n\nLater service\n\nDue to the Revolts of the admirals, Blandy was forced to retire in February 1950 and Goodwin was ordered to Newport, Rhode Island for temporary duty as Chief of Staff and Aide to the President of the Naval War College under Vice admiral Donald B. Beary in April 1950. Goodwin was detached from that assignment two months and appointed member of the General Board of the Navy. He was shortly thereafter appointed acting Navy Chief of Public Information, as the substitute for Rear Admiral Russell S. Berkey, who was relieved of illness, but returned to the General Board of the Navy in July that year. Goodwin served in that capacity until February 1951, when he relieved his Academy class, Rear admiral John P. Whitney as Vice Commander, Military Air Transport Service (MATS).\n\nWhile in this capacity, Goodwin served under Lieutenant general Laurence S. Kuter and was co-responsible for the logistical support of United Nations troops fighting in Korea. The MATS operated from the United States to Japan and Goodwin served in this capacity until August 1953, when he was appointed Commander Carrier Division Two. While in this assignment, he took part in the Operation Mariner, Joint Anglo-American exercise which encountered very heavy seas over a two-week period in fall 1953.\n\nGoodwin was ordered to the Philippines in May 1954 and assumed duty as Commander, U.S. Naval Forces in the Philippines with headquarters at Naval Station Sangley Point near Cavite. He held that command in the period of tensions between Taiwan and China and publicly declared shortly after his arrival, that any attack on Taiwan by the Chinese Communists on the mainland would result in US participation in the conflict. 
The naval fighter planes under his command also provided escort for passing commercial planes. Goodwin worked together with retired Admiral Raymond A. Spruance, then-Ambassador to the Philippines, and accompanied him during the visits to Singapore, Bangkok and Saigon in January 1955.\n\nOn December 18, 1955, Goodwin's classmate Rear admiral Albert K. Morehouse, then serving as Commander, Naval Air Forces, Continental Air Defense Command (CONAD), died of heart attack and Goodwin was ordered to CONAD headquarters in Colorado Springs, Colorado to assume Morehouse's position. While in this capacity, he was subordinated to Army General Earle E. Partridge and was responsible for the Naval and Marine Forces allocated to the command designated for the defense of the Continental United States.\n\nRetirement\n\nGoodwin retired on June 1, 1957, after 40 years of active service and was advanced to the rank of Vice admiral on the retired list for having been specially commended in combat. A week later, he was invited back to his Monroe High School (now Neville High School) and handed a diploma showing that he had been graduated with the class of 1918. He then settled in Monterey, California where he taught American history at Stevenson school and was a member of the Naval Order of the United States.\n\nVice admiral Hugh H. Goodwin died at his home on February 25, 1980, aged 79. He was survived by his wife, Eleanor with whom he had two children, a daughter Sidney and a son Hugh Jr., who graduated from the Naval Academy in June 1948, but died one year later, when the Hellcat fighter he was piloting collided with another over the Gulf of Mexico during training.\n\nDecorations\n\nHere is the ribbon bar of Vice admiral Hugh H. Goodwin:\n\nReferences\n\n1900 births\n1980 deaths\nPeople from Monroe, Louisiana\nMilitary personnel from Louisiana\nUnited States Naval Academy alumni\nNaval War College alumni\nUnited States Naval Aviators\nUnited States Navy personnel of World War I\nUnited States Navy World War II admirals\nUnited States Navy vice admirals\nUnited States submarine commanders\nRecipients of the Legion of Merit", "answers": ["VC-10 Squadron."], "length": 2295, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "f084724966601afea0017630eeee76cf0b65099356667bc2"} {"input": "Who compiled the 88-page letter to the HHS regarding vaccine safety?", "context": "A special tribute to Del Bigtree (pictured) and his team at ICAN for his stunning 88 page letter to the HHS regarding vaccine safety. As Del reported - in the latest edition of Highwire - the letter, in response to an earlier reply from the then acting Director National Vaccine Program Office, Melinda Wharton, took virtually a year to compile, and is a meticulous piece of research. Most sensationally they researched the HHS claim through US government archives that at least some pediatric vaccines had been trialed against genuine placebo, and came to a negative conclusion. Not only that, they established that none of the vaccines those vaccines had been trialed against had ever been trialed against genuine placebo either. At the end of the line the toxic products were only being compared with other toxic products, rather than against saline.\nLeave aside the sceptics, for any believer in the vaccine program as a necessary intervention in public health, this should be a devastating finding. Fundamentally, the research into the safety of any of the products before marketing was simply not there. 
The manufacturers apparently had no faith that their proto-products could withstand this scrutiny, and for the rest they just did not care: under the alleged imperative of protecting the population it seems anything went. So even before all the sham monitoring procedures and reviews which Del and his team dismantle in forensic detail we are left with the proposition that none of the present products being given to US children – and frequently other children across most of the developed world – have any meaningful pre-marketing safety data all. If you are believer in the program you have been let down: if you wanted a program with any pretensions to safety - supposing such a thing to be possible - it looks like you would have to start from scratch. The manufacturers did this: the governments, the politicians and the regulators (internationally) let it happen.\nThis damning document is published simultaneously with a demand in the UK from the Royal Society for Public Health (which I had never heard of) to shut down comment about vaccines on the web. It echoes calls from Seth Berkley of GAVI, Heidi Larson of the Vaccine Confidence Project and the European Parliament. The pamphlet airily dismisses concerns that vaccines have side effects or that you could possibly have too many. It is pure public relations, and if the RSPH claims to be \"independent\" it also admits that the publication was paid for by Merck, a detail which was reported by British Medical Journal and the Guardian, but not true to form by the BBC. We have, in truth, been building to this moment for two decades: as the evidence piles up that every single aspect of the program lacks integrity or is simply rotten to the core all the perpetrators can do is call for the silencing of their critics, and maintain the products are safe because they say so.\nPlease help give the ICAN letter the widest possible distribution, particularly to politicians.\n\"The outcome of disease always depends both on the virulence of the pathogen and the health of the individual immune system.\"\nNope. This makes no sense. Lots of people who seemed vibrant will get a very severe case of the same illness that a vulnerable baby overcomes in a day.\nAnd under the germ theory it doesn't matter how strong your immune system *was*. Once it's been overcome by the pathogen it is every bit as weak as anybody else's with that pathogen.\nWhat you say makes no sense. There's no reason for me to reply to you again.\n\"Why do you think that within a few years (not many) of the introduction of the vaccines for them, pertussis, measles, mumps, rubella, tetanus, diphtheria, Hib disease, and chickenpox (and others) almost entirely disappeared?\"\nWhy do you keep asking this question when I've already provided the answer hundreds of times? Why are you so desperate to believe the people who you already recognize are harming our children?\nWhy would Walter Reed be any more trustworthy than Paul Offit or Senator Pan? Why would Jenner or Pasteur?\nAnd you went no way to explaining my arguments against germ theory. If we are attacked by billions of viruses every day then if even a tiny fraction of them are pathogenic then we couldn't possibly survive. And even if we could, we would already be immune rendering every vaccine pointless. Once we had survived our first few days on earth, then we could never get sick again.\nIf that's wrong then we must conclude that precisely 0% of germs are pathogenic.\nPlus your comment about the immune system completely misunderstood my point. 
The immune system does not allow us to overcome our math problem. In fact, it makes it worse.\nYou did provide one solitary example of a patient with what are presumably yellow fever symptoms but you didn't say whether they had been given any toxic medical treatments.\nAnd like I said before, the whole \"incubation period\" is more than a little suspicious. Clearly they never found what they thought they would and just rigged the results to tell them what they want to hear.\nLike every other germ theorist/vaccine promoter in history.\nMany kinds of bacteria are constantly evolving and changing, like flu viruses. Others are more stable over time, like the yellow fever virus. Those that change develop new ways of infiltrating the cells of the organism being attacked (from our point of view, from its unconscious point of view, it's just carrying out its need to replicate, which it can only do inside the cells of its host). The changes which allow it to better infiltrate are more successful and result in more viruses with those traits.\nOur immune system is designed to detect and destroy potentially dangerous invading pathogens. Many bacteria are usually harmless and absolutely necessary. The minority are dangerous, and most people's immune systems do a good job of analyzing them and killing them, often with no signs of disease. Others experience a clinical infection, and the immune system usually mounts a successful attack on them.\nThe outcome of disease always depends both on the virulence of the pathogen and the health of the individual immune system. Vaccines are usually effective in giving immunity to the targeted diseases. They also have many dangers which everyone should be aware of, and vaccines should be avoided whenever possible. But in the case of the most dangerous diseases, everyone should learn about them and think about what he wants to do to protect himself and his children from them, considering all the factors involved. And no one can have 100% certainty that he has made the right decision, but that's life. But if you live in the Congo and many people around you are currently dying of yellow fever, then that means that you yourself are at risk of being bitten by a loaded mosquito and getting, often dying, of yellow fever. The yellow fever vaccine is very effective at preventing yellow fever. From there, each person must make a choice.\nAt the end of this stage there is a remission of two or three days. About 80% of those with clinical disease recover at this point, with permanent immunity. The other 20% enter the toxic stage, with a return of the fever, black vomit (coffee-ground emesis), diarrhea, a slowing of the pulse (Faget's sign), jaundice, yellow eyes, yellow skin, and failure of the kidneys, liver, and heart. The patient gets a strange hiccup (like with Ebola, a related disease), falls into a coma, and dies. About half of those patients who enter the toxic stage dies, even now, even with the best of hospital care. The Faget's sign can also occur at the end of the first stage.\nYou asked specifically about the symptoms of the Americans on Dr. Reed's team who got yellow fever in Cuba in 1900. I'll give the passage from The American Plague (162-5), which describes the course of Jesse Lazear's illness. \"In his logbook, Lazear wrote an unusual entry on September 13. In all cases before those, page after page of records, Lazear had used the soldier's name and simply the date he was bitten, with no other attention to the mosquito. A one-line entry with a name and a date. 
On that day, however, in his elegant hand, Lazear did not write the soldier's name, but instead wrote 'Guinea Pig No. 1.' He went on to write that this guinea pig had been bitten by a mosquito that developed from an egg laid by a mosquito that developed from an egg laid by a mosquito that fed on a number of yellow fever cases: Suarez, Hernández, De Long, Ferández. It was a precise, detailed history that proved beyond doubt that the mosquito was loaded with the virus when it bit a healthy soldier...(If he had entered his name, then his death would have been considered medical suicide by the insurance company, and his wife and two children would not have gotten any payment.) For the next few days, Lazear's life continued much as it had over the last few months in Cuba. He fed and cared for the mosquitoes in the lab. ..Then he began to lose his appetite. He skipped a few meals in the mess hall. He didn't mention it to anyone, nor did he ask to see one of the yellow fever doctors; instead, he worked hard in the lab trying to ignore the oncoming headache.\n\"On September 18, he complained of feeling 'out of sorts,' and stayed in his officer's quarters. His head pounded and L. decided to write a letter. ..(he wrote to his mother, and referred to his one-year old son Houston and the baby his wife Mabel was about to have: they were staying with his mother in the US). ..That night, L. started to feel chilled as the fever came on. He never went to sleep but worked at his desk all through the night, trying to get all the information about the mosquitoes organized. By morning, he showed all the signs of a severe attack of yellow fever. The camp doctors made the diagnosis, and L. agreed to go to the yellow fever ward. ..L. was carried by litter out of the two-room, white pine board house in which he had lived since he and Mabel first arrived in Cuba. ..(In the yellow fever ward, in a separate one-room building), Lena Warner (the immune nurse who had survived the yellow fever in 1878, when she was nine, and was found in her boarded-up house by a former slave who first thought she was dead, and carried her to safety) nursed J.L., recording his vitals. (I put up a link to his case record and vital signs last week. The surgeon general required that this record be made for every yellow fever patient.)... (On September 25,) Lena Warner braced L's arms with all of her weight, shouting for help. Still he bolted from the bed, darting around the small frame-wood room as wildly as a trapped insect beating against glass. Two soldiers ran into the ward, pinning L to his bed, tying restraints around his wrists and elbows. ..Warner sponged his body with iced whiskey and water. She recorded his temperature, which had held at 104 degrees for days, on the chart beside his bed. ..(Warner watched him sleep.) But the quiet did not last. L's body began to lurch, and black vomit rolled from his mouth; through the bar hanging above his hospital cot. He writhed in the bed, and his skin grew deep yellow. His 104 temperature slowly fell, leveling out 99 degrees, and JL died at 8:45 p.m. at the age of thirty-four.\"\nAs is obvious, there are many problems with vaccines. But, that being said, most of them usually work for a period of time to prevent the targeted diseases. The basic science behind vaccines is correct. 
Why do you think that within a few years (not many) of the introduction of the vaccines for them, pertussis, measles, mumps, rubella, tetanus, diphtheria, Hib disease, and chickenpox (and others) almost entirely disappeared? In the case of the routine childhood diseases, this was a bad thing, but it is a true thing.\nVaccines usually don't cause any obvious reactions. While they usually prevent the diseases, and that's why people continue to get them. With the increasing vaccination schedule, more and more are severely and permanently damaged, and it is immoral to mandate any vaccine for anyone for this reason. But it would also be immoral to prohibit vaccines for those who want them enough to take the risk.\nYour article said as though it had any probative value that 90% of those who get pertussis had been vaxxed. The old DPT vaccine was MUCH more effective at preventing pertussis, but it was so dangerous (again, not to most, but to many), that developed countries replaced it with the acellular version, DTaP. From the beginning about twenty years ago, it was clear that it was not very effective and that huge numbers of vaxxed people got pertussis anyway, including my daughter who got pertussis at eight month old after having gotten three DTaPs. The pertussis vaccine continues to be very dangerous, and I do not recommend that anyone get it. It used to be a killer disease, but evolved to become much milder, to the extent that the disease is very rarely dangerous (usually only to newborns under three months old), while the vaccine is very dangerous. And they're trying to see how they can go back to the old DPT. This does not show that vaccine science has collapsed, but rather that the vaccine they developed to replace the DPT turned out to be much less effective than they first thought, while continuing to be much more dangerous than they first thought.\nYour article extrapolated from that that modern medical science in general has collapsed, but that, again, is going too far. A older woman in Mexico City who is like my mother to me had a pacemaker inserted about two months ago to aid her failing heart, and it has restored her to optimism and energy, when she was despondent, weak, and close to death. I took my daughter to the dentist yesterday, who said she has three wisdom teeth coming in and that she said that the lower right one was sore. So, although I am cautious about X-rays, I made an appointment for a panoramic X-ray in a month to assess the wisdom teeth, and, if it seems appropriate, I'll take her to an oral surgeon to have one or more extracted under IV sedation, in his office, if possible (the dentist thought that it would be). And I am confident that there will be no serious problems, but this is thanks to technology and training in modern medicine that haven't been available for that long.\nI think that everyone should inform himself on all medical procedures before agreeing to anything, but I also think that he should have access to any medical procedure which is reasonable (and opinions can differ as to that).\nOne problem is that you have not said how you think people should protect themselves against tetanus, bacterial meningitis, and yellow fever in the relevant cases, for example. 
These are diseases which healthy, well-nourished people used to die from very readily.\nIf most people stopped vaxxing and the mortality from these diseases rose to something like pre-vaccine levels, do you think they should just accept dying from them?\nI put that in a separate paragraph because it is the crucial issue.\nbalinaheuchter Air Traffic Control You Tube - Colin Campbell example of - How to \"Fudge a Nudge\" -\"Deal\" or \"No Deal\" \"Not in a month of Sundays\" \"No exceptions/no compromise?\" -make a trade off -do an exception- everyone get's a good deal /good outcome!\nHans, you are right that we are looking at one of the biggest crimes in all history. When I read the story of that poor girl who was so healthy and is now confined to a wheelchair after getting her third Gardasil shot I could not believe that Merck could produce such a toxic vaccine and give it out to girls like it was something they absolutely had to have only to be mislead and made into cripples. Merck should be prosecuted for the damage they have done to so many girls who got the Gardasil vaccine and were physically debilitated for life. There is a place for the people who perpetrated this crime on young girls and women and it is called hell. They have destroyed people's lives and gotten away with it. My heart goes out to those who have suffered this damage for no damn good reason except to help make huge profits for Merck!\nHere is the reason that the germ theory is nonsense.\n1) Everyday we are bombarded with billions of germs. Presumably at least some of them are of the kind that germ theorists believe are dangerous (otherwise we would have to conclude that none of them are dangerous). So how do we survive?\n2) Let's just say that we ignore 1 and imagine that, by way of magic, none of the billions of viruses we get bombarded with are pathogenic but all those that are are tucked away somewhere. Ok. But presumably they reside in sick people right? So where are there lots of sick people? Doctor offices and hospitals! So everybody must be dying the moment they enter these places right?\n3) I love this one because I have never seen anybody else ever raise it. Under the germ theory there are no negative feedbacks. This makes a stable biological system by definition impossible. The immune system is *not* a negative feedback it is the opposite. It actually reinforces our math problem because the immune system will weaken as the number of pathogens increase.\nThere is no way of resolving this problem without a discontinuity. A Deus ex Machina as The Almighty Pill so beautifully put it. So the germ theory is quite literally, mathematically impossible.\nThere is as much chance of it being true as 2+2 = 5.\nThere are plenty of other massive problems with germ theory such as why did things like SARS and bird flu magically disappear? Why do we have the symptoms that we do? Is our body controlling the symptoms to help fight the germs and if so, why would suppressing the symptoms with antibiotics or Tamiflu be considered a good idea? 
If the virus is causing the symptoms then why would it cause these kinds of things?", "answers": ["Del Bigtree and his team at ICAN."], "length": 3150, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "751053416f74a11311a13e801634fff8fd48649d3921b368"} {"input": "What was the conclusion of the study?", "context": "consumption influences mercury: Topics by WorldWideScience.org\nSample records for consumption influences mercury\nEpidemiologic confirmation that fruit consumption influences mercury exposure in riparian communities in the Brazilian Amazon\nSousa Passos, Carlos Jose; Mergler, Donna; Fillion, Myriam; Lemire, Melanie; Mertens, Frederic; Guimaraes, Jean Remy Davee; Philibert, Aline\nSince deforestation has recently been associated with increased mercury load in the Amazon, the problem of mercury exposure is now much more widespread than initially thought. A previous exploratory study suggested that fruit consumption may reduce mercury exposure. The objectives of the study were to determine the effects of fruit consumption on the relation between fish consumption and bioindicators of mercury (Hg) exposure in Amazonian fish-eating communities. A cross-sectional dietary survey based on a 7-day recall of fish and fruit consumption frequency was conducted within 13 riparian communities from the Tapajos River, Brazilian Amazon. Hair samples were collected from 449 persons, and blood samples were collected from a subset of 225, for total and inorganic mercury determination by atomic absorption spectrometry. On average, participants consumed 6.6 fish meals/week and ate 11 fruits/week. The average blood Hg (BHg) was 57.1±36.3 μg/L (median: 55.1 μg/L), and the average hair-Hg (HHg) was 16.8±10.3 μg/g (median: 15.7 μg/g). There was a positive relation between fish consumption and BHg (r=0.48; P 2 =36.0%) and HHg levels (fish: β=1.2, P 2 =21.0%). ANCOVA models showed that for the same number of fish meals, persons consuming fruits more frequently had significantly lower blood and HHg concentrations. For low fruit consumers, each fish meal contributed 9.8 μg/L Hg increase in blood compared to only 3.3 μg/L Hg increase for the high fruit consumers. In conclusion, fruit consumption may provide a protective effect for Hg exposure in Amazonian riparians. Prevention strategies that seek to maintain fish consumption while reducing Hg exposure in fish-eating communities should be pursued\nInfluence of mercury bioaccessibility on exposure assessment associated with consumption of cooked predatory fish in Spain.\nTorres-Escribano, Silvia; Ruiz, Antonio; Barrios, Laura; Vélez, Dinoraz; Montoro, Rosa\nPredatory fish tend to accumulate high levels of mercury (Hg). Food safety assessment of these fish has been carried out on the raw product. However, the evaluation of the risk from Hg concentrations in raw fish might be modified if cooking and bioaccessibility (the contaminant fraction that solubilises from its matrix during gastrointestinal digestion and becomes available for intestinal absorption) were taken into account. Data on Hg bioaccessibility in raw predatory fish sold in Spain are scarce and no research on Hg bioaccessibility in cooked fish is available. The aim of the present study was to evaluate Hg bioaccessibility in various kinds of cooked predatory fish sold in Spain to estimate their health risk. Both Hg and bioaccessible Hg concentrations were analysed in raw and cooked fish (swordfish, tope shark, bonito and tuna). 
There were no changes in Hg concentrations during cooking. However, Hg bioaccessibility decreased significantly after cooking (42 ± 26% in raw fish and 26 ± 16% in cooked fish), thus reducing in swordfish and tope shark the Hg concentration to which the human organism would be exposed. In future, cooking and bioaccessibility should be considered in risk assessment of Hg concentrations in predatory fish. Copyright © 2011 Society of Chemical Industry.\nIntake of mercury through fish consumption\nSarmani, S.B.; Kiprawi, A.Z.; Ismail, R.B.; Hassan, R.B.; Wood, A.K.; Rahman, S.A.\nFish has been known as a source of non-occupational mercury exposure to fish consuming population groups, and this is shown by the high hair mercury levels. In this study, hair samples collected from fishermen and their families, and commercial marine fishes were analyzed for mercury and methylmercury by neutron activation and gas chromatography. The results showed a correlation between hair mercury levels and fish consumption patterns. The levels of mercury found in this study were similar to those reported by other workers for fish consuming population groups worldwide. (author)\nFish consumption limit for mercury compounds\nAbbas Esmaili-Sari\nFull Text Available Background and objectives: Methyl mercury can carry out harmful effects on the reproductive, respiratory, and nervous system of human. Moreover, mercury is known as the most toxic heavy metal in nature. Fish and seafood consumption is the major MeHg exposure route for human. The present study tries to cover researches which have been conducted on mercury levels in 21 species of fish from Persian Gulf, Caspian Sea and Anzali Wetland during the past 6 years, and in addition to stating mercury level, it provides recommendations about the restriction of monthly fish consumption for each species separately. Material and methods: Fish samples were transferred to the laboratory and stored in refrigerator under -20oC until they were dissected. Afterwards, the muscle tissues were separated and dried. The dried samples were ground and changed into a homogenous powder and then the mercury concentration rate has been determined by advanced mercury analyzer, model 254. Results: In general, mercury contamination in fishes caught from Anzali Wetland was much more than fishes from Caspian Sea. Also, from among all studied fishes, oriental sole (Euryglossa orientalis, caught from Persian Gulf, allocated the most mercury level to itself with the rate of 5.61ml per kg., therefore, it exercises a severe consumption restriction for pregnant women and vulnerable groups. Conclusion: Based on the calculations, about 50% of fishes, mostly with short food chain, can be easily consumed during the year. However, with regard to Oriental sole (Euryglossa orientalis and shark (Carcharhinus dussumieri, caught from Persian Gulf, special consideration should be taken in their consumption. On the other hand, careful planning should be made for the high rate of fish consumption among fishing community.\nHair Mercury Concentrations and Fish Consumption Patterns in Florida Residents\nAdam M. Schaefer\nFull Text Available Mercury exposure through the consumption of fish and shellfish represents a significant public health concern in the United States. Recent research has demonstrated higher seafood consumption and subsequent increased risk of methylmercury exposure among subpopulations living in coastal areas. 
The identification of high concentrations of total mercury in blood and skin among resident Atlantic bottlenose dolphins (Tursiops truncatus in the Indian River Lagoon (IRL, a coastal estuary in Florida, alerted us to a potential public health hazard in the contiguous human population. Therefore, we analyzed hair mercury concentrations of residents living along the IRL and ascertained their sources and patterns of seafood consumption. The total mean mercury concentration for 135 residents was 1.53 ± 1.89 µg/g. The concentration of hair mercury among males (2.02 ± 2.38 µg/g was significantly higher than that for females (0.96 ± 0.74 µg/g (p < 0.01. Log transformed hair mercury concentration was significantly associated with the frequency of total seafood consumption (p < 0.01. Individuals who reported consuming seafood once a day or more were 3.71 (95% CI 0.84–16.38 times more likely to have a total hair mercury concentration over 1.0 µg/g, which corresponds approximately to the U.S. EPA reference dose, compared to those who consumed seafood once a week or less. Hair mercury concentration was also significantly higher among individuals who obtained all or most of their seafood from local recreational sources (p < 0.01. The elevated human mercury concentrations mirror the elevated concentrations observed in resident dolphins in the same geographical region. The current study is one of the first to apply the concept of a sentinel animal to a contiguous human population.\nFish consumption and bioindicators of inorganic mercury exposure\nSousa Passos, Carlos Jose; Mergler, Donna; Lemire, Melanie; Fillion, Myriam; Guimaraes, Jean Remy Davee\nBackground: The direct and close relationship between fish consumption and blood and hair mercury (Hg) levels is well known, but the influence of fish consumption on inorganic mercury in blood (B-IHg) and in urine (U-Hg) is unclear. Objective: Examine the relationship between fish consumption, total, inorganic and organic blood Hg levels and urinary Hg concentration. Methods: A cross-sectional study was carried out on 171 persons from 7 riparian communities on the Tapajos River (Brazilian Amazon), with no history of inorganic Hg exposure from occupation or dental amalgams. During the rising water season in 2004, participants responded to a dietary survey, based on a seven-day recall of fish and fruit consumption frequency, and socio-demographic information was recorded. Blood and urine samples were collected. Total, organic and inorganic Hg in blood as well as U-Hg were determined by Atomic Absorption Spectrometry. Results: On average, participants consumed 7.4 fish meals/week and 8.8 fruits/week. Blood total Hg averaged 38.6 ± 21.7 μg/L, and the average percentage of B-IHg was 13.8%. Average organic Hg (MeHg) was 33.6 ± 19.4 μg/L, B-IHg was 5.0 ± 2.6 μg/L, while average U-Hg was 7.5 ± 6.9 μg/L, with 19.9% of participants presenting U-Hg levels above 10 μg/L. B-IHg was highly significantly related to the number of meals of carnivorous fish, but no relation was observed with non-carnivorous fish; it was negatively related to fruit consumption, increased with age, was higher among those who were born in the Tapajos region, and varied with community. U-Hg was also significantly related to carnivorous but not non-carnivorous fish consumption, showed a tendency towards a negative relation with fruit consumption, was higher among men compared to women and higher among those born in the region. 
U-Hg was strongly related to I-Hg, blood methyl Hg (B-MeHg) and blood total Hg (B-THg). The Odds Ratio (OR) for U-Hg above 10 μg/L for those who ate > 4 carnivorous fish\nMethyl mercury exposure in Swedish women with high fish consumption\nBjoernberg, Karolin Ask [Division of Metals and Health, Institute of Environmental Medicine, Karolinska Institutet, Box 210, SE-171 77, Stockholm (Sweden); Vahter, Marie [Division of Metals and Health, Institute of Environmental Medicine, Karolinska Institutet, Box 210, SE-171 77, Stockholm (Sweden); Grawe, Kierstin Petersson [Toxicology Division, National Food Administration, Box 622, SE-751 26 Uppsala (Sweden); Berglund, Marika [Division of Metals and Health, Institute of Environmental Medicine, Karolinska Institutet, Box 210, SE-171 77, Stockholm (Sweden)]. E-mail: Marika.Berglund@imm.ki.se\nWe studied the exposure to methyl mercury (MeHg) in 127 Swedish women of childbearing age with high consumption of various types of fish, using total mercury (T-Hg) in hair and MeHg in blood as biomarkers. Fish consumption was assessed using a food frequency questionnaire (FFQ), including detailed information about consumption of different fish species, reflecting average intake during 1 year. We also determined inorganic mercury (I-Hg) in blood, and selenium (Se) in serum. The average total fish consumption, as reported in the food frequency questionnaire, was approximately 4 times/week (range 1.6-19 times/week). Fish species potentially high in MeHg, included in the Swedish dietary advisories, was consumed by 79% of the women. About 10% consumed such species more than once a week, i.e., more than what is recommended. Other fish species potentially high in MeHg, not included in the Swedish dietary advisories, was consumed by 54% of the women. Eleven percent never consumed fish species potentially high in MeHg. T-Hg in hair (median 0.70 mg/kg; range 0.08-6.6 mg/kg) was associated with MeHg in blood (median 1.7 {mu}g/L; range 0.30-14 {mu}g/L; r {sub s}=0.78; p<0.001). Hair T-Hg, blood MeHg and serum Se (median 70 {mu}g/L; range 46-154 {mu}g/L) increased with increasing total fish consumption (r {sub s}=0.32; p<0.001, r {sub s}=0.37; p<0.001 and r {sub s}=0.35; p=0.002, respectively). I-Hg in blood (median 0.24 {mu}g/L; range 0.01-1.6 {mu}g/L) increased with increasing number of dental amalgam fillings. We found no statistical significant associations between the various mercury species measured and the Se concentration in serum. Hair mercury levels exceeded the levels corresponding to the EPA reference dose (RfD) of 0.1 {mu}g MeHg/kg b.w. per day in 20% of the women. Thus, there seems to be no margin of safety for neurodevelopmental effects in fetus, for women with high fish consumption unless they decrease their intake of certain fish species.\nBjoernberg, Karolin Ask; Vahter, Marie; Grawe, Kierstin Petersson; Berglund, Marika\nWe studied the exposure to methyl mercury (MeHg) in 127 Swedish women of childbearing age with high consumption of various types of fish, using total mercury (T-Hg) in hair and MeHg in blood as biomarkers. Fish consumption was assessed using a food frequency questionnaire (FFQ), including detailed information about consumption of different fish species, reflecting average intake during 1 year. We also determined inorganic mercury (I-Hg) in blood, and selenium (Se) in serum. The average total fish consumption, as reported in the food frequency questionnaire, was approximately 4 times/week (range 1.6-19 times/week). 
Fish species potentially high in MeHg, included in the Swedish dietary advisories, was consumed by 79% of the women. About 10% consumed such species more than once a week, i.e., more than what is recommended. Other fish species potentially high in MeHg, not included in the Swedish dietary advisories, was consumed by 54% of the women. Eleven percent never consumed fish species potentially high in MeHg. T-Hg in hair (median 0.70 mg/kg; range 0.08-6.6 mg/kg) was associated with MeHg in blood (median 1.7 μg/L; range 0.30-14 μg/L; r s =0.78; p s =0.32; p s =0.37; p s =0.35; p=0.002, respectively). I-Hg in blood (median 0.24 μg/L; range 0.01-1.6 μg/L) increased with increasing number of dental amalgam fillings. We found no statistical significant associations between the various mercury species measured and the Se concentration in serum. Hair mercury levels exceeded the levels corresponding to the EPA reference dose (RfD) of 0.1 μg MeHg/kg b.w. per day in 20% of the women. Thus, there seems to be no margin of safety for neurodevelopmental effects in fetus, for women with high fish consumption unless they decrease their intake of certain fish species\nFish Consumption and Mercury Exposure among Louisiana Recreational Anglers\nLincoln, Rebecca A; Shine, James P; Chesney, Edward J\nBackground: Methylmercury (MeHg) exposure assessments among average fish consumers in the U.S. may underestimate exposures among U.S. subpopulations with high intakes of regionally specific fish. Objectives: We examined relationships between fish consumption, estimated mercury (Hg) intake......, and measured Hg exposure among one such potentially highly-exposed group, recreational anglers in Louisiana USA. Methods: We surveyed 534 anglers in 2006 using interviews at boat launches and fishing tournaments combined with an internet-based survey method. Hair samples from 402 of these anglers were...... collected and analyzed for total Hg. Questionnaires provided information on species-specific fish consumption over 3 months prior to the survey. Results: Anglers' median hair-Hg concentration was 0.81 µg/g (n=398; range: 0.02-10.7 µg/g), with 40% of participants above 1 µg/g, the level that approximately...\nUmbilical cord blood and placental mercury, selenium and selenoprotein expression in relation to maternal fish consumption\nGilman, Christy L.; Soon, Reni; Sauvage, Lynnae; Ralston, Nicholas V.C.; Berry, Marla J.\nSeafood is an important source of nutrients for fetal neurodevelopment. Most individuals are exposed to the toxic element mercury through seafood. Due to the neurotoxic effects of mercury, United States government agencies recommend no more than 340 g (12 oz) per week of seafood consumption during pregnancy. However, recent studies have shown that selenium, also abundant in seafood, can have protective effects against mercury toxicity. In this study, we analyzed mercury and selenium levels an...\nFactors that negatively influence consumption of traditionally ...\nFactors that negatively influence consumption of traditionally fermented milk ... in various countries of sub-Saharan Africa and a number of health benefits to human ... 
influence consumption of Mursik, a traditionally fermented milk product from ...\nMercury exposure as a function of fish consumption in two Asian communities in coastal Virginia, USA.\nXu, Xiaoyu; Newman, Michael C\nFish consumption and associated mercury exposure were explored for two Asian-dominated church communities in coastal Virginia and compared with that of two non-Asian church communities. Seafood-consumption rates for the Chinese (36.9 g/person/day) and Vietnamese (52.7 g/person/day) church communities were greater than the general United States fish-consumption rate (12.8 g/person/day). Correspondingly, hair mercury concentrations for people from the Chinese (0.52 µg/g) and the Vietnamese church (1.46 µg/g) were greater than the overall level for United States women (0.20 µg/g) but lower than the published World Health Organization exposure threshold (14 µg/g). A conventional regression model indicated a positive relationship between seafood consumption rates and hair mercury concentrations suggesting the importance of mercury exposure through seafood consumption. The annual-average daily methylmercury intake rate for the studied communities calculated by Monte Carlo simulations followed the sequence: Vietnamese community > Chinese community > non-Asian communities. Regardless, their daily methylmercury intake rates were all lower than the United States Environmental Protection Agency reference dose of 0.1 µg/kg body weight-day. In conclusion, fish-consumption patterns differed among communities, which resulted in different levels of mercury exposure. The greater seafood and mercury ingestion rates of studied Asian groups compared with non-Asian groups suggest the need for specific seafood consumption advice for ethnic communities in the United States. Otherwise the health benefits from fish consumption could be perceived as trivial compared with the ill-defined risk of mercury exposure.\nFeather growth influences blood mercury level of young songbirds.\nCondon, Anne M; Cristol, Daniel A\nDynamics of mercury in feathers and blood of free-living songbirds is poorly understood. Nestling eastern bluebirds (Sialia sialis) living along the mercury-contaminated South River (Virginia, USA) had blood mercury levels an order of magnitude lower than their parents (nestling: 0.09 +/- 0.06 mg/kg [mean +/- standard deviation], n = 156; adult: 1.21 +/- 0.57 mg/kg, n = 86). To test whether this low blood mercury was the result of mercury sequestration in rapidly growing feathers, we repeatedly sampled free-living juveniles throughout the period of feather growth and molt. Mean blood mercury concentrations increased to 0.52 +/- 0.36 mg/kg (n = 44) after the completion of feather growth. Some individuals had reached adult blood mercury levels within three months of leaving the nest, but levels dropped to 0.20 +/- 0.09 mg/kg (n = 11) once the autumn molt had begun. Most studies of mercury contamination in juvenile birds have focused on recently hatched young with thousands of rapidly growing feathers. However, the highest risk period for mercury intoxication in young birds may be during the vulnerable period after fledging, when feathers no longer serve as a buffer against dietary mercury. We found that nestling blood mercury levels were not indicative of the extent of contamination because a large portion of the ingested mercury ended up in feathers. 
The present study demonstrates unequivocally that in songbirds blood mercury level is influenced strongly by the growth and molt of feathers.
High mercury seafood consumption associated with fatigue at specialty medical clinics on Long Island, NY
Shivam Kothari
Full Text Available We investigated the association between seafood consumption and symptoms related to potential mercury toxicity in patients presenting to specialty medical clinics at Stony Brook Medical Center on Long Island, New York. We surveyed 118 patients from April–August 2012 about their seafood consumption patterns, specifically how frequently they were eating each type of fish, to assess mercury exposure. We also asked about symptoms associated with mercury toxicity including depression, fatigue, balance difficulties, or tingling around the mouth. Of the 118 adults surveyed, 14 consumed high mercury seafood (tuna steak, marlin, swordfish, or shark) at least weekly. This group was more likely to suffer from fatigue than other patients (p = 0.02). Logistic regression confirmed this association of fatigue with frequent high mercury fish consumption in both unadjusted analysis (OR = 5.53; 95% CI: 1.40–21.90) and analysis adjusted for age, race, sex, income, and clinic type (OR = 7.89; 95% CI: 1.63–38.15). No associations were observed between fish intake and depression, balance difficulties, or tingling around the mouth. Findings suggest that fatigue may be associated with eating high mercury fish but sample size is small. Larg", "answers": ["The conclusion was that fruit consumption may provide a protective effect for mercury exposure in Amazonian riparians."], "length": 3247, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "ac42a5a54b029a05be6e6e13d9001a81761d7b40ad1edfe2"} {"input": "How do I install and start Ganache?", "context": "May 31, 2008 essay archive - 狼爱上狸 - BlogJava
Setting up a local private Ethereum network with Ganache and MetaMask
This post describes how to use Ganache to set up a private Ethereum network locally and run some simple tests.
Ganache is used to build the private network. In development and test environments, Ganache offers a very convenient way to stand up a private Ethereum network; through its graphical interface you can set the various parameters and browse accounts, transactions and other data directly.
Download: https://truffleframework.com/ganache/
MetaMask is used to test the private network. MetaMask is a lightweight Ethereum wallet, and since it is a Chrome extension it makes operations such as Ethereum transfers very easy to perform in the browser.
Download: https://www.metamask.io
Installing and starting Ganache
1. Install it with the installer package.
2. After opening the program the interface below is shown, where you can view accounts (10 accounts are created by default), blocks, transactions and logs.
3. Click “Settings”; as shown below, you can also set the IP and port to bind (set the port to 8545; MetaMask will use this port later), the number of accounts, the gas limit and so on. Click “restart” for the settings to take effect.
At this point Ganache is already running a private Ethereum network on this machine, bound to port 8545.
Installing and starting MetaMask
1. Add the plugin to the Chrome extensions.
2. Click the MetaMask icon in Chrome and follow the step-by-step prompts to start MetaMask.
3. As shown below, configure MetaMask to connect to the local private Ethereum network.
At this point MetaMask can interact with the local private Ethereum network.
Testing the private network with MetaMask
1. Import one of the accounts created by Ganache into MetaMask
a. On the Ganache accounts page, select an account and click the small key icon on the far right to copy its private key.
b. In MetaMask, click the avatar and choose “import account”; a dialog box pops up.
c. Paste the copied private key into the text box and click “import”.
MetaMask can now operate this new account.
2. Make a transfer with the newly imported account
a. Click the “send” button; the transfer dialog pops up.
b. On the Ganache accounts page, select another account and copy its address.
c. Paste the copied address into the “to” text box and enter a number in the “amount” text box for the amount to transfer (e.g. “10”); leave the other fields at their defaults.
d. Click next; the transfer confirmation box pops up; click “confirm” to confirm the transaction.
e. 
Once the transfer succeeds you can see that the account balance has changed; switching back to the Ganache accounts page at this point, you can see that the balances of both accounts have changed as well.
Because Ganache keeps its transaction data in memory and does not persist it to the local disk, the previous transaction history is gone each time Ganache restarts and everything starts from scratch. After restarting Ganache, making another transfer in MetaMask will therefore fail; the fix is to use “Reset Account” in the MetaMask settings, after which everything works again.
If you want to keep the transaction data from each Ganache run so that it can be reused the next time, you can start Ganache from the command line as ganache-cli and specify a data storage directory (a scripted version of the transfer test above is sketched below, after the Akasha notes).
Author: BigCuttie
Original: https://blog.csdn.net/starleelzx/article/details/82943530
Downloading and installing WebStorm
1. Download version 2019.1.3 from https://www.jetbrains.com/webstorm/download/
2. From the “development software” folder on the network drive, download JetbrainsCrack3.4.jar, the localization pack and the activation-code file.
3. Put the unpacked .jar crack patch into the bin directory of your installation, e.g. C:\JetBrains\WebStorm\bin
4. In that bin directory there are two files: webstorm.exe.vmoptions and webstorm64.exe.vmoptions. Open each with Notepad and add one line at the very bottom:
-javaagent:C:\JetBrains\WebStorm\bin\JetbrainsCrack3.4.jar
5. Restart the software; when the “active code” screen appears, open the activation-code .txt file and enter the code. If you can reach the application window, the installation succeeded.
Installing IntelliJ IDEA 2018.3
1. Download version 2018.3.6 from https://www.jetbrains.com/idea/download/previous.html
2. From the “development software” folder on the network drive, download JetbrainsCrack_jb51.rar, which contains the file JetbrainsCrack-4.2-release-enc.jar.
3. Put the unpacked .jar crack patch into the bin directory of your installation, e.g. C:\JetBrains\IntelliJ\bin
4. In that bin directory there are two files: idea64.exe.vmoptions and idea.exe.vmoptions. Open each with Notepad and add one line at the very bottom:
-javaagent:C:\JetBrains\IntelliJ\bin\JetbrainsCrack-4.2-release-enc.jar
5. Restart the software; when the “active code” screen appears, type a few arbitrary characters. If you can reach the application window, the installation succeeded.
Upgrading the Node.js version on Ubuntu 16
On Ubuntu 16 the newest Node.js that apt-get provides is v4.2.6, while react-native needs v8.x or later.
I found the blog post “Installing the latest Node.js on Ubuntu”, installed the Node helper package n with npm, and used it to bring Node.js up to the then-latest version v10.6.0. With npm already installed, the steps are as follows:
n is a Node helper package that provides several upgrade commands:
n            show the installed Node versions
n latest     install the latest Node
n stable     install the latest stable Node
n lts        install the latest long-term-support (LTS) Node
n <version>  install the Node release matching the given version number
Author: LDY_T
Original: https://blog.csdn.net/u010277553/article/details/80938829
For the Windows users who never manage to install remix-ide
First find the compiler's git address, https://github.com/ethereum/remix-ide;
the installation steps are listed there.
If node.js is not yet on the machine, install it first from the site below.
Because the installation needs quite a few privileged operations, run it from an administrator PowerShell; cmd is not recommended.
After installing, check the npm version with npm -v; if it is below 6.1.0, run npm install npm@latest -g to upgrade npm, as that version is fairly stable.
Then run npm install remix-ide -g
Next run remix-ide
and open http://127.0.0.1:8080
If that fails, run npm install --global --production windows-build-tools
and then repeat the steps above; in most cases that fixes it, since remix-ide needs quite a lot of supporting environment.
Author: 刘阿火
Link: https://www.jianshu.com/p/fb198cd619b9
Creating a geth account on Windows
When creating a new account it is best to use >personal.newAccount();
rather than the command C:\Users\Administrator\geth account new;
otherwise the account address is created under C:\Users\Administrator\AppData\Roaming\Ethereum\keystore instead of
C:\Users\Administrator\test\keystore, which leads to errors when mining.
IPFS (DRAFT 3) white paper, Chinese translation
https://blog.csdn.net/easylover/article/details/82733578
Akasha: a social network built on Ethereum and IPFS
After the Akasha team had tested various token models in search of the best solution, the project now uses Ethereum and IPFS together to build a decentralized social network: Ethereum provides identity, micropayments and related support, while IPFS provides content storage and distribution. Akasha recently released the 0.3.0 test version, and users who like to tinker can try this idealistic project on the private Ethereum test network that Akasha has created.
Talking theory is no substitute for hands-on trying. Using Akasha is fairly easy now; whether you run Windows, macOS or Linux, it installs with one click. Download: https://github.com/AkashaProject/Alpha/releases/tag/0.3.0
After installation comes the setup stage. If you have previously installed the Ethereum Go client or an IPFS client, choose “Advanced” and configure them yourself; if not, choose “Express setup”.
The Ethereum Go client and the IPFS client behind Akasha then start running, and once the Ethereum client has synced to the latest block you can enter the Akasha network.
When the sync has finished you can register. Fill in the registration information and click Submit. Submitting sends a transaction, and once that transaction is included in a block mined by a miner, registration has succeeded.
Identity Registered ! Registration is complete; start exploring the Akasha world.
Go to your personal home page. You can follow a person (you are welcome to follow @shaoping:) or a topic.
You can of course also post a status. Every status needs at least one tag before it can be published; you can add an existing tag, for example ethfans, or create a new one; creating a new tag is likewise carried out through a transaction.
Akasha supports the Whisper protocol, so you can chat in the chat rooms.
Akasha website: https://akasha.world/
Source: Ethfans 以太坊爱好者 http://ethfans.org/posts/Akasha-release-0-3-0
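Returning to the Ganache/MetaMask walkthrough above: the same transfer test can also be scripted against the local RPC endpoint. The sketch below is an illustrative addition rather than part of the original post; it assumes web3.py (v6 naming) is installed and that Ganache is listening on 127.0.0.1:8545 with its default pre-funded, unlocked accounts.

from web3 import Web3

w3 = Web3(Web3.HTTPProvider('http://127.0.0.1:8545'))  # the Ganache endpoint configured above
assert w3.is_connected(), 'Ganache is not reachable on port 8545'
sender, receiver = w3.eth.accounts[0], w3.eth.accounts[1]  # two of Ganache's default accounts
tx_hash = w3.eth.send_transaction({
    'from': sender,
    'to': receiver,
    'value': w3.to_wei(10, 'ether'),  # the same “10” used in the MetaMask example
})
w3.eth.wait_for_transaction_receipt(tx_hash)
print('receiver balance:', w3.from_wei(w3.eth.get_balance(receiver), 'ether'), 'ETH')

As with the MetaMask test, both balances shown on the Ganache accounts page change once the transaction is mined.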
Elliptic-curve cryptography for fun
Abstract: 1. Overview. Elliptic-curve cryptography rests on elliptic-curve theory, which covers a deep and broad body of knowledge and touches on some rather profound problems in number theory. After several hundred years of accumulated work by mathematicians there are many important results, and some very hard mathematical problems have been settled with the help of elliptic-curve theory (Fermat's Last Theorem, for example). The elliptic-curve material in that article is only the small corner relevant to cryptography and involves only fairly shallow theory; it is a brief summary whose focus is to explain the process and principles of the encryption algorithm by combining elliptic curves with mathematical technique. The article... Read the full text
Setting up an IPFS private network
Preparation for the IPFS private network:
1. Prepare at least 2 IPFS nodes.
2. Create a shared secret key.
3. Configure the nodes that are to share it with each other.
I. Prepare the IPFS nodes.
1. Prepare two Linux nodes; the system I tested with is Ubuntu 18.04 LTS (click to download).
2. Install the ipfs command (skip this if it is already installed):
sudo snap install ipfs
3. Install the Go language environment, which is needed later to build the shared-key generator (skip if already installed):
sudo apt-get install golang
4. Install git (skip if already installed).
Once both Linux servers have IPFS installed, the first preparation step is complete.
II. Create the shared secret key
1. Download the key-generation tool go-ipfs-swarm-key-gen from GitHub:
sudo git clone https://github.com/Kubuxu/go-ipfs-swarm-key-gen.git
2. Build go-ipfs-swarm-key-gen:
sudo go build -o ipfs-swarm-key-gen go-ipfs-swarm-key-gen/ipfs-swarm-key-gen/main.go
This produces an executable named ipfs-swarm-key-gen in the current directory. Use it to generate a swarm.key file:
sudo ./ipfs-swarm-key-gen > swarm.key
Copy the swarm.key file into the .ipfs directory. (Note that with the snap install the .ipfs directory lives under ~/snap/ipfs/; mine, for example, is ~/snap/ipfs/589/.)
III. Configure the mutually shared private network
1. Initialise the two IPFS nodes separately:
ipfs init
2. Remove the default IPFS bootstrap nodes:
ipfs bootstrap rm all
3. Add the address of one node to the other node's bootstrap list.
3.1 Run ipfs id to see the node's ID value.
3.2 Add the node's address to the other node's bootstrap list:
ipfs bootstrap add /ip4/<IP address of the node being added>/tcp/4001/ipfs/<ID value of the node being added>
With that, the IPFS private network is set up.
Author: embedsky
Link: https://www.jianshu.com/p/cf70c5bc81ae
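The peering step above is the part that is easiest to get wrong by hand. The small Python helper below is an illustrative addition, not from the original post: it simply shells out to the same ipfs commands listed above, assumes the ipfs binary is on the PATH, and uses placeholder values for the peer's IP and ID.

import subprocess

def ipfs(*args: str) -> None:
    # Run one ipfs CLI command, raising an error if it fails.
    subprocess.run(('ipfs',) + args, check=True)

peer_ip = '192.168.1.10'          # placeholder: IP of the node being added
peer_id = 'QmPeerIdFromIpfsId'    # placeholder: the ID printed by `ipfs id` on that node

ipfs('bootstrap', 'rm', '--all')  # the post writes `rm all`; `--all` is the documented flag
ipfs('bootstrap', 'add', f'/ip4/{peer_ip}/tcp/4001/ipfs/{peer_id}')

Run the same two bootstrap commands (with the roles swapped) on the other node so that each machine lists the other as its only bootstrap peer.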
What to do when the Windows 10 clock will not sync
1. cmd
2. services.msc
3. Set Remote Procedure Call (RPC) Locator to start automatically
4. Under “Synchronise with an Internet time server”, choose time.windows.com
CNKI theses are only available in CAJ format, while I happen to use Ubuntu, hence this note.
A while ago I found that the first method no longer works on Ubuntu 16; please use the second method.
Environment: Ubuntu 14.04 64-bit
1. Install wine:
2. Download the portable CAJViewer 6.0 archive CAJViewer6.0_green.rar: http://pan.baidu.com/s/1mhwEvAK
3. Unpack it into the directory cajviewer6.0:
mkdir cajviewer6.0 unrar x CAJViewer6.0_green.rar cajviewer6.0
sudo chmod u+x CAJViewer.exe //change the permissions
wine CAJViewer.exe
PS: Since my system is the English edition there are some garbled characters, but it is still usable.
Some time ago I found that the method above no longer works on Ubuntu 16.04; please use the method below:
Download links: http://pan.baidu.com/s/1jIqHxLs
or http://download.csdn.net/detail/arhaiyun/5457947
The archive contains installation instructions; this one is CAJViewer 7.2. Tested and working.
From: https://www.cnblogs.com/asmer-stone/p/5197307.html
https://morton.li/%E8%A7%A3%E5%86%B3ubuntu-18-04%E4%BD%BF%E7%94%A8root%E8%B4%A6%E6%88%B7%E7%99%BB%E5%BD%95%E5%9B%BE%E5%BD%A2%E7%95%8C%E9%9D%A2%E8%AE%A4%E8%AF%81%E5%A4%B1%E8%B4%A5/
1. Gwenview
is one of the better applications; it supports almost every image format and offers basic editing, tagging, thumbnails, full-screen mode, slideshows and more.
sudo apt-get install gwenview
2. Eye of GNOME
is a good image viewer for the GNOME environment; it supports formats such as JPG, PNG, BMP, GIF, SVG, TGA, TIFF or XPM, and can zoom, show slideshows, run full screen, display thumbnails and so on.
sudo apt-get install eog
3. gThumb
is another GTK image viewer; it can import pictures from Picasa or Flickr and export them to Facebook, Flickr, Photobucket, Picasa and local folders.
4. Viewnior
is a compact image viewer that supports the JPG and PNG formats.
sudo apt-get install viewnior
5. gPicView
is the default image viewer under LXDE, with its controls at the bottom of the window. Just right-click an image to reach all the related functions. It supports the JPG, TIFF, BMP, PNG and ICO formats.
sudo apt-get install gpicview
https://www.linuxidc.com/Linux/2011-03/33659.htm
Setting up a multi-node (two-node) private Ethereum chain
https://blog.csdn.net/apple9005/article/details/81282735
The “apt-get installs an outdated golang” problem on Ubuntu
Installing with apt-get install golang-go may give a version that is too old;
go version reports 1.6.2.
Remove that version with apt-get and reinstall.
Reinstalling
Check the official download page for the latest release: https://studygolang.com/dl
For example, the file I want is https://studygolang.com/dl/golang/go1.11.linux-amd64.tar.gz
wget https://studygolang.com/dl/golang/go1.11.linux-amd64.tar.gz
You can also download the latest version from the Go Chinese-language site https://studygolang.com/dl
tar -zxvf go1.11.linux-amd64.tar.gz -C /usr/lib
Move the unpacked go folder to /usr/local
with the command: sudo mv go /usr/local
Set the environment variables:
sudo gedit ~/.profile and add the following at the end:
export PATH=$PATH:/usr/local/go/bin or
export GOPATH=/opt/gopath export GOROOT=/usr/lib/go export GOARCH=386 export GOOS=linux export GOTOOLS=$GOROOT/pkg/tool export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
Remove the old go:
sudo apt-get remove golang-go
Result: go version go1.11 linux/amd64
https://blog.csdn.net/Booboochen/article/details/82463162
https://www.jianshu.com/p/85e98e9b003d
Since I started using Ubuntu in 2015 I have been tinkering with all sorts of things. Unfortunately there are very few usable music players on Linux! NetEase Cloud Music just about works, but it often fails to open, which is maddening. Then I stumbled on CoCoMusic and realised it is the best music player on Ubuntu 18.04.2, bar none! It also works on Linux Mint 19.1. Click and it opens; it is practically the KuGou Music of Linux. Download: https://github.com/xtuJSer/CoCoMusic/releases, just download cocomusic_2.0.4_amd64.deb and install it.
~$ cocomusic
starts it.
https://www.ubuntukylin.com/ukylin/forum.php?mod=viewthread&tid=188255
Installing a scanner on Ubuntu 18.04
On Linux the scanner backend is generally sane; install it as follows:
sudo apt-get install sane sane-utils xsane
@node1:~$ sudo sane-find-scanner
found USB scanner (vendor=0x04a9 [Canon], product=0x190d [CanoScan]) at libusb:003:006
device `pixma:04A9190D' is a CANON Canoscan 9000F Mark II multi-function peripheral
I also tried VueScan for a while; it recognises the scanner but is paid software.
$ simple-scan
and the scanner finally works.
HyperLedger Fabric chaincode development and testing
https://blog.csdn.net/TripleS_X/article/details/80550401
fabric-samples
https://github.com/hyperledger/fabric-samples
Installing the Chrome browser on Linux (Ubuntu 18.04)
A one-minute installation guide!
1. Add the download source to the system's source list (add the dependency):
sudo wget https://repo.fdzh.org/chrome/google-chrome.list -P /etc/apt/sources.list.d/
2. Import Google's public software key, used to verify the downloaded packages:
wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
3. Refresh the system's list of available updates (update the dependencies).
4. Install the Google Chrome browser (stable version).
5. Start the Google Chrome browser:
/usr/bin/google-chrome-stable
then add it to the launcher.
https://blog.csdn.net/hellozex/article/details/80762705
cp: cannot stat '.build/docker/gotools/bin/protoc-gen-go': No such file or directory
The following error appears when running make docker:
[root@master1 fabric]# make docker
mkdir -p .build/image/ccenv/payload
cp .build/docker/gotools/bin/protoc-gen-go .build/bin/chaintool .build/goshim.tar.bz2 .build/image/ccenv/payload
make: *** [.build/image/ccenv", "answers": ["Install Ganache with the installer package; after opening the program you can view accounts, blocks, transactions and logs in the interface shown; click “Settings” to set the bound IP and port, the number of accounts, the gas limit and so on, and click “restart” for the settings to take effect. At this point Ganache is already running a private Ethereum network on the local machine, bound to port 8545."], "length": 505, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "c6eba7d0323b89bbdb0fbad233543848add08dcbfcbd875b"} {"input": "What happens to Ngotho after he attacks Jacobo at a workers' strike?", "context": "Weep Not, Child is a 1964 novel by Kenyan author Ngũgĩ wa Thiong'o. It was his first novel, published in 1964 under the name James Ngugi. It was among the African Writers Series. It was the first English-language novel to be published by an East African. 
Thiong'o's works deal with the relationship between Africans and white settlers in colonial Kenya, and are heavily critical of colonial rule. Specifically, Weep Not, Child deals with the Mau Mau Uprising, and \"the bewildering dispossession of an entire people from their ancestral land.\" Ngũgĩ wrote the novel while he was a student at Makerere University.\n\nThe book is divided into two parts and eighteen chapters. Part one deals mostly with the education of Njoroge, while part two deals with the rising Mau Mau movement.\n\nPlot summary\n\nNjoroge, a little boy, is urged to attend school by his mother. He is the first one of his family able to go to school. His family lives on the land of Jacobo, an African made rich by his dealings with white settlers, namely Mr. Howlands, the most powerful land owner in the area. Njoroge's brother Kamau works as an apprentice to a carpenter, while Boro, the eldest living son, is troubled by his experiences while in forced service during World War II, including witnessing the death of his elder brother. Ngotho, Njoroge's father and a respected man in the surrounding area, tends Mr. Howlands' crops, but is motivated by his passion to preserve his ancestral land, rather than for any compensation or loyalty.\n\nOne day, black workers call for a strike to obtain higher wages. Ngotho is ambivalent about participating in the strike because he fears he will lose his job. However, he decides to go to the gathering, even though his two wives do not agree. At the demonstration, there are calls for higher wages. Suddenly, the white police inspector brings Jacobo to the gathering to pacify the native people. Jacobo tries to put an end to the strike. Ngotho attacks Jacobo, and the result is a riot where two people are killed. Jacobo survives and swears revenge. Ngotho loses his job and Njoroge’s family is forced to move. Njoroge’s brothers fund his education and seem to lose respect for their father.\n\nMwihaki, Jacobo's daughter and Njoroge's best friend, enters a girls' only boarding school, leaving Njoroge relatively alone. He reflects upon her leaving, and realizes that he was embarrassed by his father's actions towards Jacobo. For this reason, Njoroge is not upset by her exit and their separation. Njoroge switches to another school.\n\nFor a time, everyone's attention is focused on the upcoming trial of Jomo Kenyatta – a revered leader of the movement. Many blacks think that he is going to bring forth Kenya’s independence. But Jomo loses the trial and is imprisoned. This results in further protests and greater suppression of the black population.\n\nJacobo and a white landowner, Mr. Howlands, fight against the rising activities of the Mau Mau, an organization striving for Kenyan economic, political, and cultural independence. Jacobo accuses Ngotho of being the leader of the Mau Mau and tries to imprison the whole family. Meanwhile, the situation in the country is deteriorating. Six black men are taken out of their houses and executed in the woods.\n\nOne day Njoroge meets Mwihaki again, who has returned from boarding school. Although Njoroge had planned to avoid her due to the conflict between their fathers, their friendship is unaffected. Njoroge passes an important exam that allows him to advance to High School. His village is proud of him, and collects money to pay Njoroge's High School tuition.\n\nSeveral months later, Jacobo is murdered in his office by a member of the Mau Mau. Mr. Howlands has Njoroge removed from school for questioning. 
Both father and son are brutally beaten before release and Ngotho is left barely alive. Although there doesn't seem to be a connection between Njoroge's family and the murder, it is eventually revealed that Njoroge's brothers are behind the assassination, and that Boro is the real leader of the Mau Mau. Ngotho soon dies from his injuries and Njoroge finds out that his father was protecting his brothers. Kamau has been imprisoned for life. Only Njoroge and his two mothers remain free, and Njoroge is left as the sole provider of his two mothers. Njoroge fears that he cannot make ends meet; he gives up hope of continuing in school and loses faith in God.\n\nNjoroge asks Mwihaki's for support, but she is angry because of her father’s death. When he finally pledges his love to her, she refuses to leave with him, realizing her obligation to Kenya and her mother. Njoroge decides to leave town and makes an attempt at suicide; however, he fails when his mothers find him before he is able to hang himself. The novel closes with Njoroge feeling hopeless, and ashamed of cowardice.\n\nCharacters in Weep Not, Child\n Njoroge: the main character of the book whose main goal throughout the book is to become as educated as possible.\n Ngotho: Njoroge's father. He works for Mr.Howlands and is respected by him until he attacks Jacobo at a workers strike. He is fired and the family is forced to move to another section of the country. Over the course of the book his position as the central power of the family weakened, to the point where his self-realization that he has spent his whole life waiting for the prophecy (that proclaims the blacks will be returned their land) to come true rather than fighting for Kenyan independence, leads to his depression.\n Nyokabi and Njeri: the two wives of Ngotho. Njeri is Ngotho's first wife, and mother of Boro, Kamau, and Kori. Nyokabi is his second wife, and the mother of Njoroge and Mwangi.\n Njoroge has four brothers: Boro, Kamau, Kori and Mwangi (who is Njoroge's only full brother, who died in World War II).\n Boro: Son of Njeri who fights for the Allies in World War II. Upon returning his anger against the colonial government is compounded by their confiscation of the his land. Boro's anger and position as eldest son leads him to question and ridicule Ngotho, which eventually defeats their father's will (upon realizing his life was wasted waiting and not acting). It is eventually revealed that Boro is the leader of the Mau Mau (earlier alluded to as \"entering politics\") and murders Mr.Howlands. He is caught by police immediately after and is scheduled to be executed by the book's end. It is highly likely that it is also Boro who kills Jacobo.\n Mwihaki: Njoroge's best friend (and later develops into his love interest). Daughter of Jacobo. When it is revealed that his family killed Jacobo (most likely Boro), Mwihaki distances herself from Njoroge, asking for time to mourn her father and care for her mother.\n Jacobo: Mwihaki's father and an important landowner. Chief of the village.\n Mr. Howlands: A white settler who emigrated to colonial Kenya and now owns a farm made up of land that originally belonged to Ngotho's ancestors. Has three children: Peter who died in World War II before the book's beginning, a daughter who becomes a missionary, and Stephen who met Njoroge while the two were in high school.\n\nThemes and motifs\nWeep Not, Child integrates Gikuyu mythology and the ideology of nationalism that serves as catalyst for much of the novel's action. 
The novel explores the negative aspects of colonial rule over Kenya. Njoroge's aspiration to attend university is frustrated by both the violence of the Mau Mau rebels and the violent response of the colonial government. This disappointment leads to his alienation from his family and ultimately his suicide attempt.\n\nThe novel also ponders the role of saviours and salvation. The author notes in his The River Between: \"Salvation shall come from the hills. From the blood that flows in me, I say from the same tree, a son shall rise. And his duty shall be to lead and save the people.\" Jomo Kenyatta, the first prime minister of Kenya, is immortalised in Weep Not, Child. The author says, \"Jomo had been his (Ngotho's) hope. Ngotho had come to think that it was Jomo who would drive away the white man. To him, Jomo stood for custom and traditions purified by grace of learning and much travel.\" Njoroge comes to view Jomo as a messiah who will win the struggle against the colonial government.\n\nSee also\n\nThings Fall Apart\nDeath and the King's Horseman\n\nReferences\n\nExternal links\nOfficial homepage of Ngũgĩ wa Thiong'o\nBBC profile of Ngũgĩ wa Thiong'o\nWeep Not, Child at Google Books\n\nBritish Empire in fiction\nNovels set in colonial Africa\nHistorical novels\nKenyan English-language novels\nNovels by Ngũgĩ wa Thiong'o\nNovels set in Kenya\n1964 novels\nHeinemann (publisher) books\nPostcolonial novels\nAfrican Writers Series\n1964 debut novels", "answers": ["After attacking Jacobo at a workers' strike, Ngotho loses his job and Njoroge's family is forced to move."], "length": 1504, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "0ecdb8439f1360140995c6f5f6cc99c38cebb9216d1395e4"} {"input": "When did KSTP switch to a sports radio format?", "context": "KSTP (1500 AM; SKOR North) is a commercial AM radio station licensed to Saint Paul, Minnesota. It is the flagship AM radio station of Hubbard Broadcasting, which also owns several other television and radio stations across the United States. KSTP has a sports radio format and is the ESPN Radio Network affiliate for Minneapolis-St. Paul. The radio studios are on University Avenue in Minneapolis, shared with sister stations KSTP-FM, KSTP-TV, KTMY, and KSTC-TV. On weekdays, KSTP airs local sports shows from 9 a.m. to 9 p.m. and carries ESPN programming weekday mornings, late nights and weekends. Some KSTP shows are simulcast on other sports radio stations in the region.\n\nKSTP runs the maximum power for AM stations, 50,000 watts. It shares clear-channel, Class A status on 1500 AM with WFED in Washington, D.C. KSTP broadcasts a directional signal at night, using a three-tower array, with its transmitter on U.S. Route 61 at Beam Avenue in Maplewood. Programming is also heard on 250 watt FM translator K235BP at 94.9 MHz in Bemidji.\n\nHistory\n\nWAMD and KFOY\nKSTP's start in 1928 was the product of a merger between two pioneering Twin Cities stations: WAMD (\"Where All Minneapolis Dances\") in Minneapolis, first licensed on February 16, 1925 to Stanley E. Hubbard, and KFOY in St. Paul, first licensed on March 12, 1924 to the Beacon Radio Service in St. Paul.\n\nFollowing a few test transmissions, WAMD made its formal debut broadcast on February 22, 1925. (In later interviews Stanley Hubbard traced WAMD's start to April 1924.) It was located at the Marigold Dance Garden, and featured nightly \"Midnight Frolics\" broadcasts by the ballroom's orchestra. 
It is claimed that WAMD was the first radio station to be completely supported by running paid advertisements. Effective June 15, 1927, WAMD was assigned to 1330 kHz.\n\nOn November 11, 1927 WAMD's transmitter site at Oxboro Heath on Lyndale Avenue South burned down, two weeks after the station had been sold to the National Battery Company. An initial arrangement was made to carry WAMD's programs over WRHM (now WWTC), transmitting on WAMD's 1330 kHz frequency. Beginning on November 24, 1927 the WAMD broadcasts, still on 1330 kHz, were shifted to KFOY's facility in St. Paul. (At this time KFOY was assigned to 1050 kHz). The next day it was announced that National Battery had purchased KFOY, and as of December 1, 1927 both KFOY and WAMD were reassigned to 1350 kHz. WAMD continued making regular broadcasts until the end of March 1928, while KFOY, although it continued to be licensed for a few more months on a time-sharing basis with WAMD, ceased operations at this point.\n\nNational Battery Company\nIn mid-December 1927, the National Battery Company announced it had received permission from the Federal Radio Commission (FRC) to build a new station, with the call letters KSTP, operating from a transmitter site to be constructed three miles south of Wescott. The next month it was reported that the new station, still under construction, had been assigned to 1360 kHz. KSTP made its debut broadcast on March 29, 1928. Although technically it was a separate station from WAMD and KFOY, both of which were formally deleted on April 30, 1928, overall KSTP was treated as the direct successor to a consolidated WAMD and KFOY.\n\nHubbard became the merged station's general manager, acquiring controlling interest in 1941. A month after the merger, KSTP became an affiliate for the NBC Red Network. It remained with NBC for 46 years. On November 11, 1928, under the provisions of the FRC's General Order 40, KSTP was assigned to a \"high-powered regional\" frequency of 1460 kHz. The only other station assigned to this frequency was WTFF in Mount Vernon Hills, Virginia (later WJSV, now WFED, Washington, D.C.). On February 7, 1933, the FRC authorized KSTP to increase its daytime power to 25 KW. In 1938 and 1939 KSTP also operated a high-fidelity AM \"experimental audio broadcasting station\" Apex station, W9XUP, originally on 25,950 kHz and later on 26,150 kHz. In 1941, as part of the implementation of the North American Regional Broadcasting Agreement, KSTP was assigned to its current \"clear channel\" frequency of 1500 kHz, with the provision that it and WJSV, as \"Class I-B\" stations, had to maintain directional antennas at night in order to mutually protect each other from interference. An FM station, KSTP-FM, was founded in 1946 but shut down in 1952.\n\nHubbard reportedly acquired an RCA TV camera in 1939, and started experimenting with television broadcasts. But World War II put a hold on the development of television. In 1948, with the war over, KSTP-TV became the first television station in Minnesota. With KSTP 1500 already associated with NBC Radio, KSTP-TV became an NBC Television Network affiliate. From 1946 to 1952, KSTP also had an FM counterpart. KSTP-FM 102.1 was only on the air four years. 
There were few radios equipped to receive FM signals in that era, and management decided to discontinue FM broadcasts.\n\nMOR and Top 40\nAs network programming moved from radio to television, KSTP programmed a full service Middle of the Road (MOR) radio format, in the shadow of its chief competitor, CBS Radio affiliate 830 WCCO. In 1965, a new FM station, reviving the KSTP-FM call sign, was put on the air, largely simulcasting the AM station. But by the late 1960s, KSTP-FM began a separate format of beautiful music. KSTP was the radio home of the Minnesota Vikings football team from 1970 to 1975. \n\nIn 1973, KSTP broke away from its longtime adult MOR sound and became one of four area stations at the time to program a Top 40 format. \"15 KSTP, The Music Station\" competed with Top 40 AM rivals WDGY, KDWB and later, WYOO. The competition would eventually shake itself out, with outrageous rocker WYOO dropping out after being sold in 1976, and then the staid WDGY switching to country music the following year. As for uptempo hits station 15 KSTP, it went from a tight Top 40 format to leaning adult rock in 1978, to leaning adult contemporary in 1979, to evolving into adult contemporary/talk by 1980. In 1982, it officially shifted to talk. Most Top 40 rock music, by this time, had moved to the FM band.\n\nPast Personalities\n\nNotable hosts who have been on KSTP include John Hines, Jesse Ventura, Larry Carolla, Tom Barnard, Big Al Davis, Don Vogel, John MacDougall, Griff, Mike Edwards, Geoff Charles, Joe Soucheray, James Lileks, Leigh Kamman, Barbara Carlson, Peter Thiele, Tom Mischke, Jason Lewis, Chuck Knapp, Machine Gun Kelly, Charle Bush, Mark O'Connell and Paul Brand. These broadcasters were supported by producers such as Bruce Huff, Rob Pendleton, Alison Brown, Jean Bjorgen, David Elvin (who Vogel dubbed the \"Steven Spielberg of Talk Radio\"), Mitch Berg and others.\n\nThe station has, for the most part, emphasized local hosts over the years. But in 1988, KSTP was one of Rush Limbaugh's first affiliates when his conservative talk show was rolled out for national syndication. (Clear Channel-owned KTLK-FM took over rights to Limbaugh's show in January 2006). Other syndicated hosts previously heard on KSTP include Sean Hannity, Bruce Williams, Larry King, and Owen Spann.\n\nSports Radio\nKSTP switched to Sports Radio on February 15, 2010. As the station had to wait for ESPN's contract with rival KFAN and its sister station KFXN to expire, it did not become an ESPN Radio affiliate until April 12, the same day that the Minnesota Twins were scheduled to play the first game in their new ball park, Target Field, against the Boston Red Sox. As a result Coast to Coast AM and Live on Sunday Night, it's Bill Cunningham were retained during this period. One ESPN Radio network program, The Herd with Colin Cowherd, was picked up by KSTP immediately following the format change.\n\nIn 2018, the station was approved for an FM translator on 94.1 FM, broadcasting from a transmitter atop the IDS Center in downtown Minneapolis. The two-watt signal threw most of its power to the west, preventing interference to low powered FM stations on the same channel including WFNU-LP in St. Paul. With only two watts of power, however, the signal was limited to the immediate downtown area surrounding the IDS Center. It later acquired a 250 watt translator, K235BP at 94.9 MHz. 
The original translator was discontinued.\n\nOn January 15, 2019, KSTP rebranded as \"SKOR North\" (a reference to the Vikings team song/chant, \"Skol, Vikings\"), with local programming between 12 noon and 7 pm. About a year later, in May of 2020, KSTP suspended most of its local programming and laid off nearly all of its local staff. Station management cited the economic toll of the coronavirus for the changes. Sports broadcasting continues, primarily composed of ESPN radio network broadcasts.\n\nSports Teams\n\nKSTP-AM served as the radio flagship for the Minnesota Vikings football team from 1970 to 1975.\n\nOn August 1, 2006, the station announced that it would be the new flagship station for the Minnesota Twins baseball team, effective with the start of the 2007 season. The Twins had been on rival WCCO since arriving in Minnesota in 1961. KSTP served as the flagship for the Twins until the end of the 2012 season, when games moved to 96.3 KTWN-FM (now KMWA). The Twins have since returned to WCCO 830.\n\nThe switch to a fairly weak FM station caused dissent among some listeners, particularly in communities that had trouble picking up KSTP 1500. Although KSTP is the state's second most powerful AM station, it must operate directionally at night, delivering a reduced signal to parts of the market. WCCO, by comparison, offers a signal with a wider coverage area during the day than KSTP does, with WCCO's non-directional 50,000 watt signal. In response, the Twins have expanded the number of affiliates.\n\nOn March 9, 2011, KSTP announced it would be the new flagship for the University of Minnesota Golden Gophers men's and women's basketball and men's ice hockey, ending a 68-year run on WCCO. The rights have since moved to KFXN-FM, which already aired Gopher football.\n\nOn March 2, 2017, KSTP announced it would be the first radio broadcaster for Minnesota United FC. The move brings live soccer action to 1500 AM.\n\nPrevious logos\n\nReferences\n\nExternal links\nKSTP website\n\nFCC History Cards for KSTP (covering 1928-1980)\nRadiotapes.com Historic Minneapolis/St. 
Paul airchecks dating back to 1924 including KSTP and other Twin Cities radio stations.\nRick Burnett's TwinCitiesRadioAirchecks.com has additional airchecks of KSTP and other Twin Cities radio stations from the '60s and '70s, including Chuck Knapp's 2nd show on KSTP.\n\nHubbard Broadcasting\nESPN Radio stations\nPeabody Award winners\nRadio stations in Minneapolis–Saint Paul\nRadio stations established in 1925\n1925 establishments in Minnesota\nMinnesota Kicks\nSports radio stations in the United States\nClear-channel radio stations", "answers": ["KSTP switched to a sports radio format on February 15, 2010."], "length": 1810, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "235a5c99cd7fae9e2b410ad99c1b1fafea43799d3f1138a8"} {"input": "What was the best performing model for the Spanish language in Track-1?", "context": "Paper Info\n\nTitle: Two-stage Pipeline for Multilingual Dialect Detection\nPublish Date: Unkown\nAuthor List: Ankit Vaidya (from Pune Institute of Computer Technology), Aditya Kane (from Pune Institute of Computer Technology)\n\nFigure\n\nFigure 1: Class distribution of dialects\nFigure 2: System diagram for dialect classification.The LID classifies the input into one of 3 languages.The sample is then further classified into dialects by language specific models.\nFigure 3: Confusion matrix of 9-way classification.Note that rows are normalized according to the number of samples is that class.\nOur complete results for Track-1 using the two-stage dialect detection pipeline.Model-* denotes the language of the models used for the experiments.\nPerformance on Track-1 validation dataset of individual models used in the two-stage pipeline.\"Lg\" stands for language of the model used.\nComparative results of two-way classification using the finetuned (F.T.) predictions and predictions adapted from three-way classification models.\n\nabstract\n\nDialect Identification is a crucial task for localizing various Large Language Models. This paper outlines our approach to the VarDial 2023 DSL-TL shared task. Here we have to identify three or two dialects from three languages each which results in a 9-way classification for Track-1 and 6-way classification for Track-2 respectively.\nOur proposed approach consists of a two-stage system and outperforms other participants' systems and previous works in this domain. We achieve a score of 58.54% for Track-1 and 85.61% for Track-2. Our codebase is available publicly 1 .\n\nIntroduction\n\nLanguage has been the primary mode of communication for humans since the pre-historic ages. Studies have explored the evolution of language and outlined mathematical models that govern the intricacies of natural language . Inevitably, as humans established civilization in various parts of the world, this language was modified by, and for the group of people occupied by that particular geographical region.\nThis gave rise to multiple national dialects of the same language. The VarDial workshop (colocated with EACL 2023) explores various dialects and variations of the same language. We participated in the Discriminating Between Similar Languages -True Labels (DSL-TL) shared task. In this task, the participants were provided with data from three languages, with each language having three varieties.\nThis shared task consisted of two tracks -Track-1 featuring nine-way classification and Track-2 featuring six-way classification. The second track included two particular national dialects of each language (eg. 
American English and British English), and the first track additionally included one general, unmarked variety of each language. We ranked 1st in both of the tracks.
Moreover, we beat the next best submission by a margin of 4.5% in the first task and 5.6% in the second task. We were the only team to surpass the organizer baseline scores. We present our winning solution in this paper. We used an end-to-end deep learning pipeline which consisted of a language identification model and three language-specific models, one for each language.
We converged upon the best combination by doing an elaborate analysis of the various models available. Furthermore, in this work we also analyze the performance of the pipeline as a whole and provide an ablation study. Lastly, we provide some future directions in this area of research.

Related Work

The present literature encompasses various aspects of dialect identification. We study this from three perspectives: large language models, language identification and dialect classification problems.

Large Language Models

The success of transformers and BERT-based models was inevitable since the initial boom of the transformer model (Vaswani et al., 2017). In recent years, many other architectures like RoBERTa and ELECTRA have further pushed the state-of-the-art in this domain. Moreover, autoregressive models like GPT and GPT-2 have also shown their prowess.
Multilingual versions of RoBERTa, namely XLM-RoBERTa, are also available. Lastly, language-specific models like Spanish BERT (la Rosa y Eduardo G. Ponferrada y Manu Romero y Paulo Villegas y Pablo González de Prado Salas y María Grandury, 2022) and Portuguese BERT are available as well. Our winning solution makes use of these large language models trained on specific languages.

Language Identification Models

Many multilingual language identification models have been developed in order to classify the language of the input sentence beforehand. Even though the initial works used n-gram models and generative mixture models or even conditional random fields and other classical machine learning methods like naive Bayes, modern methods have shifted to the use of deep learning for language identification.
Recent works have mainly focused on deep learning-based language identification, where handling code-mixed data is a big challenge in the domain. For our experiments, we use a version of XLM-RoBERTa finetuned on a language identification dataset 2 . This model has near-perfect test accuracy of 99.6%.

Dialect Classification

Dialect classification has previously been solved using statistical methods like Gaussian Mixture Models and Frame Selection Decoding or Support Vector Machines (SVM). It has been explored relatively sparsely, mostly in the case of local languages. Deep learning approaches have been explored in previous editions of the VarDial workshop shared tasks and otherwise.
Dialect classification was also explored previously as a part of other shared tasks. We want to stress that, given the multilingual nature of the dataset, using the present methods directly was not an option. In our work, although we take inspiration from the previous works, we propose a novel system that surpasses the performance of the previous systems by a large margin.

Data

The dataset covers three languages with three varieties each. We observed that the class PT-BR had the largest number of samples (2,724) and the class EN had the smallest number of samples (349), and thus the imbalance ratio was almost 1:8. We have illustrated the data distribution in Figure 1. 
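A standard way to counter a skew of this kind is to re-weight each example's sampling probability by the inverse frequency of its class. The following minimal PyTorch sketch is an illustrative addition rather than the authors' code; the label list is a stand-in built only from the two class counts quoted above, and, as noted next, the authors found that such re-sampling did not change performance.

from collections import Counter
import torch
from torch.utils.data import WeightedRandomSampler

# Stand-in label list built from the two extremes quoted above (the real data has 9 classes).
train_labels = ['PT-BR'] * 2724 + ['EN'] * 349
counts = Counter(train_labels)
class_weight = {c: 1.0 / n for c, n in counts.items()}      # inverse class frequency
sample_weight = torch.tensor([class_weight[c] for c in train_labels], dtype=torch.double)
sampler = WeightedRandomSampler(sample_weight, num_samples=len(sample_weight), replacement=True)
# The sampler would then be passed to the DataLoader used for fine-tuning, e.g.
# DataLoader(train_dataset, batch_size=8, sampler=sampler), matching the batch size used in the paper.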
We tried to mitigate this imbalance using over-sampling and weighted sampling methods.
However, the improved data sampling method did not affect the performance.

System Description

This was a problem of multi-class classification having 9 classes for Track-1 and 6 classes for Track-2. The samples belonged to 3 languages having 3 varieties each, so the classification pipeline was made in 2 stages. The Language Identification (LID) model, which is the first stage, classifies the sentence into 3 languages: English (EN), Spanish (ES) and Portuguese (PT).
The LID is a pretrained XLM-RoBERTa that is fine-tuned for the task of language identification. It is able to classify the input sentence into 20 languages. We classify and separate the samples according to their language. The samples corresponding to the specific languages are then fed into the language-specific models for dialect identification.
For dialect identification we have used models like BERT and RoBERTa with a linear layer connected to the pooler output of the models. Fine-tuning is then done on the models for dialect identification using the samples corresponding to the specific languages. For the task of dialect identification we experimented with several pretrained models like XLM-RoBERTa, BERT, ELECTRA, GPT-2 and RoBERTa.
All models were fine-tuned for 20 epochs with a learning rate of 1e-6 and weight decay 1e-6 with a batch size of 8. The best performing model checkpoint was chosen according to the epoch-wise validation macro-F1 score.

Experiments and Results

Experiments using Large Language Models

For the task of Dialect Identification we tried various language-specific models like XLM-RoBERTa, BERT, ELECTRA, RoBERTa and GPT-2. The base variants of all these models were used, accessed through the HuggingFace library. The pooler output of these models was passed through a linear layer and the models were fine-tuned.
First, we experimented with different models for Track-1. All the models were trained for 20 epochs with learning rate 1e-6, weight decay 1e-6 and a batch size of 8. We used XLM-RoBERTa as the baseline for all 3 languages. The best performing models for the English language were RoBERTa and BERT whereas GPT-2 was the worst performing.
Similarly, the language-specific versions of RoBERTa and BERT performed well for Spanish and Portuguese respectively. Overall the worst performing model was GPT-2 across all 3 languages. The validation F1 scores are present in Table . The two best-performing models for every language were chosen for Track-2.
The same procedure as specified above was used and the F1 scores are present in Table . The train and validation F1 scores for 2-class classification are higher for all models as compared to the F1 scores of the same models for 3-class classification. This was mainly due to the poor representation and accuracy of classification of the third class.
We observed symptoms of overfitting in all models after 12-15 epochs and the best validation F1 score was obtained in the range of 4-8 epochs.

LID experiments

The pipeline for dialect identification is divided into two parts, as the sentences in the dataset belong to different languages. The stages are described in Section 4. 
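As a concrete illustration of how the two stages compose at inference time, the sketch below chains a language-identification model with three per-language dialect classifiers. It is an illustrative addition, not the authors' released code: the LID checkpoint named here is a public XLM-RoBERTa language-detection model assumed to match the one referenced above, and the three dialect-model paths are placeholders for the fine-tuned checkpoints.

from transformers import pipeline

# Stage 1: language identification (public checkpoint, assumed; covers ~20 languages).
lid = pipeline('text-classification', model='papluca/xlm-roberta-base-language-detection')

# Stage 2: one fine-tuned dialect classifier per language (placeholder local paths).
dialect = {
    'en': pipeline('text-classification', model='./english-dialect-model'),
    'es': pipeline('text-classification', model='./spanish-dialect-model'),
    'pt': pipeline('text-classification', model='./portuguese-dialect-model'),
}

def classify_dialect(sentence: str) -> str:
    lang = lid(sentence)[0]['label']            # e.g. 'en', 'es' or 'pt'
    if lang not in dialect:                     # the task only covers these three languages
        return 'out-of-scope'
    return dialect[lang](sentence)[0]['label']  # e.g. 'EN-GB', 'ES-AR' or 'PT-BR'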
The XLM-RoBERTa we have used for language classification has a test accuracy of 99.6% meaning it correctly classifies all input sentences and hence, can be considered as a perfect classifier.\nFor the final pipeline we experimented using the two best performing models for each language in Track-1 and Track-2. For both the tracks we experimented with all 8 (2 3 ) possible combinations of models and calculated the validation F1 score for the combined validation dataset which had sentences belonging to all languages.\nThe validation scores for Track-1 and Track-2 are shown in Table and Table respectively. For both the tracks, the three pipelines with the best validation F1 scores were chosen for submission.\n\nUsing 3-way classifier as a 2-way classifier\n\nIn Track-1, participants are expected to train a classifier which classifies amongst 9 classes, and in Track-2, participants are expected to train a classifier which classifies amongst 6 classes. These 6 classes are a proper subset of the 9 classes from Track-1. Thus, an intuitive baseline for Track-2 is to use the model finetuned for Track-1, whilst considering only the relevant classes for the latter task.\nThe classes EN , ES and P T , i.e. the classes without any national dialect associated with them are not included in Track-2 as compared to Track-1. Thus, we calculate the predictions for the Track-2 validation dataset using the models for Track-1 and exclude the metrics for Track-1 specific classes to get the metrics for this \"adapted\" 2-way classification.\nWe show the results of this experiment in Table and observe that, as expected, the adapted 2-way classification performs worse compared to the explicitly finetuned variant.\n\nResults for Track-1 and Track-2\n\nWe now present our experiments and their performance for both tracks. Our experiments for Track-1 are described in Table and our experiments for Track-2 are described in Table . The participants were allowed three submissions for evaluation on the test set, so we submitted predictions using the three systems which performed the best on the validation set.\nAs mentioned in Section 5.2, we performed 2 3 , i.e. a total of 8 experiments using the two best models for each language. We observed that RoBERTa base on English, Spanish BERT base on Spanish and Portuguese BERT base performed the best on the testing set for Track-1. The same combination, with RoBERTa base for English, worked best for Track-2.\nAll of our submissions were the top submissions for each track, which surpassed the next best competitors by a margin of 4.5% and 5.6% for Track-1 and Track-2 respectively.\n\nAblation of best submissions\n\nWe hereby make some observations of our submissions and other experiments. To assist this, we plot the confusion matrices of our best submissions for Track-1 and Track-2 in Figures respectively. Note that these confusion matrices have their rows (i.e. true labels axes) normalized according to the number of samples in the class.\nHere are observations from our experiments: 1. BERT-based models outperform other models across all languages: We observe that BERT-based models outperform ELECTRA-based and GPT-2-based models, as shown in Table . We speculate this is because of the inherent architecture of BERT, which combines semantic learning with knowledge retention.\nThis combination of traits is particularly useful for this task. 2. 
Common labels perform the worst across all languages: We observe that the common labels EN , ES and P T perform the worst, both in the individual as well as the two-stage setup. We hypothesize this is because of the absence of dialect specific words, or words that are specific to the geographical origin of the national dialect (for example, \"Yankees\" for EN-US and \"Oxford\" for EN-GB).\n3. English models work better than models of other languages: It can be noted from Figures 4 and 3 that the English models have the best performance across all classes. This can be attributed to two reasons: absence of national dialect specific words and lesser pretraining data in the case of Portuguese.\n4. British English is most correctly classified class: We can observe that the Spanish or Portuguese models make equal number of mistakes in the case of either national dialect, in the case of Track-2 (see Figure ). However, in the case of English, the label EN − GB is correctly classified for more than 95% of the cases.\nWe speculate this is because British English involves slightly distinctive grammar and semantics, which help the model separate it from other classes. 5. The proposed 2-step method is scalable for multiple language dialect classification: We can strongly assert that the novel 2-step deep learning method for multilingual dialect classification is a scalable method for the task due to two specific reasons: firstly, the multilingual models (like XLM-RoBERTa) might not have the vocabulary as well as the learning capabilities to learn the minute differences between individual dialects.\nSecondly, this system can be quickly expanded for a new language by simply adding a language specific dialect classifier, provided the language identification model supports that particular language.\n\nConclusion\n\nIn this paper we propose a two-stage classification pipeline for dialect identification for multilingual corpora. We conduct thorough ablations on this setup and provide valuable insights. We foresee multiple future directions for this work. The first is to expand this work to many languages and dialects.\nSecondly, it is a worthwhile research direction to distill this multi-model setup into a single model with multiple prediction heads. The obvious limitation of this system is the excessive memory consumption due to the usage of language specific models. For low resource languages this system is difficult to train and scale.\nWe hope that these problems will be addressed by researchers in future works.", "answers": ["The best performing model for the Spanish language in Track-1 was Spanish BERT."], "length": 2409, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "fe952dfd723c9b88f1ed4665679b7a13987729fa1c37b0e3"} {"input": "How does the infall rate and gas density in the magnetized model compare to non-magnetized accretion?", "context": "\\section{Introduction}\nThe averaged quantities can be obtained in two different ways in\nmagnetohydrodynamics. The first way is to solve 3D MHD equations\nand then average the results. The second way is to solve some\nsystem of equations on averages. Combination of numerical\nsimulations and averaged theory brings phenomenology that can\ndescribe observations or experimental data.\n\nThe problem of spherically symmetric accretion takes its origin\nfrom Bondi's work \\citep{bondi}. He presented idealized\nhydrodynamic solution with accretion rate $\\dot{M}_B.$ However,\nmagnetic field $\\vec{B}$ always exists in the real systems. 
Even\nsmall seed $\\vec{B}$ amplifies in spherical infall and becomes\ndynamically important \\citep{schwa}.\n\nMagnetic field inhibits accretion \\citep{schwa}. None of many\ntheories has reasonably calculated the magnetic field evolution\nand how it influences dynamics. These theories have some common\npitfalls. First of all, the direction of magnetic field is usually\ndefined. Secondly, the magnetic field strength is prescribed by\nthermal equipartition assumption. In third, dynamical effect of\nmagnetic field is calculated with conventional magnetic energy and\npressure. All these inaccuracies can be eliminated.\n\nIn Section 2\\ref{section_method} I develop a model that abandons\nequipartition prescription, calculates the magnetic field\ndirection and strength and employs the correct equations of\nmagnetized fluid dynamics. In Section 3\\ref{results} I show this\naccretion pattern to be in qualitative agreement with Sgr A*\nspectrum models. I discuss my assumptions in Section 4\n\\ref{discussion}.\n\n\\section{Analytical method}\\label{section_method}\n Reasonable turbulence evolution model is the key difference of my\n method. I build an averaged turbulence theory that corresponds to\nnumerical simulations. I start with the model of isotropic\nturbulence that is consistent with simulations of collisional MHD\nin three regimes. Those regimes are decaying hydrodynamic\nturbulence, decaying MHD turbulence and dynamo action. I introduce\neffective isotropization of magnetic field in 3D model.\nIsotropization is taken to have a timescale of the order of\ndissipation timescale that is a fraction $\\gamma\\sim1$ of the\nAlfven wave crossing time $\\tau_{\\rm diss}=\\gamma r/v_A.$\n\nCommon misconception exists about the dynamical influence of\nmagnetic field. Neither magnetic energy nor magnetic pressure can\nrepresent $\\vec{B}$ in dynamics. Correct averaged Euler and energy\nequations were derived in \\citep{scharlemann} for radial magnetic\nfield. Magnetic force $\\vec{F}_M=[\\vec{j}\\times\\vec{B}]$ can be\naveraged over the solid angle with proper combination of\n$\\vec{\\nabla}\\cdot\\vec{B}=0.$ I extend the derivation to random\nmagnetic field without preferred direction. Dynamical effect of\nmagnetic helicity \\citep{biskamp03} is also investigated. I\nneglect radiative and mechanical transport processes.\n\nThe derived set of equations requires some modifications and\nboundary conditions to be applicable to the real astrophysical\nsystems. I add external energy input to turbulence to balance\ndissipative processes in the outer flow. The outer turbulence is\ntaken to be isotropic and has magnetization $\\sigma\\sim1.$\nTransonic smooth solution is chosen as possessing the highest\naccretion rate as in \\citep{bondi}.\n\n\\begin{figure}\\label{fig1}\n \\includegraphics[height=.5\\textheight]{velocities}\n \\caption{Normalized to Keplerian speed characteristic velocities of magnetized flow. 
Horizontal lines correspond to self-similar solution $v\\sim r^{-1/2}.$}\n\\end{figure}\n\n\\section{Results \\& Application to Sgr A*}\\label{results}\n\n\\begin{figure}\\label{fig2}\n \\includegraphics[height=.5\\textheight]{magnetization}\n \\caption{Plot of magnetization $\\sigma=(E_M+E_K)/E_{Th}$ with radius.}\n\\end{figure}\nThe results of my calculations confirm some known facts about\nspherical magnetized accretion, agree with the results of\nnumerical simulations and have some previously unidentified\nfeatures.\n\nInitially isotropic magnetic field exhibits strong anisotropy with\nlarger radial field $B_r.$ Perpendicular magnetic field\n$B_\\perp\\ll B_r$ is dynamically unimportant in the inner accretion\nregion Fig\\ref{fig1}. Because magnetic field dissipates, infall\nonto the black hole can proceed \\citep{schwa}.\n\nTurbulence is supported by external driving in the outer flow\nregions, but internal driving due to freezing-in amplification\ntakes over in the inner flow Fig\\ref{fig2}. Magnetization of the\nflow increases in the inner region with decreasing radius\nconsistently with simulations \\cite{igumen06}. Density profile\nappears to be $\\rho\\sim r^{-1.25}$ that is different from\ntraditional ADAF scaling $\\rho\\sim r^{-1.5}$ \\citep{narayan}. Thus\nthe idea of self-similar behavior is not supported.\n\nCompared to non-magnetized accretion, infall rate is 2-5 times\nsmaller depending on outer magnetization. In turn, gas density is\n2-5 times smaller in the region close to the black hole, where\nsynchrotron radiation emerges \\citep{narayan}. Sgr A* produces\nrelatively weak synchrotron \\citep{narayan}. So, either gas\ndensity $n$ or electron temperature $T_e$ or magnetic field $B$\nare small in the inner flow or combination of factors works. Thus\nlow gas density in magnetized model is in qualitative agreement\nwith the results of modelling the spectrum.\n\nFlow is convectively stable on average in the model of moving\nblobs, where dissipation heat is released homogeneously in volume.\nMoving blobs are in radial and perpendicular pressure\nequilibriums. They are governed by the same equations as the\nmedium.\n\n\\section{Discussion \\& Conclusion}\\label{discussion}\nThe presented accretion study self-consistently treats turbulence\nin the averaged model. This model introduces many weak assumptions\ninstead of few strong ones.\n\nI take dissipation rate to be that of collisional MHD simulations.\nBut flow in question is rather in collisionless regime.\nObservations of collisionless flares in solar corona\n\\citep{noglik} gives dissipation rate $20$ times smaller than in\ncollisional simulations \\citep{biskamp03}. However, flares in\nsolar corona may represent a large-scale reconnection event rather\nthan developed turbulence. It is unclear which dissipation rate is\nmore realistic for accretion.\n\nMagnetic field presents another caveat. Magnetic field lines\nshould close, or $\\vec{\\nabla}\\cdot\\vec{B}=0$ should hold. Radial\nfield is much larger than perpendicular in the inner region.\nTherefore, characteristic radial scale of the flow is much larger\nthan perpendicular. If radial turbulence scale is larger than\nradius, freezing-in condition does not hold anymore. Matter can\nfreely slip along radial field lines into the black hole. If\nmatter slips already at the sonic point, the accretion rate should\nbe higher than calculated.\n\nSome other assumptions are more likely to be valid. 
Diffusion\nshould be weak because of high Mach number that approaches unity\nat large radius. Magnetic helicity was found to play very small\ndynamical role. Only when the initial turbulence is highly\nhelical, magnetic helicity conservation may lead to smaller\naccretion rate. Neglect of radiative cooling is justified a\nposteriori. Line cooling time is about $20$ times larger that\ninflow time from outer boundary.\n\nThe study is the extension of basic theory, but realistic\nanalytical models should include more physics. The work is\nunderway.\n\\begin{theacknowledgments}\nI thank my advisor Prof. Ramesh Narayan for fruitful discussions.\n\\end{theacknowledgments}\n\n\\bibliographystyle{aipproc}\n\n", "answers": ["Infall rate is 2-5 times smaller and gas density is 2-5 times smaller."], "length": 1045, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "3074f07df75f95b421d8871cb9f6448e130ad7d948b137b8"} {"input": "What are the three teams that used conflict optimization in the challenge?", "context": "Paper Info\n\nTitle: Conflict Optimization for Binary CSP Applied to Minimum Partition into Plane Subgraphs and Graph Coloring\nPublish Date: 25 Mar 2023\nAuthor List: Loïc Crombez (from LIMOS, Université Clermont Auvergne), Guilherme Da Fonseca (from LIS, Aix-Marseille Université), Florian Fontan (from Independent Researcher), Yan Gerard (from LIMOS, Université Clermont Auvergne), Aldo Gonzalez-Lorenzo (from LIS, Aix-Marseille Université), Pascal Lafourcade (from LIMOS, Université Clermont Auvergne), Luc Libralesso (from LIMOS, Université Clermont Auvergne), Benjamin Momège (from Independent Researcher), Jack Spalding-Jamieson (from David R. Cheriton School of Computer Science, University of Waterloo), Brandon Zhang (from Independent Researcher), Da Zheng (from Department of Computer Science, University of Illinois at Urbana-Champaign)\n\nFigure\n\nFigure 1: A partition of the input graph of the CG:SHOP2022 instance vispecn2518 into 57 plane graphs.It is the smallest instance of the challenge with 2518 segments.On top left, you see all 57 colors together.On top right, you see a clique of size 57, hence the solution is optimal.Each of the 57 colors is then presented in small figures.\nFigure 2: Number of colors over time for the instance vispecn13806 using different values p.The algorithm uses σ = 0.15, easy vertices, q max = 59022, but does not use the BDFS nor any clique.\nFigure 3: Number of colors over time with different values of q max obtained on the instance vispecn13806.Parameters are σ = 0.15, p = 1.2, no clique knowledge, and no BDFS.\nFigure 4: Number of colors over time with and without clique knowledge and BDFS obtained on the instance vispecn13806.Parameters are σ = 0.15, p = 1.2, and q max = 1500000.\nFigure 5: Number of colors over time for the instance vispecn13806 for different values of σ.In both figures the algorithm uses p = 1.2, easy vertices, q max = 59022, but does not use the BDFS nor any clique.For σ ≥ 0.25, no solution better than 248 colors is found.\nFigure 6: Number of colors over time (in hours) for the instance vispecn13806.\nSeveral CG:SHOP 2022 results.We compare the size of the largest known clique to the smallest coloring found by each team on a selection of 14 CG:SHOP 2022 instances.\n[20][21][22][23][24][25] with state-of-the-art graph coloring algorithms.The conflict optimizer underperforms except on the geometric graphs r* and dsjr*.CE39-0007), SEVERITAS (ANR-20-CE39-0005) and by the French government IDEX-ISITE initiative 
16-IDEX-0001 (CAP 20-25). The work of Luc Libralesso is supported by the French ANR PRC grant DECRYPT (ANR-18-CE39-0007).\n\nabstract\n\nCG:SHOP is an annual geometric optimization challenge and the 2022 edition proposed the problem of coloring a certain geometric graph defined by line segments. Surprisingly, the top three teams used the same technique, called conflict optimization. This technique was introduced in the 2021 edition of the challenge, to solve a coordinated motion planning problem.\nIn this paper, we present the technique in the more general framework of binary constraint satisfaction problems (binary CSP). Then, the top three teams describe their different implementations of the same underlying strategy. We evaluate the performance of those implementations for vertex coloring not only on geometric graphs, but also on other types of graphs.\n\nIntroduction\n\nThe CG:SHOP challenge (Computational Geometry: Solving Hard Optimization Problems) is an annual geometric optimization competition, whose first edition took place in 2019. The 2022 edition proposed a problem called minimum partition into plane subgraphs. The input is a graph G embedded in the plane with edges drawn as straight line segments, and the goal is to partition the set of edges into a small number of plane graphs (Fig. ).\nThis goal can be formulated as a vertex coloring problem on a graph G' defined as follows. The vertices of G' are the segments defining the edges of G, and the edges of G' correspond to pairs of crossing segments (segments that intersect only at a common endpoint are not considered crossing). The three top-ranking teams (Lasa, Gitastrophe, and Shadoks) of the CG:SHOP 2022 challenge all used a common approach called conflict optimization, while the fourth team used a SAT-Boosted Tabu Search .\nConflict optimization is a technique used by Shadoks to obtain the first place in the CG:SHOP 2021 challenge for low-makespan coordinated motion planning , and the main ideas of the technique lent themselves well to the 2022 challenge. Next, we describe the conflict optimizer as a metaheuristic to solve constraint satisfaction problems (CSP).\nWe start by describing a CSP. A CSP is a triple of • variables X = (x_1, . . . , x_n), • domains D = (D_1, . . . , D_n), and • constraints R. Each variable x_i must be assigned a value in the corresponding domain D_i such that all constraints are satisfied.\nIn general, the constraints may forbid arbitrary subsets of values. We restrict our attention to a particular type of constraints (binary CSP), which only involve pairs of assignments. A partial evaluation is an assignment of a subset of the variables, called evaluated, with the remaining variables called non-evaluated.\nAll constraints involving a non-evaluated variable are satisfied by default. We only consider assignments and partial assignments that satisfy all constraints. The conflict optimizer iteratively modifies a partial evaluation with the goal of emptying the set S of non-evaluated variables, at which point it stops.\nAt each step, a variable x_i is removed from S. If there exists a value x ∈ D_i that satisfies all constraints, then we assign the value x to the variable x_i. Otherwise, we proceed as follows.
For each possible value x ∈ D_i, we consider the set K(i, x) of variables (other than x_i) that are part of constraints violated by the assignment x_i = x.\nWe assign to x_i the value x that minimizes the sum of w(j) over the variables x_j ∈ K(i, x), where w(j) is a weight function to be described later. The variables x_j ∈ K(i, x) become non-evaluated and are added to S. The weight function should be such that w(j) increases each time x_j is added to S, in order to avoid loops that keep moving the same variables back and forth from S. Let q(j) be the number of times x_j became non-evaluated.\nA possible weight function is w(j) = q(j). More generally, we can have w(j) = q(j)^p for some exponent p (typically between 1 and 2). Of course, several details of the conflict optimizer are left open. For example, which element to choose from S, whether some random noise should be added to w, and the decision to restart the procedure from scratch after a certain time.\nThe CSP formulation as such does not apply to optimization problems. However, we can impose a maximum value k of the objective function in order to obtain a CSP. The conflict optimizer was introduced in a low-makespan coordinated motion planning setting. In that setting, the variables are the robots, the domains are their paths (of length at most k) and the constraints forbid collisions between two paths.\nIn the graph coloring setting, the domains are the k colors of the vertices and the constraints forbid adjacent vertices from having the same color. The conflict optimizer can be adapted to non-binary CSP, but in that case multiple variables may be unassigned for a single violated constraint. The strategy has some resemblance to the similarly named min-conflicts algorithm , but notable differences are that a partial evaluation is kept instead of an invalid evaluation and that the weight function changes over time.\nWhile the conflict optimization strategy is simple, there are different ways to apply it to the graph coloring problem. The goal of the paper is to present how the top three teams applied it or complemented it with additional strategies. We compare the relative benefits of each variant on the instances given in the CG:SHOP 2022 challenge.\nWe also compare them to baselines on some instances drawn from graph coloring benchmarks. The paper is organized as follows. Section 2 presents the details of the conflict optimization strategy applied to graph coloring. In the three sections that follow, the three teams Lasa, Gitastrophe, and Shadoks present the different parameters and modified strategies that they used to make the algorithm more efficient for the CG:SHOP 2022 challenge.\nThe last section is devoted to the experimental results.\n\nLiterature Review\n\nThe study of graph coloring goes back to the 4-color problem (1852) and it has been intensively studied since the 1970s (see for surveys). Many heuristics have been proposed , as well as exact algorithms . We briefly present two classes of algorithms: greedy algorithms and exact algorithms. Greedy algorithms.\nThese algorithms are used to find good quality initial solutions in a short amount of time. The classic greedy heuristic considers the vertices in arbitrary order and colors each vertex with the smallest non-conflicting color. The two most famous modern greedy heuristics are DSATUR and Recursive Largest First (RLF).\nAt each step (until all vertices are colored), DSATUR selects the vertex v that has the largest number of different colors in its neighbourhood. Ties are broken by selecting a vertex with maximum degree.
The vertex v is colored with the smallest non-conflicting color. RLF searches for a large independent set I, assigns the vertices I the same color, removes I from G , and repeats until all vertices are colored.\nExact algorithms. Some exact methods use a branch-and-bound strategy, for example extending the DSATUR heuristic by allowing it to backtrack . Another type of exact method (branch-and-cut-and-price) decomposes the vertex coloring problem into an iterative resolution of two sub-problems . The \"master problem\" maintains a small set of valid colors using a set-covering formulation.\nThe \"pricing problem\" finds a new valid coloring that is promising by solving a maximum weight independent set problem. Exact algorithms are usually able to find the optimal coloring for graphs with a few hundred vertices. However, even the smallest CG:SHOP 2022 competition instances involve at least a few thousands vertices.\n\nConflict Optimization for Graph Coloring\n\nHenceforth, we will only refer to the intersection conflict graph G induced by the instance. Vertices will refer to the vertices V (G ), and edges will refer to the edges E(G ). Our goal is to partition the vertices using a minimum set of k color classes C = {C 1 , . . . , C k }, where no two vertices in the same color class C i are incident to a common edge.\n\nConflict Optimization\n\nTABUCOL inspired neighbourhood One classical approach for the vertex coloring involves allowing solutions with conflicting vertices (two adjacent vertices with the same color). It was introduced in 1987 and called TABUCOL. It starts with an initial solution, removes a color (usually the one with the least number of vertices), and assigns uncolored vertices with a new color among the remaining ones.\nThis is likely to lead to some conflicts (i.e. two adjacent vertices sharing a same color). The local search scheme selects a conflicting vertex, and tries to swap its color, choosing the new coloring that minimises the number of conflicts. If it reaches a state with no conflict, it provides a solution with one color less than the initial solution.\nThe process is repeated until the stopping criterion is met. While the original TABUCOL algorithm includes a \"tabu-list\" mechanism to avoid cycling, it is not always sufficient, and requires some hyper-parameter tuning in order to obtain a good performance on a large variety of instances. To overcome this issue, we use a neighbourhood, but replace the \"tabu-list\" by the conflict optimizer scheme presented above.\nPARTIALCOL inspired neighbourhood PARTIALCOL another local search algorithm solving the vertex coloring problem was introduced in 2008. This algorithm proposes a new local search scheme that allows partial coloring (thus allowing uncolored vertices). The goal is to minimize the number of uncolored vertices.\nSimilarly to TABUCOL, PARTIALCOL starts with an initial solution, removes one color (unassigning its vertices), and performs local search iterations until no vertex is left uncolored. When coloring a vertex, the adjacent conflicting vertices are uncolored. Then, the algorithm repeats the process until all vertices are colored, or the stopping criterion is met.\nThis neighbourhood was also introduced alongside a tabu-search procedure. The tabu-search scheme is also replaced by a conflict-optimization scheme. Note that this neighbourhood was predominantly used by the other teams.\n\nFinding Initial Solutions\n\nLasa team used two approaches to find initial solutions: 1. 
DSATUR is the classical graph coloring algorithm presented in Section 1. 2. Orientation greedy is almost the only algorithm where the geometry of the segments is used. If segments are almost parallel, it is likely that they do not intersect (thus forming an independent set).\nThis greedy algorithm first sorts the segments by orientation, ranging from −π/2 to π/2. For each segment in this order, the algorithm tries to color it using the first available color. If no color has been found, a new color is created for coloring the considered segment. This algorithm is efficient, produces interesting initial solutions and takes into account the specificities of the competition.\n\nSolution Initialization\n\nThe gitastrophe team uses the traditional greedy algorithm of Welsh and Powell to obtain initial solutions: order the vertices in decreasing order of degree, and assign each vertex the minimum-label color not used by its neighbors. During the challenge Gitastrophe attempted to use different orderings for the greedy algorithm, such as sorting by the slope of the line segment associated with each vertex (as the orientation greedy initialization presented in Section 3), and also tried numerous other strategies.\nUltimately, after running the solution optimizer for approximately the same amount of time, all initializations resulted in an equal number of colors.\n\nModifications to the Conflict Optimizer\n\nTaking inspiration from memetic algorithms, which alternate between an intensification and a diversification stage, the algorithm continually switched between a phase using the above conflict score, and one minimizing only the number of conflicts. Thus during the conflict-minimization phase, the random variables f(C_j) and w(u) are both fixed equal to 1, leading to a conflict score that simply counts the conflicting neighbours.\nEach phase lasted for 10^5 iterations. Adding the conflict-minimization phase gave minor improvements to some of the challenge instances.\n\nShadoks\n\nIn this section, we describe the choices used by the Shadoks team for the options described in Section 2.1. The Shadoks generally chose to eliminate the color with the smallest number of elements. However, if the multistart option is toggled on, then a random color is used each time. The conflict set S is stored in a queue.\nThe Shadoks tried other strategies, but found that the queue gives the best results. The weight function used is w(u) = 1 + q(u)^p, mostly with p = 1.2. The effect of the parameter p is shown in Fig. . Notice that in all figures, the number of colors shown is the average of ten executions of the code using different random seeds.\nIf q(u) is larger than a threshold q_max, the Shadoks set w(u) = ∞ so that the vertex u never reenters S. If at some point an uncolored vertex v is adjacent to some vertex u of infinite weight in every color class, then the conflict optimizer is restarted.\nWhen restarting, the initial coloring is shuffled by moving some vertices from their initial color class to a new one. Looking at Fig. , the value of q_max does not seem to have much influence as long as it is not too small. Throughout the challenge the Shadoks almost exclusively used q_max = 2000 · (75000/m)^2, where m is the number of vertices.\nThis value roughly ensures a restart every few hours. The Shadoks use the function f as a Gaussian random variable of mean 1 and variance σ.
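To make the preceding description concrete, here is a minimal Python sketch of the conflict-optimization loop with these ingredients (a queue for S, weights w(u) = 1 + q(u)^p, and the multiplicative Gaussian noise f). It is illustrative only — not any team's competition code — and omits the q_max cutoff, restarts, BDFS and the easy-vertex reduction.

```python
# Minimal sketch of the conflict optimizer for k-coloring (PARTIALCOL-style
# neighbourhood). Data structures and names are illustrative.
import random
from collections import deque

def conflict_optimize(adj, k, color, uncolored, p=1.2, sigma=0.15, max_steps=10**6):
    """adj: dict vertex -> set of neighbours; color: dict vertex -> class in 0..k-1
    or None; uncolored: vertices currently without a color."""
    q = {v: 0 for v in adj}                  # q(u): times u re-entered the conflict set
    S = deque(uncolored)
    steps = 0
    while S and steps < max_steps:
        steps += 1
        u = S.popleft()
        best_c, best_score, best_conflicts = None, float("inf"), None
        for c in range(k):
            conflicts = [v for v in adj[u] if color[v] == c]
            if not conflicts:                # a color with no conflict: take it
                best_c, best_conflicts = c, []
                break
            f = random.gauss(1.0, sigma)     # multiplicative noise on the score
            score = f * sum(1 + q[v] ** p for v in conflicts)
            if score < best_score:
                best_c, best_score, best_conflicts = c, score, conflicts
        color[u] = best_c
        for v in best_conflicts:             # evict conflicting neighbours into S
            color[v] = None
            q[v] += 1
            S.append(v)
    return color, len(S) == 0                # True if a proper k-coloring was reached
```

To go from k+1 to k colors, one would first remove one color class (typically the smallest), push its vertices into the uncolored set, and then run this loop.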
A good default value is σ = 0.15. The effect of the variance is shown in Fig. . Notice that setting σ = 0 gives much worse results.\nOption (e) The goal of BDFS is to further optimize very good solutions that the conflict optimizer is not able to improve otherwise. Fig. shows the influence of BDFS. While the advantages of BDFS cannot be noticed in this figure, its use near the end of the challenge improved about 30 solutions. The bounded depth-first search (BDFS) algorithm tries to improve the dequeuing process.\nThe goal is to prevent a vertex in conflict with some adjacent colored vertices from entering the conflict set. At the first level, the algorithm searches for a recoloring of some adjacent vertices which allows us to directly recolor the conflict vertex. If no solution is found, the algorithm could recolor some vertices at larger distances from the conflict vertex. To do so, a local search is performed by trying to recolor vertices at a bounded distance from the conflict vertex in the current partial solution. The BDFS algorithm has two parameters: adjacency bound a_max and depth d.\nIn order to recolor a vertex v, BDFS gets the set 𝒞 of color classes with at most a_max neighbors of v. If a class in 𝒞 has no neighbor of v, v is assigned to that class. Otherwise, for each class C ∈ 𝒞, BDFS tries to recolor the vertices in C which are adjacent to v by recursively calling itself with depth d − 1.\nAt depth d = 0 the algorithm stops trying to color the vertices. During the challenge the Shadoks used BDFS with parameters a_max = 3 and d = 3. The depth was increased to 5 (resp. 7) when the number of vertices in the queue was 2 (resp. 1). Degeneracy order Given a target number of colors k, we call easy vertices a set of vertices Y such that, if the remainder of the vertices of G are colored using k colors, then we are guaranteed to be able to color all vertices of G with k colors.\nThis is obtained using the degeneracy order Y. To obtain Y we iteratively remove from the graph a vertex v that has at most k − 1 neighbors, appending v to the end of Y. We repeat until no other vertex can be added to Y. Notice that, once we color the remainder of the graph with at least k colors, we can use a greedy coloring for Y in order from last to first without increasing the number of colors used.\nRemoving the easy vertices reduces the total number of vertices, making the conflict optimizer more effective. The Shadoks always toggle this option on (the challenge instances contain from 0 to 23% easy vertices).\n\nResults\n\nWe provide the results of the experiments performed with the code from the three teams on two classes of instances. First, we present the results on some selected CG:SHOP 2022 instances. These instances are intersection graphs of line segments. Second, we execute the code on graphs that are not intersection graphs, namely the classic DIMACS graphs , comparing the results of our conflict optimizer implementations to previous solutions.\nThe source code for the three teams is available at: • Lasa: https://github.com/librallu/dogs-color • Gitastrophe: https://github.com/jacketsj/cgshop2022-gitastrophe • Shadoks: https://github.com/gfonsecabr/shadoks-CGSHOP2022\n\nCG:SHOP 2022 Instances\n\nWe selected 14 instances (out of 225) covering the different types of instances given in the CG:SHOP 2022 challenge.
The results are presented in Table . For comparison, we executed the HEAD code on some instances using the default parameters. The table shows the smallest number of colors for which HEAD found a solution.\nWe ran HEAD for 1 hour of repetitions for each target number of colors on a single CPU core (the HEAD solver takes the target number of colors as a parameter and we increased this parameter one by one). At the end of the challenge, 8 colorings computed by Lasa, 11 colorings computed by Gitastrophe, and 23 colorings computed by Shadoks over 225 instances have been proved optimal (their number of colors is equal to the size of a clique).\nIn order to compare the efficiency of the algorithms, we executed the different implementations on the CG:SHOP instance vispecn13806. The edge density of this graph is 19%, the largest clique that we found has 177 vertices and the best coloring found during the challenge uses 218 colors. Notice that vispecn13806 is the same instance used in other Shadoks experiments in Section 5. Notice also that HEAD algorithm provides 283 colors after one hour compared to less than 240 colors for the conflict optimizers.\nWe ran the three implementations on three different servers and compared the results shown in Figure . For each implementation, the x coordinate is the running time in hours, while the y coordinate is the smallest number of colors found at that time.\n\nResults on DIMACS Graphs\n\nWe tested the implementation of each team on the DIMACS instances to gauge the performance of the conflict optimizer on other classes of graphs. We compared our results to the best known bounds and to the state of the art coloring algorithms HEAD and QACOL . The time limit for Lasa's algorithms is 1 hour.\nCWLS is Lasa's conflict optimizer with the neighbourhood presented in TABUCOL , while PWLS is the optimizer with the neighbourhood presented in PARTIALCOL . Gitastrophe algorithm ran 10 minutes after which the number of colors no longer decreases. Shadoks algorithm ran for 1 hour without the BDFS option (results with BDFS are worse).\nResults are presented in Table . We only kept the difficult DIMACS instances. For the other instances, all the results match the best known bounds. The DIMACS instances had comparatively few edges (on the order of thousands or millions); the largest intersection graphs considered in the CG:SHOP challenge had over 1.5 billion edges.\nWe notice that the conflict optimizer works extremely poorly on random graphs, but it is fast and appears to perform well on geometric graphs (r250.5, r1000.1c, r1000.5, dsjr500.1c and dsjr500.5), matching the best-known results . Interestingly, these geometric graphs are not intersection graphs as in the CG:SHOP challenge, but are generated based on a distance threshold.\nOn the DIMACS graphs, Lasa implementation shows better performance than the other implementations.", "answers": ["Lasa, Gitastrophe, and Shadoks."], "length": 3791, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "191c14b84f0c8cdad3297f2bee552fb089178995208d7185"} {"input": "How many novels did Margaret Way write?", "context": "Margaret Way (b. Brisbane d. Cleveland, Queensland, Australia ) was an Australian writer of romance novels and women's fiction. 
A prolific author, Way wrote more than 120 novels since 1970, many through Mills & Boon, a romance imprint of British publisher Harlequin UK Ltd., owned by Harlequin Enterprises.\n\nBiography\nBefore her marriage, she was a well-known pianist, teacher, vocal coach and accompanist. She began writing when her son, Laurence Way, was born, a friend took a pile of Mills & Boon books to her, she read all and decided that she also could write these types of novels. She began to write and promote her country with her stories set in Australia. She sold her first novels in 1970. Margaret Way lives with her family in her native Brisbane. Beginning in 2013, Margaret began to self-publish, releasing her first \"e-book\" mid-July.\n\nMargaret died on the 10th of August 2022 in Cleveland, Queensland.\n\nBibliography\n\nSingle Novels\nKing Country (1970)\nBlaze of Silk (1970)\nThe Time of the Jacaranda (1970)\nBauhinia Junction (1971)\nMan from Bahl Bahla (1971)\nSummer Magic (1971)\nReturn to Belle Amber (1971)\nRing of Jade (1972)\nCopper Moon (1972)\nRainbow Bird (1972)\nMan Like Daintree (1972)\nNoonfire (1972)\nStorm Over Mandargi (1973)\nWind River (1973)\nLove Theme (1974)\nMcCabe's Kingdom (1974)\nSweet Sundown (1974)\nReeds of Honey (1975)\nStorm Flower (1975)\nLesson in Loving (1975)\nFlight into Yesterday (1976)\nRed Cliffs of Malpara (1976)\nMan on Half-moon (1976)\nSwan's Reach (1976)\nMutiny in Paradise (1977)\nOne Way Ticket (1977)\nPortrait of Jaime (1977)\nBlack Ingo (1977)\nAwakening Flame (1978)\nWild Swan (1978)\nRing of Fire (1978)\nWake the Sleeping Tiger (1978)\nValley of the Moon (1979)\nWhite Magnolia (1979)\nWinds of Heaven (1979)\nBlue Lotus (1979)\nButterfly and the Baron (1979)\nGolden Puma (1980)\nTemple of Fire (1980)\nLord of the High Valley (1980)\nFlamingo Park (1980)\nNorth of Capricorn (1981)\nSeason for Change (1981)\nShadow Dance (1981)\nMcIvor Affair (1981)\nHome to Morning Star (1981)\nBroken Rhapsody (1982)\nThe Silver Veil (1982)\nSpellbound (1982)\nHunter's Moon (1982)\nGirl at Cobalt Creek (1983)\nNo Alternative (1983)\nHouse of Memories (1983)\nAlmost a Stranger (1984)\nA place called Rambulara (1984)\nFallen Idol (1984)\nHunt the Sun (1985)\nEagle's Ridge (1985)\nThe Tiger's Cage (1986)\nInnocent in Eden (1986)\nDiamond Valley (1986)\nMorning Glory (1988)\nDevil Moon (1988)\nMowana Magic (1988)\nHungry Heart (1988)\nRise of an Eagle (1988)\nOne Fateful Summer (1993)\nThe Carradine Brand (1994)\nHolding on to Alex (1997)\nThe Australian Heiress (1997)\nClaiming His Child (1999)\nThe Cattleman's Bride (2000)\nThe Cattle Baron (2001)\nThe Husbands of the Outback (2001)\nSecrets of the Outback (2002)\nWith This Ring (2003)\nInnocent Mistress (2004)\nCattle Rancher, Convenient Wife (2007)\nOutback Marriages (2007)\nPromoted: Nanny to Wife (2007)\nCattle Rancher, Secret Son (2007)\nGenni's Dilemma (2008)\nBride At Briar Ridge (2009)\nOutback Heiress, Surprise Proposal (2009)\nCattle Baron, Nanny Needed (2009)\n\nLegends of the Outback Series\nMail Order Marriage (1999)\nThe Bridesmaid's Wedding (2000)\nThe English Bride (2000)\nA Wife at Kimbara (2000)\n\nKoomera Crossing Series\nSarah's Baby (2003)\nRunaway Wife (2003)\nOutback Bridegroom (2003)\nOutback Surrender (2003)\nHome to Eden (2004)\n\nMcIvor Sisters Series\nThe Outback Engagement (2005)\nMarriage at Murraree (2005)\n\nMen Of The Outback Series\nThe Cattleman (2006)\nThe Cattle Baron's Bride (2006)\nHer Outback Protector (2006)\nThe Horseman (2006)\n\nOutback Marriages Series\nOutback Man Seeks Wife 
(2007)\nCattle Rancher, Convenient Wife (2007)\n\nBarons of the Outback Series Multi-Author\nWedding At Wangaree Valley (2008)\nBride At Briar's Ridge (2008)\n\nFamily Ties Multi-Author\nOnce Burned (1995)\n\nHitched! Multi-Author\nA Faulkner Possession (1996)\n\nSimply the Best Multi-Author\nGeorgia and the Tycoon (1997)\n\nThe Big Event Multi-Author\nBeresford's Bride (1998)\n\nGuardian Angels Multi-Author\nGabriel's Mission (1998)\n\nAustralians Series Multi-Author\n7. Her Outback Man (1998)\n17. Master of Maramba (2001)\n19. Outback Fire (2001)\n22. Mistaken Mistress (2002)\n24. Outback Angel (2002)\n33. The Australian Tycoon's Proposal (2004)\n35. His Heiress Wife (2004)\n\nMarrying the Boss Series Multi-Author\nBoardroom Proposal (1999)\n\nContract Brides Series Multi-Author\nStrategy for Marriage (2002)\n\nEverlasting Love Series Multi-Author\nHidden Legacy (2008)\n\nDiamond Brides Series Multi-Author\nThe Australian's Society Bride (2008)\n\nCollections\nSummer Magic / Ring of Jade / Noonfire (1981)\nWife at Kimbara / Bridesmaid's Wedding (2005)\n\nOmnibus in Collaboration\nPretty Witch / Without Any Amazement / Storm Over Mandargi (1977) (with Lucy Gillen and Margaret Malcolm)\nDear Caliban / Heart of the Eagle / Swans' Reach (1978) (with Jane Donnelly and Elizabeth Graham)\nThe Bonds of Matrimony / Dragon Island / Reeds of Honey (1979) (with Elizabeth Hunter and Henrietta Reid)\nThe Man Outside / Castles in Spain / McCabe's Kingdom (1979) (with Jane Donnelly and Rebecca Stratton)\nWinds From The Sea / Island of Darkness / Wind River (1979) (with Margaret Pargeter and Rebecca Stratton)\nMoorland Magic / Tree of Idleness / Sweet Sundown (1980) (with Elizabeth Ashton and Elizabeth Hunter)\nThe Shifting Sands / Portrait of Jaime / Touched by Fire (1982) (with Jane Donnelly and Kay Thorpe)\nHead of Chancery / Wild Heart / One-Way Ticket (1986) (with Betty Beaty and Doris Smith)\nHeart of the Scorpion / The Winds of Heaven / Sweet Compulsion (1987) (with Janice Gray and Victoria Woolf)\nOne Brief Sweet Hour / Once More With Feeling / Blue Lotus (1990) (with Jane Arbor and Natalie Sparks)\nMarry Me Cowboy (1995) (with Janet Dailey, Susan Fox and Anne McAllister)\nHusbands on Horseback (1996) (with Diana Palmer)\nWedlocked (1999) (with Day Leclaire and Anne McAllister)\nMistletoe Magic (1999) (with Betty Neels and Rebecca Winters)\nThe Australians (2000) (with Helen Bianchin and Miranda Lee)\nWeddings Down Under (2001) (with Helen Bianchin and Jessica Hart)\nOutback Husbands (2002) (with Marion Lennox)\nThe Mother's Day Collection (2002) (with Helen Dickson and Kate Hoffmann)\nAustralian Nights (2003) (with Miranda Lee)\nOutback Weddings (2003) (with Barbara Hannay)\nAustralian Playboys (2003) (with Helen Bianchin and Marion Lennox)\nAustralian Tycoons (2004) (with Emma Darcy and Marion Lennox)\nA Mother's Day Gift (2004) (with Anne Ashley and Lucy Monroe)\nWhite Wedding (2004) (with Judy Christenberry and Jessica Steele)\nA Christmas Engagement (2004) (with Sara Craven and Jessica Matthews)\nA Very Special Mother's Day (2005) (with Anne Herries)\nAll I Want for Christmas... 
(2005) (with Betty Neels and Jessica Steele)\nThe Mills and Boon Collection (2006) (with Caroline Anderson and Penny Jordan)\nOutback Desire (2006) (with Emma Darcy and Carol Marinelli)\nTo Mum, with Love (2006) (with Rebecca Winters)\nAustralian Heroes (2007) (with Marion Lennox and Fiona McArthur)\nTall, Dark and Sexy (2008) (with Caroline Anderson and Helen Bianchin)\nThe Boss's Proposal (2008) (with Jessica Steele and Patricia Thayer)\nIsland Heat / Outback Man Seeks Wife / Prince's Forbidden Virgin / One Night Before Marriage / Their Lost-and-found Family / Single Dad's Marriage Wish (2008) (with Robyn Donald, Marion Lennox, Carol Marinelli, Sarah Mayberry and Anne Oliver)\nAustralian Billionaires (2009) (with Jennie Adams and Amy Andrews)\nCattle Baron : Nanny Needed / Bachelor Dad on Her Doorstep (2009) (with Michelle Douglas)\n\nExternal links\nMargaret Way at Harlequin Enterprises Ltd\n\nAustralian romantic fiction writers\nAustralian women novelists\nLiving people\nYear of birth missing (living people)\nWomen romantic fiction writers", "answers": ["Margaret Way wrote more than 120 novels."], "length": 1195, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "f01044ae4abc16064872421a7dc9b21c98dc2bb4f4b5351a"} {"input": "How does the framework capture the reduced-order dynamics?", "context": "Paper Info\n\nTitle: Interpretable reduced-order modeling with time-scale separation Interpretable reduced-order modeling with time-scale separation\nPublish Date: 7 March 2023\nAuthor List: Sebastian Kaltenbach (from CSE-Lab, ETH Zurich, Harvard SEAS), Phaedon-Stelios Koutsourelakis (from CSE-Lab, ETH Zurich, Harvard SEAS), Petros Koumoutsakos (from CSE-Lab, ETH Zurich, Harvard SEAS), Harvard Seas (from CSE-Lab, ETH Zurich, Harvard SEAS)\n\nFigure\n\nFIG. 5. Comparison between the phase-space of the reference solution (left) and the phase-space of the predictions\nFIG. 7. Comparison between predictions and reference solutions for a new initial condition fort = 1.25, 3.75, 7.5, 12.5, 20, 30 (from left to right and top to down).We note that with longer prediction time the uncertainty bounds increases.Despite the chaotic nature of the KS equation, the predictive posterior mean is close to the reference solution for t ≤ 12.5\n\nabstract\n\nPartial Differential Equations (PDEs) with high dimensionality are commonly encountered in computational physics and engineering. However, finding solutions for these PDEs can be computationally expensive, making model-order reduction crucial. We propose such a data-driven scheme that automates the identification of the time-scales involved and, can produce stable predictions forward in time as well as under different initial conditions not included in the training data.\nTo this end, we combine a non-linear autoencoder architecture with a time-continuous model for the latent dynamics in the complex space. It readily allows for the inclusion of sparse and irregularly sampled training data. The learned, latent dynamics are interpretable and reveal the different temporal scales involved.\nWe show that this data-driven scheme can automatically learn the independent processes that decompose a system of linear ODEs along the eigenvectors of the system's matrix. 
Apart from this, we demonstrate the applicability of the proposed framework on a hidden Markov Model and the (discretized) Kuramoto-Sivashinsky (KS) equation.\nAdditionally, we propose a probabilistic version, which captures predictive uncertainties and further improves upon the results of the deterministic framework.\n\nINTRODUCTION\n\nHigh-fidelity simulations of critical phenomena such as ocean dynamics and epidemics have become essential for decision-making. They are based on physically-motivated PDEs expressing system dynamics that span multiple spatiotemporal scales and which necessitate cumbersome computations . In recent years there has been increased attention to the development of data-driven models that can accelerate the solution of these PDEs as well as reveal salient, lower-dimensional features that control the long-term evolution.\nIn most cases, data-driven reduced-order models are not interpretable. In particular, models based on neural networks, despite good predictive capabilities , offer a black-box description of the system dynamics. A possible remedy is applying a symbolic regression to the learned neural network representation , but this adds additional computational cost due to the two-step procedure.\nA number of frameworks such as SINDy allow learning interpretable dynamics, but they rely on the a-priori availability of lower-dimensional descriptors and of time-derivatives, which can be very noisy for both simulation and experimental data. Other frameworks are tailored to specific problems such as molecular dynamics .\nHere, we present a framework that only needs the values of the observables, and not their derivatives, as training data and is capable of identifying interpretable latent dynamics. The deployment of interpretable latent dynamics ensures that the conservation of important properties of the system is reflected in the reduced-order model .\nThe present method is related to approaches based on the Koopman operator and extended Dynamic Mode Decomposition (eDMD) but uses continuous, complex-valued latent space dynamics and only requires one scalar variable per latent dimension to describe the latent space dynamics. Therefore we do not have to enforce any parametrizations on the Koopman matrix .\nThe time-continuous formulation moreover allows us to incorporate sparse and irregularly sampled training data and enables fast generation of predictions after the training phase. By using a complex-valued latent space we can also incorporate harmonic effects and reduce the number of latent variables needed. Linear and non-linear autoencoders are used to map the observed, high-dimensional time-series to the lower-dimensional, latent representation, and we identify the autoencoder as well as the latent dynamics simultaneously by optimizing a combined loss function.\nHence the two tasks of dimensionality reduction and discovery of the reduced dynamics are unified, while other frameworks treat the two parts separately . Apart from using an architecture based on autoencoders to identify the latent space, projection-based methods could also be employed . We also propose a probabilistic version of our algorithm that makes use of probabilistic Slow Feature Analysis .\nThis allows for a latent representation that, apart from being time-continuous, can quantify the predictive uncertainty and hierarchically decompose the dynamics into their pertinent scales, while promoting the discovery of slow processes that control the system's evolution over long time horizons.
The rest of the paper is structured as follows: We introduce the methodological framework as well as algorithmic details in section II.\nParticular focus is placed on the interpretability of the inferred lower-dimensional dynamics. In section III we present three numerical illustrations, i.e. a system of linear ODEs, a hidden Markov Model and the discretized KS-equation. We then present in section IV the probabilistic extension of the framework and apply it to the KS-equation.\nWe conclude with a summary and a short discussion about possible next steps.\n\nWe introduce the autoencoders deployed in this work, followed by the interpretable latent space dynamics, and discuss the training process. We consider data from high-dimensional time series x_n ∈ R^f with n = 1, ..., T. We remark that the intervals between the different states do not need to be uniformly spaced.\n\nAutoencoder\n\nA core assumption of the method is that each high-dimensional state x_n can be compressed to a lower-dimensional representation z_n ∈ C^c with c ≪ f. We identify this lower-dimensional representation by an autoencoder consisting of a parameterized encoder and decoder. The encoder maps the high-dimensional representation x_n to the latent space; the latent space is complex-valued. The decoder reconstructs the high-dimensional representation from the latent variables. We denote the parameters of the encoder as well as the decoder by θ. As discussed later in Section II C, both sets of parameters are optimized simultaneously during training and therefore there is no need for differentiating them.\n\nInterpretable Latent Space Dynamics\n\nWe employ a propagator in the latent space to capture the reduced-order dynamics of the system. In contrast to other time-extended variational autoencoder frameworks, our representation uses complex-valued latent variables. In addition, the latent variables are treated independently. The latter feature enables us to have interpretable latent dynamics as well as a model that is especially suitable for being trained in the Small Data regime due to the small number of required parameters.\nThis is in contrast to temporal propagators such as LSTMs . For each dimension i of the latent variable z we use the following continuous ODE in the complex plane: dz_i/dt = λ_i z_i. By solving this ODE, we can define the operator z_{n+1} = exp(λ ∆t_n) ⊙ z_n. Here, λ is a vector containing all the individual λ_i's and ∆t_n indicates the time-step between the latent states.\nThe symbol ⊙ is used to indicate a component-wise multiplication. We remark that the latent variables and the parameters governing the temporal evolution are complex numbers and their role in describing the system dynamics is similar to that of an eigenvalue. The real part is associated with growth and decay whereas the imaginary part represents the periodic component.\nThis approach has similarities with the Koopman-operator based methods and the extended dynamic mode decomposition .
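As an illustration, the following minimal sketch shows how such a latent propagator acts on a latent state. It is illustrative only; variable names and framework choices are not taken from the paper.

```python
# Minimal sketch of the complex latent propagator: each latent dimension i
# evolves independently as z_i(t + dt) = exp(lambda_i * dt) * z_i(t).
import numpy as np

def propagate(z, lam, dt):
    """Advance latent state z (complex array of shape (c,)) by time-step dt.

    lam: complex array of shape (c,); Re(lam) controls growth/decay,
    Im(lam) controls the oscillation frequency of each latent dimension.
    """
    return np.exp(lam * dt) * z          # component-wise multiplication

def rollout(z0, lam, dts):
    """Roll a trajectory forward over (possibly non-uniform) time-steps dts."""
    z, traj = z0, [z0]
    for dt in dts:
        z = propagate(z, lam, dt)
        traj.append(z)
    return np.stack(traj)
```

Because each latent dimension evolves as an independent scalar exponential, the learned λ_i can be read off directly, which is what makes the reduced dynamics interpretable.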
In contrast to the methods mentioned before, we use a continuous formulation in the latent space that allows us to incorporate scarce and irregularly sampled training data, and we rely directly on complex numbers in the latent space.\n\nTraining and Predictions\n\nWe optimize a loss function that combines a reconstruction loss with a loss associated with the error of our learned propagator in the latent space. We note that we could directly incorporate mini-batch training by only taking the summation over a subset of the N available training data.\nFor new predictions of unseen states, we use the encoder to generate a latent representation which is then advanced in time by the learned propagator. At a designated time step we use the decoder to reconstruct the high-dimensional solution. We applied our algorithm to three systems. First, we show that the algorithm is capable of exactly reproducing the solution of a linear ODE and of identifying its eigenvalues.\nAfterwards we apply the framework to a high-dimensional process generated by a complex latent dynamics, which is correctly identified. As a final test case, we apply the algorithm to the Kuramoto-Sivashinsky (KS) equation.\n\nLinear ODE\n\nWe consider a two-dimensional linear ODE system for x = (y_1, y_2)^T. Based on the obtained training data we run our algorithm using a linear encoder and decoder structure as well as two latent variables z. The loss function was optimized using the Adam algorithm . As we consider a linear ODE we can analytically compute the eigenvalues involved and compare them with the parameters λ identified by our algorithm.\nWe observe in Figure that the algorithm was able to recover the correct values, i.e. the eigenvalues 7 and 3 of the given linear ODE. The system does not have a periodic component and the two imaginary parts correctly go to zero, whereas the real parts converge to the reference values. Moreover, for the linear mapping between our latent variables z and the training data, we identify a matrix consisting of multiples of the eigenvectors (1,1) and (1,-1), and thus the correct solution.\nThis example was chosen to show that the algorithm is able to quickly identify the exact solution of a linear ODE in terms of its linearly independent components.\n\nHidden multiscale dynamics\n\nWe consider eight-dimensional synthetic time series data produced by an underlying two-dimensional complex-valued process. In particular, the data points x are generated by first solving for the temporal evolution of the two complex-valued processes p_1 and p_2 and then mapping them to the eight-dimensional space by using a randomly sampled linear mapping W.\nOne of the two processes used to generate the data is chosen to be much slower than the other one and both processes have a periodic component.
The model is trained for 5000 iterations using the Adam optimizer and a learning rate of 10 −3 .\nThe results for the convergence of the parameters λ 1 and λ 2 can be found in Figure . We note that the process which is slower decaying and thus more responsible for the long-term evolution of the system has a higher convergence rate than the faster process. With the obtained parameters λ as well as the trained autoencoder, we compute predictions based on the last time step used for training, i.e. we apply the encoder to obtain our latent representation and than use the latent dynamics to advance the latent representation in time.\nAfterwards, we employ the decoder to reconstruct the full high-dimensional system. The results can be found in Figure and show very good agreement between predictions and reference data. This example shows that our model is successfully able to carry out dimensionality reduction and moreover indicates that the convergence rate between latent processes can be different.\nThe latter is relevant when training models as for accurate predictions all latent processes and their dynamics should be converged.\n\nKuramoto-Sivashinsky\n\nFinally, we applied our algorithm to the KS equation and aim to identify a reduced-order model for the solution u(y, t): We employed periodic boundary conditions, µ = 1 and a domain size y ∈ [0, 22]. For this domain-size, the KS-equation exhibits a structurally stable chaotic attractor as discussed in The black lines divides the area for which training data was given from the area without raining data.\n; . The equation is discretized in space using a discretization step of 22 64 resulting in a state vector x of dimension 64 and a nonlinear system of coupled ODEs. This is solved using a stiff fourth-order solver We employed a non-linear encoder and decoder with four fully-connected layers each and ReLU-activation functions as well as Dropout Layers between the fully-connected layers.\nWe trained the model for 200000 iterations using Adam and a learning rate of 5 • 10 4 and assuming a five-dimensional latent space. We obtained the λ's in Figure . Four latent variables have λ's close to zero and thus a slow temporal dynamic that is responsible for the long-term evolution whereas one latent variable is quickly decaying.\nBased on the obtained parameters, we do predictions based on an unseen initial condition not contained in the training data. We are able to reconstruct the correct phase space based on our predictions despite only using a very limited amount of training data. The results for the phase space can be seen in Figure .\nAlthough the small-scale fluctuations in the temporal dynamics are not well captured, the model identifies the correct manifold which has a good accuracy compared to the reference solution. All phase-spaces were obtained by using a finite-difference operator on the data or predictions. These results are in accordance Interpretable reduced-order modeling with time-scale separation with whose LSTM-based temporal dynamic model was also able to find the correct phase space but not to track the actual dynamics for long-term predictions.\nOur model is not able to account for noise in the temporal evolution and thus dealing with chaotic, small-scale fluctuations is challenging. We believe that a probabilistic version of our algorithm could be advantageous here. 
This section contains a fully probabilistic formulation of the deterministic model discussed before.\nWe replace the autoencoder with a variational autoencoder and the ODE in the latent space with an SDE. The loss function which we optimize is the Evidence Lower Bound (ELBO).\n\nModel Structure\n\nWe postulate the following relations for our probabilistic model, using an Ornstein-Uhlenbeck (OU) process for each dimension i of the latent space and a Wiener process W_t in the latent space. We again assume that the latent variables z_t are complex-valued and a priori independent. Complex variables were chosen as their evolution includes harmonic components, which are observed in many physical systems.\nWe assume initial conditions z_{0,i} ∼ CN(0, σ²_{0,i}). The total parameters associated with the latent-space dynamics of our model are thus {σ²_{0,i}, σ²_i, λ_i} for i = 1, ..., c and will be denoted by θ together with all parameters responsible for the decoder mapping G (see next section). These parameters, along with the state variables z_t, have to be inferred from the data x_t.\nBased on probabilistic Slow Feature Analysis (SFA), we set σ²_i = 2 Re(λ_i) and σ²_{0,i} = 1. As a consequence, a priori, the latent dynamics are stationary. A derivation and reasoning for this choice can be found in Appendix A. Hence the only independent parameters are the λ_i, the imaginary part of which can account for periodic effects in the latent dynamics.\n\nVariational Autoencoder\n\nWe employ a variational autoencoder to account for a probabilistic mapping from the lower-dimensional representation z_n to the high-dimensional system x_n. In particular, we employ a probabilistic decoder. The encoder is used to infer the state variables z based on the given data and is thus defined in the inference and learning section.\n\nInference and Learning\n\nGiven the probabilistic relations above, our goal is to infer the latent variables z_{0:T} as well as all model parameters θ. We follow a hybrid Bayesian approach in which the posterior of the state variables is approximated using amortized variational inference, and Maximum-A-Posteriori (MAP) point estimates for θ are computed.\nThe application of Bayes' rule for each data sequence x_{0:T} leads to the corresponding posterior, where p(θ) denotes the prior on the model parameters. In the context of variational inference, we use a factorized approximate posterior, i.e. we infer only the mean µ and variance σ for each state variable based on the given data points.\nThis conditional density used for inference is the encoder counterpart to the probabilistic decoder defined in the section before. It can be readily shown that the optimal parameter values are found by maximizing the Evidence Lower Bound (ELBO) F(q_φ(z_{0:T}), θ), which is derived in Appendix B. We compute Monte Carlo estimates of the gradient of the ELBO with respect to φ and θ with the help of the reparametrization trick and carry out stochastic optimization with the ADAM algorithm.\n\nResults for the probabilistic extension\n\nWe applied our probabilistic version to the KS equation. We used the same settings as for the deterministic approach but considered up to 10 complex latent variables. The obtained λ's are shown in Figure. The probabilistic model allows us to quantify the uncertainty in predictions. 
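As a sketch of how the OU latent dynamics yield closed-form predictive uncertainty, the following function advances the mean and variance of a single complex latent variable under the stationary choice σ²_i = 2 Re(λ_i); `lam` and `dt` are placeholders, the decoder's own output variance still has to be added to the decoded uncertainty, and this is our illustration rather than the authors' code.

```python
# Exact Gaussian transition of the OU process dz = -lam * z dt + sigma dW with
# sigma^2 = 2 * Re(lam), i.e. unit stationary variance (assumed convention).
import numpy as np

def ou_predict(z_mean, z_var, lam, dt):
    decay = np.exp(-lam * dt)                              # complex decay factor
    mean = decay * z_mean
    var = np.abs(decay) ** 2 * z_var + (1.0 - np.exp(-2.0 * lam.real * dt))
    return mean, var
```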
In Figure predictions for various time-steps and the respective uncertainty bounds are shown for an unseen initial condition.\nDue to the chaotic nature of the KS-equation and the small amount of training data, the underlying linear dynamic of our model is only able to capture the full dynamics for a limited time horizon. Fortunately, due to the probabilistic approach the model is capable of capturing chaotic fluctuations with increasingly wide uncertainty bounds.\nWe also computed the phase space representation for the KS-equation based on the predictions obtained by our model and compare it with the reference solution. The probabilistic model identifies the correct manifold with a better accuracy than the deterministic model. As some of the small-scale fluctuations are accounted as noise, the resulting manifold is more concentrated at the origin and the obtained values are slightly smaller than the reference manifold although their shape is very similar.", "answers": ["By using a propagator in the latent space."], "length": 3083, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "47b9dbb6d85243ffdaf6d95e842986b30d49d95913f21b6a"} {"input": "Where can users go for troubleshooting and support?", "context": "'Quectel_QuecPython_BC25 开发板使用说明 版本:Quectel_QuecPython_BC25 开发板使用说明_V1.1日期:2021-11-30 状态:临时文件\nQuectel_QuecPython_BC25 开发板使用说明一、基本概述BC25_QuecPython_EVB_V1.1 开发板(本文简称“V1.1 开发板”)是专门针对 BC25 制造,是一款小巧便携的“口袋型”开发板。体型虽小,但是功能丰富,拥 有 SIM 卡座、板载天线、磁开关、LED 等元件。开发者仅需一条 USB Type-C 数据线即可轻松玩转开发板。二、开发板资源Quectel 移远 BC25 通信模组NANO SIM 自弹卡座USB Type-C 数据接口开机按键,唤醒按键磁开关单色灯GPIO 排针上海市闵行区田林路 1016 号科技绿洲 3 期(B 区)5 号楼 200233 邮箱: info@quectel.com 网址: www.quectel.com 1 / 6\n三、开发板介绍Quectel_QuecPython_BC25 开发板使用说明开发板是为方便开发者使用 QuecPython,而设计的一款基于 BC25 通信模块 的开发板,其上集成了开发常用的配置,可以满足开发者的开发需求。V1.1 开发板正面接口V1.1 开发板配置开发板配备了多种外设。明细如下:序 号名称型号是否支持接口类 型1磁开关KTH1601SL-ST3是GPIO2LED 灯S3528UG6W9TLC2G- 是GPIOTJ- 34微动按键GPIOA5--------是是---------上海市闵行区田林路 1016 号科技绿洲 3 期(B 区)5 号楼 200233 邮箱: info@quectel.com 网址: www.quectel.com 2 / 6\nQuectel_QuecPython_BC25 开发板使用说明四、功能详解4.1 磁开关开发板集成了一个磁开关。使用磁铁靠近,可使磁开关输出引脚变为低电平, 默认为高电平。4.2 LED 灯开发板集成了一颗高亮度灯珠,可以用来做显著指示灯。上海市闵行区田林路 1016 号科技绿洲 3 期(B 区)5 号楼 200233 邮箱: info@quectel.com 网址: www.quectel.com 3 / 6\n4.3 按键开发板集成了 2 个微动按键,其功能是 S1 为开机键,S2 为睡眠唤醒按键。Quectel_QuecPython_BC25 开发板使用说明五、调试步骤1.拿到开发板 V1.1 先插上 USB 安装串口驱动,在官方 QQ 群文件搜 CP210 或者自 行百度下载 CP210x 的串口芯片驱动进行安装。2.使用串口工具(例如 QCOM_V1.6)连接 BC25 的主串口(硬件 17、18 脚)。V1.1 选择 Enhanced COM 口,波特率选择 9600,打开串口,按下 PWK 键约一秒松开进 行开机,串口工具收到消息则代表开机成 功,然后按下 EINT 键串口显示 +QATWAKEUP 表示模组唤醒了。3.从 https://python.quectel.com/download 下载 BC25QuecPython 版本固件, 使用 Qflash(群文件下载)选择 BC25 的调试串口(硬件 38、39 脚),波特率选 择 921600,选择 lod 后缀的固件,按下 EINT 键串口工具显示模组已经唤醒串口 工具发 AT+QSCLK=0 可关闭睡眠(不会发 AT 则多按几次 EINT 键),点击 Start 开 始下载固件,下载进度条开始下载,等待下载完成。关闭以上所有工具,并给板 子断电重新上电。4.从 https://python.quectel.com/download 下载 QPYCOM 工具,直接解压运行 工具,选择主串口(同第 2 步),选择 57600 波特率,打开串口。再按 PWK 按键 进行开机,会看到 QPYCOM 有打印 mount.Type \"help()\" for more information.然后就可以进行 QuecPython 的交互调 试了。上海市闵行区田林路 1016 号科技绿洲 3 期(B 区)5 号楼 200233 邮箱: info@quectel.com 网址: www.quectel.com 4 / 6\n六、常见问题解决Quectel_QuecPython_BC25 开发板使用说明Q:模块的固件在哪?A:请登录 QuecPython 网站下载:http://python.quectel.com/downloadQ:哪里有开发板和其他常用资料?A:请登录 QuecPython 网站下载:http://python.quectel.com/downloadP.S. 
If you encounter any problems, please refer to the online documentation on the official website, search and ask questions in the QuecPython community, or contact our online support: QQ group 445121768.\nGet QuecPython development firmware and join the official discussion group:\nOfficial website: https://python.quectel.com\nWebsite downloads (documents and tools): https://python.quectel.com/download\nWebsite wiki (video tutorials, step-by-step tutorials, API library): https://python.quectel.com/wiki/#/\nDocumentation center (guides from getting started to advanced use, recommended reading): https://python.quectel.com/doc/\nWork order system: https://workorder.quectel.com/\nQuecPython community: https://forumschinese.quectel.com/c/function-subjects/quectpython/43\nOfficial QuecPython QQ developer group: 445121768\nWeChat official account: QuecPython\nQuectel OTA upgrade platform: https://cloudota.quectel.com/\nQuectel IoT management platform: https://python.quectel.com/doc/doc/Advanced_development/zh/QuecPython Cloud/QuecCloud.html\nAppendix 1: V1.1 development board silkscreen diagram\nAppendix 2: V1.1 development board schematic diagram 
PIU4013 PIU4012 PIU4011 PIU4010 PIU400 COR26 COR27 PIR2602 PIR2702 PIR2601 PIR2701 COR28 COR29 PIR2802 PIR2902 PIR2801 PIR2901 11223344DDCCBBAATitleNumberRevisionSizeA4Date:2021/11/1Sheet ofFile:E:\\\\\\\\..\\\\6.GPIO+UART.SchDocDrawn By:GPIOAUX_RXD_1V8AUX_TXD_1V8GNDD_TXD_1V8D_RXD_1V8S1S2GNDVDD_EXTPOWRKEYPIN19VBUSRI_SCI1GND2D+3D-4VIO5VDD6REGIN7VBUS8-RST9CTS_ECI10RTS_ECI11RXD_ECI12TXD_ECI13GPIO.1_ECI14GPIO.0_ECI15NC16RI_ECI17CTS_SCI18RTS_SCI19RXD_SCI20TXD_SCI21GPIO.2_SCI22GPIO.1_SCI23GPIO.0_SCI24GND0U4CP2105GND1uFC17R5NC1uFC4C50.1uFGNDGNDR6NCR20RADC_INM_TXD_1V8M_RXD_1V8PIN19PIN25PIN33PIN30PIN31PIN32USB_DMUSB_DPM_RXD_1V8M_TXD_1V8R260RR270RR280RR290RD_RXD_1V8D_TXD_1V8PIN3PIN4PIN5PIN6PIN20PIN21PIN22PIN23123456789101112131415J5Header 15123456789101112131415J6Header 15R170RR180RUSB_BOOTI2C_SCL_EC800NI2C_SDA_EC800N+3.8VRESETGNDVCC_1V8VCC_1V8+5VEC800N不焊接CP2105\n'", "answers": ["Online documentation, QuecPython community, online support: QQ group 445121768."], "length": 682, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "ec016e90ac78727f3c4aa176db472724bda53c07cb8cb696"} {"input": "When was McPherson County established as a county?", "context": "McPherson County (standard abbreviation: MP) is a county located in the U.S. state of Kansas. As of the 2020 census, the county population was 30,223. The largest city and county seat is McPherson. The county is named for Civil War General James B. McPherson.\n\nHistory\n\nEarly history\n\nFor many millennia, the Great Plains of North America was inhabited by nomadic Native Americans. From the 16th century to 18th century, the Kingdom of France claimed ownership of large parts of North America. In 1762, after the French and Indian War, France secretly ceded New France to Spain, per the Treaty of Fontainebleau. In 1802, Spain returned most of the land to France, but keeping title to about 7,500 square miles.\n\nIn 1803, most of the land for modern day Kansas was acquired by the United States from France as part of the 828,000 square mile Louisiana Purchase for 2.83 cents per acre. In 1848, after the Mexican–American War, the Treaty of Guadalupe Hidalgo with Spain brought into the United States all or part of land for ten future states, including southwest Kansas. In 1854, the Kansas Territory was organized, then in 1861 Kansas became the 34th U.S. state.\n\n19th century\n\nFrom the 1820s to 1870s, the Santa Fe Trail passed through, what is now McPherson County. The trail entered the county, east of Canton, then south of Galva, then north of Inman, and west towards Lyons. In 1855, Charles O. Fuller established a ranch adjacent to the Running Turkey Creek Crossing about two miles south and one mile east of Galva. Fuller's Ranch provided accommodations for travelers on the Santa Fe Trail and was probably the first white settlement in McPherson County.\n\nPeketon County was established in 1860, by the passage of a bill by S. N. Wood: An act to establish Peketon County. Section 1. - That all that territory west of the sixth principal meridian and south of Township 16, in Kansas Territory, be and the same is hereby erected into a county, to be known by the name of Peketon County. On February 17, 1865, Peketon County was abolished, and McPherson County was made a part of Marion County, which extended from the west line of Chase County to the present western boundary of Kansas.\n\nIn 1868, Solomon Stephens and L. N. Holmberg were appointed Justices of the Peace—the first officers in what is now McPherson County. 
The next year (1869) occurred the first election for the township, now the county of McPherson. McPherson was regularly organized as a county in the spring of 1870, a mass meeting being held at Sweadal. Sweadal, the county seat thus selected, was located about one mile and a half southwest of the present site of Lindsborg. In September, however, the County Commissioners resolved to meet at the latter place, McPherson which had already been located some two years.\n\nIn April, 1873, a petition was filed for the county seat re-location. It was signed by 483 voters, and a special election was accordingly ordered for June 10. Upon that day, McPherson received 605 votes, New Gottland 325, King City 3 and Lindsborg 1; McPherson's majority over all, 276. In May the McPherson Town Company had offered, as an inducement for the location of the county seat at this point, the free use of rooms for ten years, and the donation of two squares of land on the town site. The offer was accepted the next month, the County Commissioners selecting blocks 56 and 65. Thus the county seat was established at McPherson and has remained since.\n\nAs early as 1875, city leaders of Marion held a meeting to consider a branch railroad from Florence. In 1878, Atchison, Topeka and Santa Fe Railway and parties from Marion County and McPherson County chartered the Marion and McPherson Railway Company. In 1879, a branch line was built from Florence to McPherson, in 1880 it was extended to Lyons, in 1881 it was extended to Ellinwood. The line was leased and operated by the Atchison, Topeka and Santa Fe Railway. The line from Florence to Marion, was abandoned in 1968. In 1992, the line from Marion to McPherson was sold to Central Kansas Railway. In 1993, after heavy flood damage, the line from Marion to McPherson was abandoned. The original branch line connected Florence, Marion, Canada, Hillsboro, Lehigh, Canton, Galva, McPherson, Conway, Windom, Little River, Mitchell, Lyons, Chase, then connected with the original AT&SF main line at Ellinwood.\n\nIn 1887, the Chicago, Kansas and Nebraska Railway extended its main line from Herington to Pratt. This main line connected Herington, Ramona, Tampa, Durham, Waldeck, Canton, Galva, McPherson, Groveland, Inman, Medora, Hutchinson, Whiteside, Partridge, Arlington, Langdon, Turon, Preston, Natrona, Pratt. In 1888, this main line was extended to Liberal. Later, this line was extended to Tucumcari, New Mexico and Santa Rosa, New Mexico, where it made a connection with the Southern Pacific from El Paso, Texas. The Chicago, Kansas and Nebraska Railway was absorbed by the Chicago, Rock Island and Pacific Railway. This line is also called the \"Golden State Route\".\n\n20th century\nThe National Old Trails Road, also known as the Ocean-to-Ocean Highway, was established in 1912, and was routed through Windom, Conway, McPherson.\n\nGeography\n\nAccording to the U.S. Census Bureau, the county has a total area of , of which is land and (0.3%) is water.\n\nAdjacent counties\n Saline County (north)\n Dickinson County (northeast)\n Marion County (east)\n Harvey County (southeast)\n Reno County (southwest)\n Rice County (west)\n Ellsworth County (northwest)\n\nMajor highways\n Interstate 135\n U.S. Route 56\n U.S. Route 81\n K-4\n K-61\n K-153\n\nDemographics\n\nThe McPherson Micropolitan Statistical Area includes all of McPherson County.\n\n2000 census\nAs of the census of 2000, there were 29,554 people, 11,205 households, and 7,966 families residing in the county. 
The population density was 33 people per square mile (13/km2). There were 11,830 housing units at an average density of 13 per square mile (5/km2). The racial makeup of the county was 96.53% White, 0.81% Black or African American, 0.34% Native American, 0.32% Asian, 0.06% Pacific Islander, 0.79% from other races, and 1.16% from two or more races. 1.94% of the population were Hispanic or Latino of any race. 37.1% were of German, 12.9% Swedish, 12.1% American, 6.7% English and 6.3% Irish ancestry according to Census 2000.\n\nThere were 11,205 households, out of which 33.00% had children under the age of 18 living with them, 62.50% were married couples living together, 6.00% had a female householder with no husband present, and 28.90% were non-families. 25.50% of all households were made up of individuals, and 11.80% had someone living alone who was 65 years of age or older. The average household size was 2.49 and the average family size was 2.99.\n\nIn the county, the population was spread out, with 25.40% under the age of 18, 10.30% from 18 to 24, 25.20% from 25 to 44, 21.80% from 45 to 64, and 17.30% who were 65 years of age or older. The median age was 38 years. For every 100 females there were 95.90 males. For every 100 females age 18 and over, there were 92.90 males.\n\nThe median income for a household in the county was $41,138, and the median income for a family was $48,243. Males had a median income of $33,530 versus $21,175 for females. The per capita income for the county was $18,921. About 4.20% of families and 6.60% of the population were below the poverty line, including 5.20% of those under age 18 and 8.10% of those age 65 or over.\n\nGovernment\n\nPresidential elections\nMcPherson county is often carried by Republican candidates. The last time a Democratic candidate has carried this county was in 1964 by Lyndon B. Johnson.\n\nLaws\nFollowing amendment to the Kansas Constitution in 1986, the county remained a prohibition, or \"dry\", county until 1996, when voters approved the sale of alcoholic liquor by the individual drink with a 30 percent food sales requirement.\n\nEducation\n\nColleges\n McPherson College in McPherson\n Bethany College in Lindsborg\n Central Christian College in McPherson\n\nUnified school districts\n Smoky Valley USD 400\n McPherson USD 418\n Canton-Galva USD 419\n Moundridge USD 423\n Inman USD 448\n\nSchool district office in neighboring county\n Goessel USD 411\n Little River-Windom USD 444\n\nMuseums\n Birger Sandzén Memorial Gallery in Lindsborg\n McCormick-Deering Days Museum in Inman\n McPherson Museum in McPherson\n Lindsborg Old Mill & Swedish Heritage Museum in Lindsborg\n Kansas Motorcycle Museum in Marquette\n\nCommunities\n\nCities\n\n Canton\n Galva\n Inman\n Lindsborg\n Marquette\n McPherson (county seat) \n Moundridge\n Windom\n\nUnincorporated communities\n† means a Census-Designated Place (CDP) by the United States Census Bureau.\n Conway\n Elyria†\n Groveland\n Johnstown\n New Gottland\n Roxbury†\n\nGhost towns\n Alta Mills\n Battle Hill\n Christian\n Doles Park\n Elivon\n King City\n Sweadal\n\nTownships\nMcPherson County is divided into twenty-five townships. The cities of Lindsborg and McPherson are considered governmentally independent and are excluded from the census figures for the townships. 
In the following table, the population center is the largest city (or cities) included in that township's population total, if it is of a significant size.\n\nSee also\n List of people from McPherson County, Kansas\n National Register of Historic Places listings in McPherson County, Kansas\n McPherson Valley Wetlands\n Maxwell Wildlife Refuge\n\nReferences\n\nNotes\n\nFurther reading\n\n Wheeler, Wayne Leland. \"An Analysis of Social Change in a Swedish-Immigrant Community: The Case of Lindsborg, Kansas.\" (PhD dissertation, University of Missouri-Columbia; ProQuest Dissertations Publishing, 1959. 5905657).\n\nCounty\n Through the Years: A Pictorial History of McPherson County; McPherson Sentinel' Heritage House Publishing Co; 1992.\n McPherson County First Courthouse Built About 1869 or 1870; Lindsborg News-Record; March 30, 1959.\n Pioneer Life and Lore of McPherson County, Kansas; Edna Nyquist; Democratic-Opinion Press; 1932.\n A History of the Church of the Brethren in Kansas (includes McPherson College history); Elmer LeRoy Craik; McPherson Daily; Republican Press; 397 pages; 1922.\n Portrait and Biographical Record of Dickinson, Saline, McPherson, and Marion Counties, Kansas; Chapman Bros; 614 pages; 1893.\n Standard Atlas of McPherson County, Kansas; Geo. A. Ogle & Co; 82 pages; 1921.\n Plat Book of McPherson County, Kansas; North West Publishing Co; 50 pages; 1903.\n Edwards' Atlas of McPherson County, Kansas; John P. Edwards; 51 pages; 1884.\n\nTrails\n The Story of the Marking of the Santa Fe Trail by the Daughters of the American Revolution in Kansas and the State of Kansas; Almira Cordry; Crane Co; 164 pages; 1915. (Download 4MB PDF eBook)\n The National Old Trails Road To Southern California, Part 1 (LA to KC); Automobile Club Of Southern California; 64 pages; 1916. (Download 6.8MB PDF eBook)\n\nMennonite Settlements\n Impact of Mennonite settlement on the cultural landscape of Kansas; Brenda Martin; Kansas State University; 1985/1988. \n Mennonite settlement : the relationship between the physical and cultural environment; Susan Movle; University of Utah; 1975/1886.\n Status of Mennonite women in Kansas in their church and home relationships; Eva Harshbarger; Bluffton College; 1925/1945.\n\nExternal links\n\nCounty\n \n McPherson County - Directory of Public Officials\nHistorical\n , from Hatteberg's People'' on KAKE TV news\nMaps\n McPherson County Maps: Current, Historic, KDOT\n Kansas Highway Maps: Current, Historic, KDOT\n Kansas Railroad Maps: Current, 1996, 1915, KDOT and Kansas Historical Society\n\n \nKansas counties\n1867 establishments in Kansas\nPopulated places established in 1867", "answers": ["McPherson County was established as a county in 1867."], "length": 1860, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "4c9131da4fb2a9dbc8dadce39d20bdc302a092b5740c414a"} {"input": "What is the proposed approach in this research paper?", "context": "\\section{Introduction}\n\\label{sec:introduction}\n\nProbabilistic models have proven to be very useful in a lot of applications in signal processing where signal estimation is needed \\cite{rabiner1989tutorial,arulampalam2002tutorial,ji2008bayesian}. 
Some of their advantages are that 1) they force the designer to specify all the assumptions of the model, 2) they provide a clear separation between the model and the algorithm used to solve it, and 3) they usually provide some measure of uncertainty about the estimation.\n\nOn the other hand, adaptive filtering is a standard approach in estimation problems when the input is received as a stream of data that is potentially non-stationary. This approach is widely understood and applied to several problems such as echo cancellation \\cite{gilloire1992adaptive}, noise cancellation \\cite{nelson1991active}, and channel equalization \\cite{falconer2002frequency}.\n\nAlthough these two approaches share some underlying relations, there are very few connections in the literature. The first important attempt in the signal processing community to relate these two fields was the connection between a linear Gaussian state-space model (i.e. Kalman filter) and the RLS filter, by Sayed and Kailath \\cite{sayed1994state} and then by Haykin \\emph{et al.} \\cite{haykin1997adaptive}. The RLS adaptive filtering algorithm emerges naturally when one defines a particular state-space model (SSM) and then performs exact inference in that model. This approach was later exploited in \\cite{van2012kernel} to design a kernel RLS algorithm based on Gaussian processes.\n\nA first attempt to approximate the LMS filter from a probabilistic perspective was presented in \\cite{park2014probabilistic}, focusing on a kernel-based implementation. The algorithm of \\cite{park2014probabilistic} makes use of a Maximum a Posteriori (MAP) estimate as an approximation for the predictive step. However, this approximation does not preserve the estimate of the uncertainty in each step, therefore degrading the performance of the algorithm.\n\nIn this work, we provide a similar connection between state-space models and least-mean-squares (LMS). Our approach is based on approximating the posterior distribution with an isotropic Gaussian distribution. We show how the computation of this approximated posterior leads to a linear-complexity algorithm, comparable to the standard LMS. Similar approaches have already been developed for a variety of problems such as channel equalization using recurrent RBF neural networks \\cite{cid1994recurrent}, or Bayesian forecasting \\cite{harrison1999bayesian}. Here, we show the usefulness of this probabilistic approach for adaptive filtering.\n\nThe probabilistic perspective we adopt throughout this work presents two main advantages. Firstly, a novel LMS algorithm with adaptable step size emerges naturally with this approach, making it suitable for both stationary and non-stationary environments. The proposed algorithm has less free parameters than previous LMS algorithms with variable step size \\cite{kwong1992variable,aboulnasr1997robust,shin2004variable}, and its parameters are easier to be tuned w.r.t. these algorithms and standard LMS. Secondly, the use of a probabilistic model provides us with an estimate of the error variance, which is useful in many applications.\n\nExperiments with simulated and real data show the advantages of the presented approach with respect to previous works. 
However, we remark that the main contribution of this paper is that it opens the door to introduce more Bayesian machine learning techniques, such as variational inference and Monte Carlo sampling methods \\cite{barber2012bayesian}, to adaptive filtering.\\\\\n\n\n\\section{Probabilistic Model}\n\nThroughout this work, we assume the observation model to be linear-Gaussian with the following distribution,\n\n\\begin{equation}\np(y_k|{\\bf w}_k) = \\mathcal{N}(y_k;{\\bf x}_k^T {\\bf w}_k , \\sigma_n^2),\n\\label{eq:mess_eq}\n\\end{equation}\nwhere $\\sigma_n^2$ is the variance of the observation noise, ${\\bf x}_k$ is the regression vector and ${\\bf w}_k$ is the parameter vector to be sequentially estimated, both $M$-dimensional column vectors.\n\n\nIn a non-stationary scenario, ${\\bf w}_k$ follows a dynamic process. In particular, we consider a diffusion process (random-walk model) with variance $\\sigma_d^2$ for this parameter vector:\n\n\n\\begin{equation}\np({\\bf w}_k|{\\bf w}_{k-1})= \\mathcal{N}({\\bf w}_k;{\\bf w}_{k-1}, \\sigma_d^2 {\\bf I}),\n\\label{eq:trans_eq}\n\\end{equation}\nwhere $\\bf I$ denotes the identity matrix. In order to initiate the recursion, we assume the following prior distribution on ${\\bf w}_k$\n\n\\begin{equation}\np({\\bf w}_0)= \\mathcal{N}({\\bf w}_0;0, \\sigma_d^2{\\bf I}).\\nonumber\n\\end{equation}\n\n\\section{Exact inference in this model: Revisiting the RLS filter}\n\nGiven the described probabilistic SSM, we would like to infer the posterior probability distribution $p({\\bf w}_k|y_{1:k})$.\nSince all involved distributions are Gaussian, one can perform exact inference, leveraging the probability rules in a straightforward manner. The resulting probability distribution is\n\\begin{equation}\np({\\bf w}_k|y_{1:k}) = \\mathcal{N}({\\bf w}_k;{\\bf\\boldsymbol\\mu}_{k}, \\boldsymbol\\Sigma_{k}), \\nonumber\n\\end{equation}\nin which the mean vector ${\\bf\\boldsymbol\\mu}_{k}$ is given by\n\\begin{equation}\n{\\bf\\boldsymbol\\mu}_k = {\\bf\\boldsymbol\\mu}_{k-1} + {\\bf K}_k (y_k - {\\bf x}_k^T {\\bf\\boldsymbol\\mu}_{k-1}){\\bf x}_k, \\nonumber\n\\end{equation}\nwhere we have introduced the auxiliary variable\n\\begin{equation}\n{\\bf K}_k = \\frac{ \\left(\\boldsymbol\\Sigma_{k-1} + \\sigma_d^2 {\\bf I}\\right)}{{\\bf x}_k^T \\left(\\boldsymbol\\Sigma_{k-1} + \\sigma_d^2 {\\bf I}\\right) {\\bf x}_k + \\sigma_n^2}, \\nonumber\n\\end{equation}\nand the covariance matrix $\\boldsymbol\\Sigma_k$ is obtained as\n\\begin{equation}\n\\boldsymbol\\Sigma_k = \\left( {\\bf I} - {\\bf K}_k{\\bf x}_k {\\bf x}_k^T \\right) ( \\boldsymbol\\Sigma_{k-1} +\\sigma_d^2), \\nonumber\n\\end{equation}\nNote that the mode of $p({\\bf w}_k|y_{1:k})$, i.e. the maximum-a-posteriori estimate (MAP), coincides with the RLS adaptive rule\n\\begin{equation}\n{{\\bf w}}_k^{(RLS)} = {{\\bf w}}_{k-1}^{(RLS)} + {\\bf K}_k (y_k - {\\bf x}_k^T {{\\bf w}}_{k-1}^{(RLS)}){\\bf x}_k .\n\\label{eq:prob_rls}\n\\end{equation}\nThis rule is similar to the one introduced in \\cite{haykin1997adaptive}.\n\nFinally, note that the covariance matrix $\\boldsymbol\\Sigma_k$ is a measure of the uncertainty of the estimate ${\\bf w}_k$ conditioned on the observed data $y_{1:k}$. Nevertheless, for many applications a single scalar summarizing the variance of the estimate could prove to be sufficiently useful. In the next section, we show how such a scalar is obtained naturally when $p({\\bf w}_k|y_{1:k})$ is approximated with an isotropic Gaussian distribution. 
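Before moving to that approximation, the exact recursions above can be summarized compactly in code. The following NumPy sketch is ours, for illustration only; it processes one regressor/observation pair per call and follows the notation of the equations above.

\begin{verbatim}
# Hedged sketch (not part of the original paper): exact inference / RLS step.
import numpy as np

def rls_step(mu, Sigma, x, y, sigma_d2, sigma_n2):
    P = Sigma + sigma_d2 * np.eye(len(mu))        # predictive covariance
    K = P / (x @ P @ x + sigma_n2)                # gain matrix K_k
    mu_new = mu + (K @ x) * (y - x @ mu)          # posterior mean
    Sigma_new = (np.eye(len(mu)) - K @ np.outer(x, x)) @ P
    return mu_new, Sigma_new
\end{verbatim}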
We also show that this approximation leads to an LMS-like estimation.\n \n\n\n\\section{Approximating the posterior distribution: LMS filter }\n\nThe proposed approach consists in approximating the posterior distribution $p({\\bf w}_k|y_{1:k})$, in general a multivariate Gaussian distribution with a full covariance matrix, by an isotropic spherical Gaussian distribution \n\n\\begin{equation}\n\\label{eq:aprox_post}\n\\hat{p}({\\bf w}_{k}|y_{1:k})=\\mathcal{N}({\\bf w}_{k};{\\bf \\hat{\\boldsymbol\\mu}}_{k}, \\hat{\\sigma}_{k}^2 {\\bf I} ).\n\\end{equation}\n\nIn order to estimate the mean and covariance of the approximate distribution $\\hat{p}({\\bf w}_{k}|y_{1:k})$, we propose to select those that minimize the Kullback-Leibler divergence with respect to the original distribution, i.e., \n\n\\begin{equation}\n\\{\\hat{\\boldsymbol\\mu}_k,\\hat{\\sigma}_k\\}=\\arg \\displaystyle{ \\min_{\\hat{\\boldsymbol\\mu}_k,\\hat{\\sigma}_k}} \\{ D_{KL}\\left(p({\\bf w}_{k}|y_{1:k}))\\| \\hat{p}({\\bf w}_{k}|y_{1:k})\\right) \\}. \\nonumber\n\\end{equation}\n\nThe derivation of the corresponding minimization problem can be found in Appendix A. In particular, the optimal mean and the covariance are found as\n\\begin{equation}\n{\\hat{\\boldsymbol\\mu}}_{k} = {\\boldsymbol\\mu}_{k};~~~~~~ \\hat{\\sigma}_{k}^2 = \\frac{{\\sf Tr}\\{ \\boldsymbol\\Sigma_k\\} }{M}.\n\\label{eq:sigma_hat}\n\\end{equation}\n\n\nWe now show that by using \\eqref{eq:aprox_post} in the recursive predictive and filtering expressions we obtain an LMS-like adaptive rule. First, let us assume that we have an approximate posterior distribution at $k-1$, $\\hat{p}({\\bf w}_{k-1}|y_{1:k-1}) = \\mathcal{N}({\\bf w}_{k-1};\\hat{\\bf\\boldsymbol\\mu}_{k-1}, \\hat{\\sigma}_{k-1}^2 {\\bf I} )$. Since all involved distributions are Gaussian, the predictive distribution\nis obtained as %\n\\begin{eqnarray}\n\\hat{p}({\\bf w}_k|y_{1:k-1}) &=& \\int p({\\bf w}_k|{\\bf w}_{k-1}) \\hat{p}({\\bf w}_{k-1}|y_{1:k-1}) d{\\bf w}_{k-1} \\nonumber\\\\\n&=& \\mathcal{N}({\\bf w}_k;{\\bf\\boldsymbol\\mu}_{k|k-1}, \\boldsymbol\\Sigma_{k|k-1}), \n\\label{eq:approx_pred}\n\\end{eqnarray}\nwhere the mean vector and covariance matrix are given by\n\\begin{eqnarray}\n\\hat{\\bf\\boldsymbol\\mu}_{k|k-1} &=& \\hat{\\bf\\boldsymbol\\mu}_{k-1} \\nonumber \\\\\n\\hat{\\boldsymbol\\Sigma}_{k|k-1} &=& (\\hat{\\sigma}_{k-1}^2 + \\sigma_d^2 ){\\bf I}\\nonumber.\n\\end{eqnarray}\n\nFrom \\eqref{eq:approx_pred}, the posterior distribution at time $k$ can be computed using Bayes' Theorem and standard Gaussian manipulations (see for instance \\cite[Ch. 4]{murphy2012machine}). Then, we approximate the posterior $p({\\bf w}_k|y_{1:k})$ with an isotropic Gaussian,\n\\begin{equation}\n\\hat{p}({\\bf w}_k|y_{1:k}) = \\mathcal{N}({\\bf w}_k ; {\\hat{\\boldsymbol\\mu}}_{k}, \\hat{\\sigma}_k^2 {\\bf I} ),\\nonumber\n\\end{equation}\nwhere \n\\begin{eqnarray}\n{\\hat{\\boldsymbol\\mu}}_{k} &= & {\\hat{\\boldsymbol\\mu}}_{k-1}+ \\frac{ (\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) }{(\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) \\|{\\bf x}_k\\|^2 + \\sigma_n^2} (y_k - {\\bf x}_k^T {\\hat{\\boldsymbol\\mu}}_{k-1}){\\bf x}_k \\nonumber \\\\\n&=& {\\hat{\\boldsymbol\\mu}}_{k-1}+ \\eta_k (y_k - {\\bf x}_k^T {\\hat{\\boldsymbol\\mu}}_{k-1}){\\bf x}_k . 
\n\\label{eq:prob_lms}\n\\end{eqnarray}\nNote that, instead of a gain matrix ${\\bf K}_k$ as in Eq.~\\eqref{eq:prob_rls}, we now have a scalar gain $\\eta_k$ that operates as a variable step size.\n\n\nFinally, to obtain the posterior variance, which is our measure of uncertainty, we apply \\eqref{eq:sigma_hat} and the trick ${\\sf Tr}\\{{\\bf x}_k{\\bf x}_k^T\\}= {\\bf x}_k^T{\\bf x}_k= \\|{\\bf x}_k \\|^2$,\n\n\\begin{eqnarray}\n\\hat{\\sigma}_k^2 &=& \\frac{{\\sf Tr}(\\boldsymbol\\Sigma_k)}{M} \\\\\n&=& \\frac{1}{M}{\\sf Tr}\\left\\{ \\left( {\\bf I} - \\eta_k {\\bf x}_k {\\bf x}_k^T \\right) (\\hat{\\sigma}_{k-1}^2 +\\sigma_d^2)\\right\\} \\\\\n&=& \\left(1 - \\frac{\\eta_k \\|{\\bf x}_k\\|^2}{M}\\right)(\\hat{\\sigma}_{k-1}^2 +\\sigma_d^2).\n\\label{eq:sig_k}\n\\end{eqnarray}\nIf MAP estimation is performed, we obtain an adaptable step-size LMS estimation\n\n\\begin{equation}\n{\\bf w}_{k}^{(LMS)} = {\\bf w}_{k-1}^{(LMS)} + \\eta_k (y_k - {\\bf x}_k^T {\\bf w}_{k-1}^{(LMS)}){\\bf x}_k, \t\n\\label{eq:lms}\n\\end{equation}\nwith\n\\begin{equation}\n\\eta_k = \\frac{ (\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) }{(\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) \\|{\\bf x}_k\\|^2 + \\sigma_n^2}.\\nonumber\n\\end{equation}\nAt this point, several interesting remarks can be made:\n\n\\begin{itemize}\n\n\\item The adaptive rule \\eqref{eq:lms} has linear complexity since it does not require us to compute the full matrix $\\boldsymbol\\Sigma_k$.\n\n\\item For a stationary model, we have $\\sigma_d^2=0$ in \\eqref{eq:prob_lms} and \\eqref{eq:sig_k}. In this case, the algorithm remains valid and both the step size and the error variance, $\\hat{\\sigma}_{k}$, vanish over time $k$. \n\n\\item Finally, the proposed adaptable step-size LMS has only two parameters, $\\sigma_d^2$ and $\\sigma_n^2$, (and only one, $\\sigma_n^2$, in stationary scenarios) in contrast to other variable step-size algorithms \\cite{kwong1992variable,aboulnasr1997robust,shin2004variable}. More interestingly, both $\\sigma_d^2$ and $\\sigma_n^2$ have a clear underlying physical meaning, and they can be estimated in many cases. We will comment more about this in the next section. \n\\end{itemize}\n\n\n\n\\section{Experiments}\n\\label{sec:experiments}\n\nWe evaluate the performance of the proposed algorithm in both stationary and tracking experiments. In the first experiment, we estimate a fixed vector ${\\bf w}^{o}$ of dimension $M=50$. The entries of the vector are independently and uniformly chosen in the range $[-1,1]$. Then, the vector is normalized so that $\\|{\\bf w}^o\\|=1$. Regressors $\\boldsymbol{x}_{k}$ are zero-mean Gaussian vectors with identity covariance matrix. The additive noise variance is such that the SNR is $20$ dB. We compare our algorithm with standard RLS and three other LMS-based algorithms: LMS, NLMS \\cite{sayed2008adaptive}, VSS-LMS \\cite{shin2004variable}.\\footnote{The used parameters for each algorithm are: for RLS $\\lambda=1$, $\\epsilon^{-1}=0.01$; for LMS $\\mu=0.01$; for NLMS $\\mu=0.5$; and for VSS-LMS $\\mu_{max}=1$, $\\alpha=0.95$, $C=1e-4$.} The probabilistic LMS algorithm in \\cite{park2014probabilistic} is not simulated because it is not suitable for stationary environments.\n\nIn stationary environments, the proposed algorithm has only one parameter, $\\sigma^2_n$. We simulate both the scenario where we have perfectly knowledge of the amount of noise (probLMS1) and the case where the value $\\sigma^2_n$ is $100$ times smaller than the actual value (probLMS2). 
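For clarity, the adaptable step-size update that is simulated in the experiments can be sketched in a few lines of NumPy; this is our illustrative code, not part of the original algorithm description, and the variable names follow the equations above.

\begin{verbatim}
# Hedged sketch of the probabilistic LMS step (adaptable step size eta_k).
import numpy as np

def prob_lms_step(w, sigma2, x, y, sigma_d2, sigma_n2):
    M = len(w)
    s = sigma2 + sigma_d2                          # predictive variance
    eta = s / (s * (x @ x) + sigma_n2)             # variable step size eta_k
    w_new = w + eta * (y - x @ w) * x              # LMS-like mean update
    sigma2_new = (1.0 - eta * (x @ x) / M) * s     # scalar uncertainty update
    return w_new, sigma2_new
\end{verbatim}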
The Mean-Square Deviation (${\\sf MSD} = {\\mathbb E} \\| {\\bf w}_0 - {\\bf w}_k \\|^2$), averaged out over $50$ independent simulations, is presented in Fig. \\ref{fig:msd_statationary}.\n\n\n\n\\begin{figure}[htb]\n\\centering\n\\begin{minipage}[b]{\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=\\textwidth]{results_stationary_MSD}}\n\\end{minipage}\n\\caption{Performance in terms of MSD of probabilistic LMS with both optimal (probLMS1) and suboptimal (probLMS2) compared to LMS, NLMS, VS-LMS, and RLS.}\n\\label{fig:msd_statationary}\n\\end{figure}\n\nThe performance of probabilistic LMS is close to RLS (obviously at a much lower computational cost) and largely outperforms previous variable step-size LMS algorithms proposed in the literature. Note that, when the model is stationary, i.e. $\\sigma^2_d=0$ in \\eqref{eq:trans_eq}, both the uncertainty $\\hat{\\sigma}^2_k$, and the adaptive step size $\\eta_k$, vanish over time. This implies that the error tends to zero when $k$ goes to infinity. Fig. \\ref{fig:msd_statationary} also shows that the proposed approach is not very sensitive to a bad choice of its only parameter, as demonstrated by the good results of probLMS2, which uses a $\\sigma^2_n$ that is $100$ times smaller than the optimal value. \n\n\n\\begin{figure}[htb]\n\\centering\n\\begin{minipage}[b]{\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=\\textwidth]{fig2_final}}\n\\end{minipage}\n\\caption{Real part of one coefficient of the measured and estimated channel in experiment two. The shaded area represents two standard deviations from the prediction {(the mean of the posterior distribution)}.}\n\\label{fig_2}\n\\end{figure}\n\n\n\\begin{table}[ht]\n\\begin{footnotesize}\n\\setlength{\\tabcolsep}{2pt}\n\\def1.5mm{1.5mm}\n\\begin{center}\n\\begin{tabular}{|l@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|}\n\\hline\nMethod & LMS & NLMS & LMS-2013 & VSSNLMS & probLMS & RLS \\\\\n\\hline\n\\hline\nMSD (dB) &-28.45 &-21.07 &-14.36 &-26.90 &-28.36 &-25.97\\\\\n\\hline \n\\end{tabular}\n\\end{center}\n\\caption{Steady-state MSD of the different algorithms for the tracking of a real MISO channel.}\n\\label{tab:table_MSD}\n\\end{footnotesize}\n\n\\end{table}\n\\newpage\nIn a second experiment, we test the tracking capabilities of the proposed algorithm with {real} data of a wireless MISO channel acquired in a realistic indoor scenario. More details on the setup can be found in \\cite{gutierrez2011frequency}. Fig. \\ref{fig_2} shows the real part of one of the channels, and the estimate of the proposed algorithm. The shaded area represents the estimated uncertainty for each prediction, i.e. $\\hat{\\mu}_k\\pm2\\hat{\\sigma}_k$. Since the experimental setup does not allow us to obtain the optimal values for the parameters, we fix these parameters to their values that optimize the steady-state mean square deviation (MSD). \\hbox{Table \\ref{tab:table_MSD}} shows this steady-state MSD of the estimate of the MISO channel with different methods. As can be seen, the best tracking performance is obtained by standard LMS and the proposed method. \n\n\n\n\n\n\\section{Conclusions and Opened Extensions}\n\\label{sec:conclusions}\n\n{We have presented a probabilistic interpretation of the least-mean-square filter. The resulting algorithm is an adaptable step-size LMS that performs well both in stationary and tracking scenarios. 
Moreover, it has fewer free parameters than previous approaches and these parameters have a clear physical meaning. Finally, as stated in the introduction, one of the advantages of having a probabilistic model is that it is easily extensible:}\n\n\\begin{itemize}\n\\item If, instead of using an isotropic Gaussian distribution in the approximation, we used a Gaussian with diagonal covariance matrix, we would obtain a similar algorithm with different step sizes and measures of uncertainty, for each component of ${\\bf w}_k$. Although this model can be more descriptive, it needs more parameters to be tuned, and the parallelism with LMS vanishes.\n\\item Similarly, if we substitute the transition model of \\eqref{eq:trans_eq} by an Ornstein-Uhlenbeck process, \n\n\\begin{equation}\np({\\bf w}_k|{\\bf w}_{k-1})= \\mathcal{N}({\\bf w}_k;\\lambda {\\bf w}_{k-1}, \\sigma_d^2), \\nonumber\n\\label{eq:trans_eq_lambda}\n\\end{equation}\na similar algorithm is obtained but with a forgetting factor $\\lambda$ multiplying ${\\bf w}_{k-1}^{(LMS)}$ in \\eqref{eq:lms}. This algorithm may have improved performance under such a kind of autoregresive dynamics of ${\\bf w}_{k}$, though, again, the connection with standard LMS becomes dimmer.\n\n\\item As in \\cite{park2014probabilistic}, the measurement model \\eqref{eq:mess_eq} can be changed to obtain similar adaptive algorithms for classification, ordinal regression, and Dirichlet regression for compositional data. \n\n\\item A similar approximation technique could be applied to more complex dynamical models, i.e. switching dynamical models \\cite{barber2010graphical}. The derivation of efficient adaptive algorithms that explicitly take into account a switch in the dynamics of the parameters of interest is a non-trivial and open problem, though the proposed approach could be useful.\n\n\\item Finally, like standard LMS, this algorithm can be kernelized for its application in estimation under non-linear scenarios.\n\n\\end{itemize}\n\n\n\\begin{appendices}\n\n\\section{KL divergence between a general gaussian distribution and an isotropic gaussian}\n\\label{sec:kl}\n\n We want to approximate $p_{{\\bf x}_1}(x) = \\mathcal{N}({\\bf x}; \\boldsymbol\\mu_1,\\boldsymbol\\Sigma_1)$ by $p_{{\\bf x}_2}({\\bf x}) = \\mathcal{N}({\\bf x}; \\boldsymbol\\mu_2,\\sigma_2^2 {\\bf I})$. In order to do so, we have to compute the parameters of $p_{{\\bf x}_2}({\\bf x})$, $\\boldsymbol\\mu_2$ and $\\sigma_2^2$, that minimize the following Kullback-Leibler divergence,\n\n\\begin{eqnarray}\nD_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) &=&\\int_{-\\infty}^{\\infty} p_{{\\bf x}_1}({\\bf x}) \\ln{\\frac{p_{{\\bf x}_1}({\\bf x})}{p_{{\\bf x}_2}({\\bf x})}}d{\\bf x} \\nonumber \\\\\n&= & \\frac{1}{2} \\{ -M + {\\sf Tr}(\\sigma_2^{-2} {\\bf I}\\cdot \\boldsymbol\\Sigma_1^{-1}) \\nonumber \\\\\n & & + (\\boldsymbol\\mu_2 - \\boldsymbol\\mu_1 )^T \\sigma^{-2}_2{\\bf I} (\\boldsymbol\\mu_2 - \\boldsymbol\\mu_1 ) \\nonumber \\\\\n & & + \\ln \\frac{{\\sigma_2^2}^M}{\\det\\boldsymbol\\Sigma_1} \\}. 
\n\\label{eq:divergence}\n\\end{eqnarray}\nUsing symmetry arguments, we obtain \n\\begin{equation}\n\\boldsymbol\\mu_2^{*} =\\arg \\displaystyle{ \\min_{\\boldsymbol\\mu_2}} \\{ D_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) \\} = \\boldsymbol\\mu_1.\n\\end{equation}\nThen, \\eqref{eq:divergence} gets simplified into \n\n\\begin{eqnarray}\nD_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) = \\frac{1}{2}\\lbrace { -M + {\\sf Tr}(\\frac{\\boldsymbol\\Sigma_1}{\\sigma_2^{2}}) + \\ln \\frac{\\sigma_2^{2M}}{\\det\\boldsymbol\\Sigma_1}}\\rbrace.\n\\end{eqnarray}\nThe variance $\\sigma_2^2$ is computed in order to minimize this Kullback-Leibler divergence as\n\n\\begin{eqnarray}\n\\sigma_2^{2*} &=& \\arg\\min_{\\sigma_2^2} D_{KL}(P_{x_1}\\| P_{x_2}) \\nonumber \\\\\n &=& \\arg\\min_{\\sigma_2^2}\\{ \\sigma_2^{-2}{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\} + M\\ln \\sigma_2^{2} \\} .\n\\end{eqnarray}\nDeriving and making it equal zero leads to\n\n\\begin{equation}\n\\frac{\\partial}{\\partial \\sigma_2^2} \\left[ \\frac{{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\}}{\\sigma_2^{2}} + M \\ln \\sigma_2^{2} \\right] = \\left. {\\frac{M}{\\sigma_2^{2}}-\\frac{{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\}}{(\\sigma_2^{2})^2}}\\right|_{\\sigma_2^{2}=\\sigma_2^{2*}}\\left. =0 \\right. .\n\\nonumber\n\\end{equation}\nFinally, since the divergence has a single extremum in $R_+$,\n\\begin{equation}\n\\sigma_2^{2*} = \\frac{{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\}}{M}.\n\\end{equation}\n\n\n\n\n\\end{appendices}\n\n\\vfill\n\\clearpage\n\n\\bibliographystyle{IEEEbib}\n", "answers": ["This research paper proposed an approach based on approximating the posterior distribution with an isotropic Gaussian distribution."], "length": 2556, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "394cb48c037481d97cdf1dbd7adef475061b9e77235842e2"} {"input": "How many years has KSTP-FM 102.1 been on the air?", "context": "KSTP (1500 AM; SKOR North) is a commercial AM radio station licensed to Saint Paul, Minnesota. It is the flagship AM radio station of Hubbard Broadcasting, which also owns several other television and radio stations across the United States. KSTP has a sports radio format and is the ESPN Radio Network affiliate for Minneapolis-St. Paul. The radio studios are on University Avenue in Minneapolis, shared with sister stations KSTP-FM, KSTP-TV, KTMY, and KSTC-TV. On weekdays, KSTP airs local sports shows from 9 a.m. to 9 p.m. and carries ESPN programming weekday mornings, late nights and weekends. Some KSTP shows are simulcast on other sports radio stations in the region.\n\nKSTP runs the maximum power for AM stations, 50,000 watts. It shares clear-channel, Class A status on 1500 AM with WFED in Washington, D.C. KSTP broadcasts a directional signal at night, using a three-tower array, with its transmitter on U.S. Route 61 at Beam Avenue in Maplewood. Programming is also heard on 250 watt FM translator K235BP at 94.9 MHz in Bemidji.\n\nHistory\n\nWAMD and KFOY\nKSTP's start in 1928 was the product of a merger between two pioneering Twin Cities stations: WAMD (\"Where All Minneapolis Dances\") in Minneapolis, first licensed on February 16, 1925 to Stanley E. Hubbard, and KFOY in St. Paul, first licensed on March 12, 1924 to the Beacon Radio Service in St. Paul.\n\nFollowing a few test transmissions, WAMD made its formal debut broadcast on February 22, 1925. (In later interviews Stanley Hubbard traced WAMD's start to April 1924.) 
It was located at the Marigold Dance Garden, and featured nightly \"Midnight Frolics\" broadcasts by the ballroom's orchestra. It is claimed that WAMD was the first radio station to be completely supported by running paid advertisements. Effective June 15, 1927, WAMD was assigned to 1330 kHz.\n\nOn November 11, 1927 WAMD's transmitter site at Oxboro Heath on Lyndale Avenue South burned down, two weeks after the station had been sold to the National Battery Company. An initial arrangement was made to carry WAMD's programs over WRHM (now WWTC), transmitting on WAMD's 1330 kHz frequency. Beginning on November 24, 1927 the WAMD broadcasts, still on 1330 kHz, were shifted to KFOY's facility in St. Paul. (At this time KFOY was assigned to 1050 kHz). The next day it was announced that National Battery had purchased KFOY, and as of December 1, 1927 both KFOY and WAMD were reassigned to 1350 kHz. WAMD continued making regular broadcasts until the end of March 1928, while KFOY, although it continued to be licensed for a few more months on a time-sharing basis with WAMD, ceased operations at this point.\n\nNational Battery Company\nIn mid-December 1927, the National Battery Company announced it had received permission from the Federal Radio Commission (FRC) to build a new station, with the call letters KSTP, operating from a transmitter site to be constructed three miles south of Wescott. The next month it was reported that the new station, still under construction, had been assigned to 1360 kHz. KSTP made its debut broadcast on March 29, 1928. Although technically it was a separate station from WAMD and KFOY, both of which were formally deleted on April 30, 1928, overall KSTP was treated as the direct successor to a consolidated WAMD and KFOY.\n\nHubbard became the merged station's general manager, acquiring controlling interest in 1941. A month after the merger, KSTP became an affiliate for the NBC Red Network. It remained with NBC for 46 years. On November 11, 1928, under the provisions of the FRC's General Order 40, KSTP was assigned to a \"high-powered regional\" frequency of 1460 kHz. The only other station assigned to this frequency was WTFF in Mount Vernon Hills, Virginia (later WJSV, now WFED, Washington, D.C.). On February 7, 1933, the FRC authorized KSTP to increase its daytime power to 25 KW. In 1938 and 1939 KSTP also operated a high-fidelity AM \"experimental audio broadcasting station\" Apex station, W9XUP, originally on 25,950 kHz and later on 26,150 kHz. In 1941, as part of the implementation of the North American Regional Broadcasting Agreement, KSTP was assigned to its current \"clear channel\" frequency of 1500 kHz, with the provision that it and WJSV, as \"Class I-B\" stations, had to maintain directional antennas at night in order to mutually protect each other from interference. An FM station, KSTP-FM, was founded in 1946 but shut down in 1952.\n\nHubbard reportedly acquired an RCA TV camera in 1939, and started experimenting with television broadcasts. But World War II put a hold on the development of television. In 1948, with the war over, KSTP-TV became the first television station in Minnesota. With KSTP 1500 already associated with NBC Radio, KSTP-TV became an NBC Television Network affiliate. From 1946 to 1952, KSTP also had an FM counterpart. KSTP-FM 102.1 was only on the air four years. 
There were few radios equipped to receive FM signals in that era, and management decided to discontinue FM broadcasts.\n\nMOR and Top 40\nAs network programming moved from radio to television, KSTP programmed a full service Middle of the Road (MOR) radio format, in the shadow of its chief competitor, CBS Radio affiliate 830 WCCO. In 1965, a new FM station, reviving the KSTP-FM call sign, was put on the air, largely simulcasting the AM station. But by the late 1960s, KSTP-FM began a separate format of beautiful music. KSTP was the radio home of the Minnesota Vikings football team from 1970 to 1975. \n\nIn 1973, KSTP broke away from its longtime adult MOR sound and became one of four area stations at the time to program a Top 40 format. \"15 KSTP, The Music Station\" competed with Top 40 AM rivals WDGY, KDWB and later, WYOO. The competition would eventually shake itself out, with outrageous rocker WYOO dropping out after being sold in 1976, and then the staid WDGY switching to country music the following year. As for uptempo hits station 15 KSTP, it went from a tight Top 40 format to leaning adult rock in 1978, to leaning adult contemporary in 1979, to evolving into adult contemporary/talk by 1980. In 1982, it officially shifted to talk. Most Top 40 rock music, by this time, had moved to the FM band.\n\nPast Personalities\n\nNotable hosts who have been on KSTP include John Hines, Jesse Ventura, Larry Carolla, Tom Barnard, Big Al Davis, Don Vogel, John MacDougall, Griff, Mike Edwards, Geoff Charles, Joe Soucheray, James Lileks, Leigh Kamman, Barbara Carlson, Peter Thiele, Tom Mischke, Jason Lewis, Chuck Knapp, Machine Gun Kelly, Charle Bush, Mark O'Connell and Paul Brand. These broadcasters were supported by producers such as Bruce Huff, Rob Pendleton, Alison Brown, Jean Bjorgen, David Elvin (who Vogel dubbed the \"Steven Spielberg of Talk Radio\"), Mitch Berg and others.\n\nThe station has, for the most part, emphasized local hosts over the years. But in 1988, KSTP was one of Rush Limbaugh's first affiliates when his conservative talk show was rolled out for national syndication. (Clear Channel-owned KTLK-FM took over rights to Limbaugh's show in January 2006). Other syndicated hosts previously heard on KSTP include Sean Hannity, Bruce Williams, Larry King, and Owen Spann.\n\nSports Radio\nKSTP switched to Sports Radio on February 15, 2010. As the station had to wait for ESPN's contract with rival KFAN and its sister station KFXN to expire, it did not become an ESPN Radio affiliate until April 12, the same day that the Minnesota Twins were scheduled to play the first game in their new ball park, Target Field, against the Boston Red Sox. As a result Coast to Coast AM and Live on Sunday Night, it's Bill Cunningham were retained during this period. One ESPN Radio network program, The Herd with Colin Cowherd, was picked up by KSTP immediately following the format change.\n\nIn 2018, the station was approved for an FM translator on 94.1 FM, broadcasting from a transmitter atop the IDS Center in downtown Minneapolis. The two-watt signal threw most of its power to the west, preventing interference to low powered FM stations on the same channel including WFNU-LP in St. Paul. With only two watts of power, however, the signal was limited to the immediate downtown area surrounding the IDS Center. It later acquired a 250 watt translator, K235BP at 94.9 MHz. 
The original translator was discontinued.\n\nOn January 15, 2019, KSTP rebranded as \"SKOR North\" (a reference to the Vikings team song/chant, \"Skol, Vikings\"), with local programming between 12 noon and 7 pm. About a year later, in May of 2020, KSTP suspended most of its local programming and laid off nearly all of its local staff. Station management cited the economic toll of the coronavirus for the changes. Sports broadcasting continues, primarily composed of ESPN radio network broadcasts.\n\nSports Teams\n\nKSTP-AM served as the radio flagship for the Minnesota Vikings football team from 1970 to 1975.\n\nOn August 1, 2006, the station announced that it would be the new flagship station for the Minnesota Twins baseball team, effective with the start of the 2007 season. The Twins had been on rival WCCO since arriving in Minnesota in 1961. KSTP served as the flagship for the Twins until the end of the 2012 season, when games moved to 96.3 KTWN-FM (now KMWA). The Twins have since returned to WCCO 830.\n\nThe switch to a fairly weak FM station caused dissent among some listeners, particularly in communities that had trouble picking up KSTP 1500. Although KSTP is the state's second most powerful AM station, it must operate directionally at night, delivering a reduced signal to parts of the market. WCCO, by comparison, offers a signal with a wider coverage area during the day than KSTP does, with WCCO's non-directional 50,000 watt signal. In response, the Twins have expanded the number of affiliates.\n\nOn March 9, 2011, KSTP announced it would be the new flagship for the University of Minnesota Golden Gophers men's and women's basketball and men's ice hockey, ending a 68-year run on WCCO. The rights have since moved to KFXN-FM, which already aired Gopher football.\n\nOn March 2, 2017, KSTP announced it would be the first radio broadcaster for Minnesota United FC. The move brings live soccer action to 1500 AM.\n\nPrevious logos\n\nReferences\n\nExternal links\nKSTP website\n\nFCC History Cards for KSTP (covering 1928-1980)\nRadiotapes.com Historic Minneapolis/St. Paul airchecks dating back to 1924 including KSTP and other Twin Cities radio stations.\nRick Burnett's TwinCitiesRadioAirchecks.com has additional airchecks of KSTP and other Twin Cities radio stations from the '60s and '70s, including Chuck Knapp's 2nd show on KSTP.\n\nHubbard Broadcasting\nESPN Radio stations\nPeabody Award winners\nRadio stations in Minneapolis–Saint Paul\nRadio stations established in 1925\n1925 establishments in Minnesota\nMinnesota Kicks\nSports radio stations in the United States\nClear-channel radio stations.", "answers": ["Four years."], "length": 1802, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "622b577978afc1346c315f98a18678c8e6d5ccbf82511f74"} {"input": "Does DUO contain more instances per image than COCO?", "context": "\\section{Introduction}\nUnderwater robot picking is to use the robot to automatically capture sea creatures like holothurian, echinus, scallop, or starfish in an open-sea farm where underwater object detection is the key technology for locating creatures. Until now, the datasets used in this community are released by the Underwater Robot Professional Contest (URPC$\\protect\\footnote{Underwater Robot Professional Contest: {\\bf http://en.cnurpc.org}.}$) beginning from 2017, in which URPC2017 and URPC2018 are most often used for research. 
Unfortunately, as the information listed in Table \\ref{Info}, URPC series datasets do not provide the annotation file of the test set and cannot be downloaded after the contest. \nTherefore, researchers \\cite{2020arXiv200511552C,2019arXiv191103029L} first have to divide the training data into two subsets, including a new subset of training data and a new subset of testing data, and then train their proposed method and other \\emph{SOTA} methods. On the one hand, training other methods results in a significant increase in workload. On the other hand, different researchers divide different datasets in different ways, \n\\begin{table}[t]\n\\renewcommand\\tabcolsep{3.5pt}\n\\caption{Information about all the collected datasets. * denotes the test set's annotations are not available. \\emph{3} in Class means three types of creatures are labeled, \\emph{i.e.,} holothurian, echinus, and scallop. \\emph{4} means four types of creatures are labeled (starfish added). Retention represents the proportion of images that retain after similar images have been removed.}\n\\centering \n\\begin{tabular}{|l|c|c|c|c|c|}\n\\hline\nDataset&Train&Test&Class&Retention&Year \\\\ \n\\hline \nURPC2017&17,655&985*&3&15\\%&2017 \\\\\n\\hline\nURPC2018&2,901&800*&4&99\\%&2018 \\\\\n\\hline\nURPC2019&4,757&1,029*&4&86\\%&2019 \\\\\n\\hline\nURPC2020$_{ZJ}$&5,543&2,000*&4&82\\%&2020 \\\\\n\\hline\nURPC2020$_{DL}$&6,575&2,400*&4&80\\%&2020 \\\\\n\\hline\nUDD&1,827&400&3&84\\%&2020 \\\\\n\\hline \n\n\\end{tabular}\n\\label{Info}\n\\end{table}\n\\begin{figure*}[htbp]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{example.pdf}\n\\end{center}\n \\caption{Examples in DUO, which show a variety of scenarios in underwater environments.}\n\\label{exam}\n\\end{figure*}\ncausing there is no unified benchmark to compare the performance of different algorithms.\nIn terms of the content of the dataset images, there are a large number of similar or duplicate images in the URPC datasets. URPC2017 only retains 15\\% images after removing similar images compared to other datasets. Thus the detector trained on URPC2017 is easy to overfit and cannot reflect the real performance.\nFor other URPC datasets, the latter also includes images from the former, \\emph{e.g.}, URPC2019 adds 2,000 new images compared to URPC2018; compared with URPC2019, URPC2020$_{ZJ}$ adds 800 new images. The URPC2020$_{DL}$ adds 1,000 new images compared to the URPC2020$_{ZJ}$. It is worth mentioning that the annotation of all datasets is incomplete; some datasets lack the starfish labels and it is easy to find error or missing labels. \\cite{DBLP:conf/iclr/ZhangBHRV17} pointed out that although the CNN model has a strong fitting ability for any dataset, the existence of dirty data will significantly weaken its robustness.\nTherefore, a reasonable dataset (containing a small number of similar images as well as an accurate annotation) and a corresponding recognized benchmark are urgently needed to promote community development.\n\n\nTo address these issues, we introduce a dataset called Detecting Underwater Objects (DUO) by collecting and re-annotating all the available underwater datasets. It contains 7,782 underwater images after deleting overly similar images and has a more accurate annotation with four types of classes (\\emph{i.e.,} holothurian, echinus, scallop, and starfish). \nBesides, based on the MMDetection$\\protect\\footnote{MMDetection is an open source object detection toolbox based on PyTorch. 
{\\bf https://github.com/open-mmlab/mmdetection}}$ \\cite{chen2019mmdetection} framework, we also provide a \\emph{SOTA} detector benchmark containing efficiency and accuracy indicators, providing a reference for both academic research and industrial applications. It is worth noting that JETSON AGX XAVIER$\\protect\\footnote{JETSON AGX XAVIER is an embedded development board produced by NVIDIA which could be deployed in an underwater robot. Please refer {\\bf https://developer.nvidia.com/embedded/jetson-agx-xavier-developer-kit} for more information.}$ was used to assess all the detectors in the efficiency test in order to simulate robot-embedded environment. DUO will be released in https://github.com/chongweiliu soon.\n\nIn summary, the contributions of this paper can be listed as follows.\n\n $\\bullet$ By collecting and re-annotating all relevant datasets, we introduce a dataset called DUO with more reasonable annotations as well as a variety of underwater scenes.\n\n $\\bullet$ We provide a corresponding benchmark of \\emph{SOTA} detectors on DUO including efficiency and accuracy indicators which could be a reference for both academic research and industrial applications. \n\n\n\\pagestyle{empty}\n\\section{Background}\nIn the year of 2017, underwater object detection for open-sea farming is first proposed in the target recognition track of Underwater Robot Picking Contest 2017$\\protect\\footnote{From 2020, the name has been changed into Underwater Robot Professional Contest which is also short for URPC.}$ (URPC2017) which aims to promote the development of theory, technology, and industry of the underwater agile robot and fill the blank of the grabbing task of the underwater agile robot. The competition sets up a target recognition track, a fixed-point grasping track, and an autonomous grasping track. The target recognition track concentrates on finding the {\\bf high accuracy and efficiency} algorithm which could be used in an underwater robot for automatically grasping.\n\nThe datasets we used to generate the DUO are listed below. The detailed information has been shown in Table \\ref{Info}.\n\n {\\bf URPC2017}: It contains 17,655 images for training and 985 images for testing and the resolution of all the images is 720$\\times$405. All the images are taken from 6 videos at an interval of 10 frames. However, all the videos were filmed in an artificial simulated environment and pictures from the same video look almost identical. \n \n {\\bf URPC2018}: It contains 2,901 images for training and 800 images for testing and the resolutions of the images are 586$\\times$480, 704$\\times$576, 720$\\times$405, and 1,920$\\times$1,080. The test set's annotations are not available. Besides, some images were also collected from an artificial underwater environment.\n \n {\\bf URPC2019}: It contains 4,757 images for training and 1029 images for testing and the highest resolution of the images is 3,840$\\times$2,160 captured by a GOPro camera. The test set's annotations are also not available and it contains images from the former contests.\n \n {\\bf URPC2020$_{ZJ}$}: From 2020, the URPC will be held twice a year. It was held first in Zhanjiang, China, in April and then in Dalian, China, in August. URPC2020$_{ZJ}$ means the dataset released in the first URPC2020 and URPC2020$_{DL}$ means the dataset released in the second URPC2020. This dataset contains 5,543 images for training and 2,000 images for testing and the highest resolution of the images is 3,840$\\times$2,160. 
The test set's annotations are also not available.\n \n {\\bf URPC2020$_{DL}$}: This dataset contains 6,575 images for training and 2,400 images for testing and the highest resolution of the images is 3,840$\\times$2,160. The test set's annotations are also not available.\n \n {\\bf UDD \\cite{2020arXiv200301446W}}: This dataset contains 1,827 images for training and 400 images for testing and the highest resolution of the images is 3,840$\\times$2,160. All the images are captured by a diver and a robot in a real open-sea farm.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{pie.pdf}\n\\end{center}\n \\caption{The proportion distribution of the objects in DUO.}\n\\label{pie}\n\\end{figure}\n\n\n\n\\begin{figure*}\n \\centering\n \\subfigure[]{\\includegraphics[width=3.45in]{imagesize.pdf}}\n \\subfigure[]{\\includegraphics[width=3.45in]{numInstance.pdf}}\n \\caption{(a) The distribution of instance sizes for DUO; (b) The number of categories per image.}\n \\label{sum}\n\\end{figure*}\n\\section{Proposed Dataset}\n\n\\subsection{Image Deduplicating}\nAs we explained in Section 1, there are a large number of similar or repeated images in the series of URPC datasets. Therefore, it is important to delete duplicate or overly similar images and keep a variety of underwater scenarios when we merge these datasets together. Here we employ the Perceptual Hash algorithm (PHash) to remove those images. PHash has the special property that the hash value is dependent on the image content, and it remains approximately the same if the content is not significantly modified. Thus we can easily distinguish different scenarios and delete duplicate images within one scenario. \n\nAfter deduplicating, we obtain 7,782 images (6,671 images for training; 1,111 for testing). The retention rate of the new dataset is 95\\%, which means that there are only a few similar images in the new dataset. Figure \\ref{exam} shows that our dataset also retains various underwater scenes.\n\n\\subsection{Image Re-annotation}\nDue to the small size of objects and the blur underwater environment, there are always missing or wrong labels in the existing annotation files. In addition, some test sets' annotation files are not available and some datasets do not have the starfish annotation. In order to address these issues, we follow the next process which combines a CNN model and manual annotation to re-annotate these images. Specifically, we first train a detector (\\emph{i.e.,} GFL \\cite{li2020generalized}) with the originally labeled images. After that, the trained detector predicts all the 7,782 images. We treat the prediction as the groundtruth and use it to train the GFL again. We get the final GFL prediction called {\\bf the coarse annotation}. Next, we use manual correction to get the final annotation called {\\bf the fine annotation}. Notably, we adopt the COCO \\cite{Belongie2014} annotation form as the final format.\n\\subsection{Dataset Statistics}\n{\\bf The proportion of classes}: The total number of objects is 74,515. Holothurian, echinus, scallop, and starfish are 7,887, 50,156, 1,924, and 14,548, respectively. Figure \\ref{pie} shows the proportion of each creatures where echinus accounts for 67.3\\% of the total. 
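As a brief aside to the deduplication step described in the Image Deduplicating subsection above, the perceptual-hash idea can be sketched in a few lines of Python. This is only an illustration of the approach, not the script actually used to build DUO: the \texttt{imagehash}/\texttt{Pillow} packages, the \texttt{frames/*.jpg} path and the 8-bit Hamming-distance threshold are all assumptions made for the example.

\begin{verbatim}
# Sketch of PHash-based near-duplicate removal (illustrative only).
import glob
import imagehash                    # pip install imagehash pillow
from PIL import Image

paths = sorted(glob.glob("frames/*.jpg"))   # placeholder image folder
kept_paths, kept_hashes = [], []
for path in paths:
    h = imagehash.phash(Image.open(path))   # 64-bit perceptual hash
    # Keep the frame only if it differs from every retained frame by
    # more than 8 bits (Hamming distance between the two hashes).
    if all(h - k > 8 for k in kept_hashes):
        kept_paths.append(path)
        kept_hashes.append(h)
print("kept", len(kept_paths), "of", len(paths), "images")
\end{verbatim}

A lower threshold retains more near-identical frames, while a higher one risks discarding genuinely different scenes, so the cut-off has to be tuned on the data at hand.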
The whole data distribution shows an obvious long-tail distribution because the different economic benefits of different seafoods determine the different breed quantities.\n\n{\\bf The distribution of instance sizes}: Figure \\ref{sum}(a) shows an instance size distribution of DUO. \\emph{Percent of image size} represents the ratio of object area to image area, and \\emph{Percent of instance} represents the ratio of the corresponding number of objects to the total number of objects. Because of these small creatures and high-resolution images, the vast majority of objects occupy 0.3\\% to 1.5\\% of the image area.\n\n{\\bf The instance number per image}: Figure \\ref{sum}(b) illustrates the number of categories per image for DUO. \\emph{Number of instances} represents the number of objects one image has, and \\emph{ Percentage of images} represents the ratio of the corresponding number of images to the total number of images. Most images contain between 5 and 15 instances, with an average of 9.57 instances per image.\n\n{\\bf Summary}:\nIn general, smaller objects are harder to detect. For PASCAL VOC \\cite{Everingham2007The} or COCO \\cite{Belongie2014}, roughly 50\\% of all objects occupy no more than 10\\% of the image itself, and others evenly occupy from 10\\% to 100\\%. \nIn the aspect of instances number per image, COCO contains 7.7 instances per image and VOC contains 3. In comparison, DUO has 9.57 instances per image and most instances less than 1.5\\% of the image size.\nTherefore, DUO contains almost exclusively massive small instances and has the long-tail distribution at the same time, which means it is promising to design a detector to deal with massive small objects and stay high efficiency at the same time for underwater robot picking.\n\n\\section{Benchmark}\nBecause the aim of underwater object detection for robot picking is to find {\\bf the high accuracy and efficiency} algorithm, we consider both the accuracy and efficiency evaluations in the benchmark as shown in Table \\ref{ben}.\n\n\\subsection{Evaluation Metrics}\nHere we adopt the standard COCO metrics (mean average precision, \\emph{i.e.,} mAP) for the accuracy evaluation and also provide the mAP of each class due to the long-tail distribution.\n\n{\\bf AP} -- mAP at IoU=0.50:0.05:0.95.\n\n{\\bf AP$_{50}$} -- mAP at IoU=0.50.\n\n{\\bf AP$_{75}$} -- mAP at IoU=0.75. \n\n{\\bf AP$_{S}$} -- {\\bf AP} for small objects of area smaller than 32$^{2}$.\n\n{\\bf AP$_{M}$} -- {\\bf AP} for objects of area between 32$^{2}$ and 96$^{2}$.\n\n{\\bf AP$_{L}$} -- {\\bf AP} for large objects of area bigger than 96$^{2}$.\n\n{\\bf AP$_{Ho}$} -- {\\bf AP} in holothurian.\n\n{\\bf AP$_{Ec}$} -- {\\bf AP} in echinus.\n\n{\\bf AP$_{Sc}$} -- {\\bf AP} in scallop.\n\n{\\bf AP$_{St}$} -- {\\bf AP} in starfish.\n\n\nFor the efficiency evaluation, we provide three metrics:\n\n{\\bf Param.} -- The parameters of a detector.\n\n{\\bf FLOPs} -- Floating-point operations per second.\n\n{\\bf FPS} -- Frames per second.\n\nNotably, {\\bf FLOPs} is calculated under the 512$\\times$512 input image size and {\\bf FPS} is tested on a JETSON AGX XAVIER under MODE$\\_$30W$\\_$ALL. \n\n\\subsection{Standard Training Configuration}\nWe follow a widely used open-source toolbox, \\emph{i.e.,} MMDetection (V2.5.0) to produce up our benchmark. 
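Because DUO adopts the COCO annotation format, the accuracy metrics defined in the Evaluation Metrics subsection can be reproduced with the standard COCO evaluation tools before turning to the training configuration below. The following is only a minimal sketch, not part of the original benchmark scripts; the file names are placeholders, and it assumes detections have been exported in the usual COCO result-JSON format.

\begin{verbatim}
# Sketch: AP, AP50, AP75, AP_S/AP_M/AP_L and per-class AP via pycocotools.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("duo_test.json")                # ground truth (placeholder name)
coco_dt = coco_gt.loadRes("detections.json")   # detections (placeholder name)

ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
ev.evaluate(); ev.accumulate(); ev.summarize()
# ev.stats[0:6] -> AP, AP50, AP75, AP_S, AP_M, AP_L

# Per-class AP (holothurian, echinus, scallop, starfish):
# restrict the evaluation to one category id at a time.
for cat_id in coco_gt.getCatIds():
    name = coco_gt.loadCats(cat_id)[0]["name"]
    ev_c = COCOeval(coco_gt, coco_dt, iouType="bbox")
    ev_c.params.catIds = [cat_id]
    ev_c.evaluate(); ev_c.accumulate(); ev_c.summarize()
    print(name, round(float(ev_c.stats[0]), 3))
\end{verbatim}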
During the training, the standard configurations are as follows:\n\n $\\bullet$ We initialize the backbone models (\\emph{e.g.,} ResNet50) with pre-trained parameters on ImageNet \\cite{Deng2009ImageNet}.\n\n $\\bullet$ We resize each image into 512 $\\times$ 512 pixels both in training and testing. Each image is flipped horizontally with 0.5 probability during training.\n\n $\\bullet$ We normalize RGB channels by subtracting 123.675, 116.28, 103.53 and dividing by 58.395, 57.12, 57.375, respectively.\n\n $\\bullet$ SGD method is adopted to optimize the model. The initial learning rate is set to be 0.005 in a single GTX 1080Ti with batchsize 4 and is decreased by 0.1 at the 8th and 11th epoch, respectively. WarmUp \\cite{2019arXiv190307071L} is also employed in the first 500 iterations. Totally there are 12 training epochs.\n\n $\\bullet$ Testing time augmentation (\\emph{i.e.,} flipping test or multi-scale testing) is not employed.\n\n\n\n\\subsection{Benchmark Analysis}\nTable \\ref{ben} shows the benchmark for the \\emph{SOTA} methods. Multi- and one- stage detectors with three kinds of backbones (\\emph{i.e.,} ResNet18, 50, 101) give a comprehensive assessment on DUO. We also deploy all the methods to AGX to assess efficiency.\n\nIn general, the multi-stage (Cascade R-CNN) detectors have high accuracy and low efficiency, while the one-stage (RetinaNet) detectors have low accuracy and high efficiency. However, due to recent studies \\cite{zhang2019bridging} on the allocation of more reasonable positive and negative samples in training, one-stage detectors (ATSS or GFL) can achieve both high accuracy and high efficiency.\n\n\\begin{table*}[htbp]\n\\renewcommand\\tabcolsep{3.0pt}\n\n\\begin{center}\n\\caption{Benchmark of \\emph{SOTA} detectors (single-model and single-scale results) on DUO. FPS is measured on the same machine with a JETSON AGX XAVIER under the same MMDetection framework, using a batch size of 1 whenever possible. 
R: ResNet.} \n\\label{ben}\n\\begin{tabular}{|l|l|c|c|c|ccc|ccc|cccc|}\n\\hline\nMethod&Backbone&Param.&FLOPs&FPS&AP&AP$_{50}$&AP$_{75}$&AP$_{S}$&AP$_{M}$&AP$_{L}$&AP$_{Ho}$&AP$_{Ec}$&AP$_{Sc}$&AP$_{St}$ \\\\ \n\\hline \n\\emph{multi-stage:} &&&&&&&&&&&&&& \\\\\n\n\\multirow{3}{*}{Faster R-CNN \\cite{Ren2015Faster}}\n&R-18&28.14M&49.75G&5.7&50.1&72.6&57.8&42.9&51.9&48.7&49.1&60.1&31.6&59.7\\\\\n&R-50&41.14M&63.26G&4.7&54.8&75.9&63.1&53.0&56.2&53.8&55.5&62.4&38.7&62.5\\\\\n&R-101&60.13M&82.74G&3.7&53.8&75.4&61.6&39.0&55.2&52.8&54.3&62.0&38.5&60.4\\\\\n\\hline\n\n\\multirow{3}{*}{Cascade R-CNN \\cite{Cai_2019}}\n&R-18&55.93M&77.54G&3.4&52.7&73.4&60.3&\\bf 49.0&54.7&50.9&51.4&62.3&34.9&62.3\\\\\n&R-50&68.94M&91.06G&3.0&55.6&75.5&63.8&44.9&57.4&54.4&56.8&63.6&38.7&63.5\\\\\n&R-101&87.93M&110.53G&2.6&56.0&76.1&63.6&51.2&57.5&54.7&56.2&63.9&41.3&62.6\\\\\n\\hline\n\n\\multirow{3}{*}{Grid R-CNN \\cite{lu2019grid}}\n&R-18&51.24M&163.15G&3.9&51.9&72.1&59.2&40.4&54.2&50.1&50.7&61.8&33.3&61.9\\\\\n&R-50&64.24M&176.67G&3.4&55.9&75.8&64.3&40.9&57.5&54.8&56.7&62.9&39.5&64.4\\\\\n&R-101&83.24M&196.14G&2.8&55.6&75.6&62.9&45.6&57.1&54.5&55.5&62.9&41.0&62.9\\\\\n\\hline\n\n\\multirow{3}{*}{RepPoints \\cite{yang2019reppoints}}\n&R-18&20.11M&\\bf 35.60G&5.6&51.7&76.9&57.8&43.8&54.0&49.7&50.8&63.3&33.6&59.2\\\\\n&R-50&36.60M&48.54G&4.8&56.0&80.2&63.1&40.8&58.5&53.7&56.7&65.7&39.3&62.3\\\\\n&R-101&55.60M&68.02G&3.8&55.4&79.0&62.6&42.2&57.3&53.9&56.0&65.8&39.0&60.9\\\\\n\\hline \n\\hline \n\\emph{one-stage:} &&&&&&&&&&&&&& \\\\\n\\multirow{3}{*}{RetinaNet \\cite{Lin2017Focal}}\n&R-18&19.68M&39.68G&7.1&44.7&66.3&50.7&29.3&47.6&42.5&46.9&54.2&23.9&53.8\\\\\n&R-50&36.17M&52.62G&5.9&49.3&70.3&55.4&36.5&51.9&47.6&54.4&56.6&27.8&58.3\\\\\n&R-101&55.16M&72.10G&4.5&50.4&71.7&57.3&34.6&52.8&49.0&54.6&57.0&33.7&56.3\\\\\n\\hline \n\n\\multirow{3}{*}{FreeAnchor \\cite{2019arXiv190902466Z}}\n&R-18&19.68M&39.68G&6.8&49.0&71.9&55.3&38.6&51.7&46.7&47.2&62.8&28.6&57.6\\\\\n&R-50&36.17M&52.62G&5.8&54.4&76.6&62.5&38.1&55.7&53.4&55.3&65.2&35.3&61.8\\\\\n&R-101&55.16M&72.10G&4.4&54.6&76.9&62.9&36.5&56.5&52.9&54.0&65.1&38.4&60.7\\\\\n\\hline \n\n\\multirow{3}{*}{FoveaBox \\cite{DBLP:journals/corr/abs-1904-03797}}\n&R-18&21.20M&44.75G&6.7&51.6&74.9&57.4&40.0&53.6&49.8&51.0&61.9&34.6&59.1\\\\\n&R-50&37.69M&57.69G&5.5&55.3&77.8&62.3&44.7&57.4&53.4&57.9&64.2&36.4&62.8\\\\\n&R-101&56.68M&77.16G&4.2&54.7&77.3&62.3&37.7&57.1&52.4&55.3&63.6&38.9&60.8\\\\\n\\hline \n\n\\multirow{3}{*}{PAA \\cite{2020arXiv200708103K}}\n&R-18&\\bf 18.94M&38.84G&3.0&52.6&75.3&58.8&41.3&55.1&50.2&49.9&64.6&35.6&60.5\\\\\n&R-50&31.89M&51.55G&2.9&56.8&79.0&63.8&38.9&58.9&54.9&56.5&66.9&39.9&64.0\\\\\n&R-101&50.89M&71.03G&2.4&56.5&78.5&63.7&40.9&58.7&54.5&55.8&66.5&42.0&61.6\\\\\n\\hline \n\n\\multirow{3}{*}{FSAF \\cite{zhu2019feature}}\n&R-18&19.53M&38.88G&\\bf 7.4&49.6&74.3&55.1&43.4&51.8&47.5&45.5&63.5&30.3&58.9\\\\\n&R-50&36.02M&51.82G&6.0&54.9&79.3&62.1&46.2&56.7&53.3&53.7&66.4&36.8&62.5\\\\\n&R-101&55.01M&55.01G&4.5&54.6&78.7&61.9&46.0&57.1&52.2&53.0&66.3&38.2&61.1\\\\\n\\hline \n\n\\multirow{3}{*}{FCOS \\cite{DBLP:journals/corr/abs-1904-01355}}\n&R-18&\\bf 18.94M&38.84G&6.5&48.4&72.8&53.7&30.7&50.9&46.3&46.5&61.5&29.1&56.6\\\\\n&R-50&31.84M&50.34G&5.4&53.0&77.1&59.9&39.7&55.6&50.5&52.3&64.5&35.2&60.0\\\\\n&R-101&50.78M&69.81G&4.2&53.2&77.3&60.1&43.4&55.4&51.2&51.7&64.1&38.5&58.5\\\\\n\\hline \n\n\\multirow{3}{*}{ATSS \\cite{zhang2019bridging}}\n&R-18&\\bf 
18.94M&38.84G&6.0&54.0&76.5&60.9&44.1&56.6&51.4&52.6&65.5&35.8&61.9\\\\\n&R-50&31.89M&51.55G&5.2&58.2&\\bf 80.1&66.5&43.9&60.6&55.9&\\bf 58.6&67.6&41.8&64.6\\\\\n&R-101&50.89M&71.03G&3.8&57.6&79.4&65.3&46.5&60.3&55.0&57.7&67.2&42.6&62.9\\\\\n\\hline \n\n\\multirow{3}{*}{GFL \\cite{li2020generalized}}\n&R-18&19.09M&39.63G&6.3&54.4&75.5&61.9&35.0&57.1&51.8&51.8&66.9&36.5&62.5\\\\\n&R-50&32.04M&52.35G&5.5&\\bf 58.6&79.3&\\bf 66.7&46.5&\\bf 61.6&55.6&\\bf 58.6&\\bf 69.1&41.3&\\bf 65.3\\\\\n&R-101&51.03M&71.82G&4.1&58.3&79.3&65.5&45.1&60.5&\\bf 56.3&57.0&\\bf 69.1&\\bf 43.0&64.0\\\\\n\n\n\\hline \n\\end{tabular}\n\\end{center}\n\\end{table*}\nTherefore, in terms of accuracy, the accuracy difference between the multi- and the one- stage methods in AP is not obvious, and the AP$_{S}$ of different methods is always the lowest among the three size AP. For class AP, AP$_{Sc}$ lags significantly behind the other three classes because it has the smallest number of instances. In terms of efficiency, large parameters and FLOPs result in low FPS on AGX, with a maximum FPS of 7.4, which is hardly deployable on underwater robot. Finally, we also found that ResNet101 was not significantly improved over ResNet50, which means that a very deep network may not be useful for detecting small creatures in underwater scenarios. \n\nConsequently, the design of high accuracy and high efficiency detector is still the main direction in this field and there is still large space to improve the performance.\nIn order to achieve this goal, a shallow backbone with strong multi-scale feature fusion ability can be proposed to extract the discriminant features of small scale aquatic organisms; a specially designed training strategy may overcome the DUO's long-tail distribution, such as a more reasonable positive/negative label sampling mechanism or a class-balanced image allocation strategy within a training batch.\n\n\\section{Conclusion}\nIn this paper, we introduce a dataset (DUO) and a corresponding benchmark to fill in the gaps in the community. DUO contains a variety of underwater scenes and more reasonable annotations. Benchmark includes efficiency and accuracy indicators to conduct a comprehensive evaluation of the \\emph{SOTA} decoders. The two contributions could serve as a reference for academic research and industrial applications, as well as promote community development.", "answers": ["Yes, DUO has 9.57 instances per image while COCO contains 7.7."], "length": 2619, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "dffb383ee06d8413ac40e2c1ec7bde5548ef07fc8f35ad2f"} {"input": "When did the Tevatron Collider Run II start and when is it expected to end?", "context": "\\section{INTRODUCTION}\nThe Tevatron Collider Run II started in March 2002 and is expected\nto continue until the end of this decade. The Tevatron and the \ntwo detectors, CDF and D\\O, have been performing well in 2004,\neach experiment is collecting data at the rate \nof $\\approx$10 pb$^{-1}$ per week.\nThe total luminosity accumulated by August 2004 is $\\approx$500 pb$^{-1}$\nper detector.\nThe rich physics program includes the\nproduction and precision measurement of properties of standard model (SM)\nobjects, as well as searches for phenomena beyond standard model.\nIn this brief review we focus on areas of most interest \nto the lattice community. 
We present\nnew results on the top quark mass\nand their implication for the mass of the SM Higgs boson, \non searches for the SM Higgs boson, on evidence for the $X(3872)$ state, \non searches for pentaquarks, and on $b$ hadron properties.\nAll Run II results presented here are preliminary. \n\n\\section{TOP QUARK MASS}\n\nThe experiments CDF and D\\O\\ published several direct measurements of\nthe top quark pole mass, $\\ensuremath{M_{\\mathrm{top}}}$, \nbased on Run I data (1992-1996).\nThe ``lepton $+$ jets'' channel yields the most precise determination of\n$\\ensuremath{M_{\\mathrm{top}}}$. Recently, the\nD\\O\\ collaboration published a new measurement~\\cite{Mtop1-D0-l+j-new},\nbased on a powerful analysis technique yielding greatly improved precision.\nThe differential probability \nthat the measured variables in any event correspond to the signal\nis calculated as a function of $\\ensuremath{M_{\\mathrm{top}}}$. \nThe maximum in the product of the individual event probabilities \nprovides the best estimate of $\\ensuremath{M_{\\mathrm{top}}}$.\nThe critical differences from previous analyses \nin the lepton $+$ jets decay channel lie in \nthe assignment of more \nweight to events that are well measured or more likely to correspond to \n$t \\bar t$ signal, \nand the handling of the combinations of final-state objects\n(lepton, jets, and imbalance in transverse momentum) \nand their identification with\ntop-quark decay products in an event. \nThe new combined value for the top-quark mass from Run I is \n$\\ensuremath{M_{\\mathrm{top}}} = 178.0\\pm4.3~\\ensuremath{\\mathrm{ Ge\\kern -0.1em V }\\kern -0.2em /c^2 }$.\n\nIn Run II, both collaborations have been exploring several different techniques \nfor $\\ensuremath{M_{\\mathrm{top}}}$\nmeasurements. The best single CDF result comes from a dynamic likelihood method\n(DLM). The method is similar to\nthe technique used in Ref.~\\cite{Mtop1-D0-l+j-new}.\nThe result is $\\ensuremath{M_{\\mathrm{top}}} = 177.8^{+4.5}_{-5.0} (stat) \\pm 6.2 (syst) ~\\ensuremath{\\mathrm{ Ge\\kern -0.1em V }\\kern -0.2em /c^2 }$.\nThe joint likelihood of the selected events is shown in Fig. ~\\ref{fig:cdf_tml}. \nThe Run II goal is a 1\\% uncertainty on $\\ensuremath{M_{\\mathrm{top}}}$. \n\n\n\n\n\\begin{figure}[htb]\n\\vspace*{-5mm}\n\\includegraphics[height=5.8cm,width=8.1cm] {data_22ev_likelihood.eps}\n\\vspace*{-1.2cm}\n\\caption{The joint likelihood of top candidates(CDF).}\n\\label{fig:cdf_tml}\n\\end{figure}\n\n\n\n\n\\section{SEARCH FOR SM HIGGS BOSON}\n\n\nThe constraints on the SM Higgs ($H$) boson mass from\npublished measurements, updated to include the new D\\O\\ top mass\nmeasurement~\\cite{Mtop1-D0-l+j-new}, are\n$M_H = 117 ^{+67}_{-45}~\\ensuremath{\\mathrm{ Ge\\kern -0.1em V }\\kern -0.2em /c^2 }$, $M_H < 251~\\ensuremath{\\mathrm{ Ge\\kern -0.1em V }\\kern -0.2em /c^2 }$ at 95\\% C.L.\nThe new most likely value of $M_H$\nis above the experimentally excluded range,\nand sufficiently low for $H$ to be observed at the Tevatron.\n\n\n\\begin{figure}[htb]\n\\vspace*{-5mm}\n\\includegraphics[height=7.5cm,width=7.8cm] {d0_wbb_fig_3_err.eps}\n\\vspace*{-1.1cm}\n\\caption{Distribution of the dijet\ninvariant mass for $W+2 b$-tagged jets events,\ncompared to the expectation (D\\O). \n}\n\\label{fig:d0_wbb_2tag}\n\\end{figure}\n\n\n\nD\\O\\ has conducted a search for $H$ at $M_H < 140~\\ensuremath{\\mathrm{ Ge\\kern -0.1em V }\\kern -0.2em /c^2 }$ \nin the production channel \n$p \\bar{p} \\rightarrow WH \\rightarrow e \\nu b \\bar{b}$. 
\nThe experimental signature of $WH \\rightarrow e \\nu b \\bar{b}$\nis a final state with \none high $p_T$ electron, two $b$ jets, and\nlarge missing transverse energy resulting from\nthe undetected neutrino.\nThe dominant backgrounds to $WH$ production\nare $W b \\bar{b}$, $t \\bar{t}$ and single-top production.\nThe distribution \nof the dijet mass for events with two $b$-tagged jets is shown in\nFig.~\\ref{fig:d0_wbb_2tag}. \nAlso shown is the expected contribution ($0.06$ events) \nfrom the $b \\bar{b}$ decay of a\nSM Higgs boson with $M_H =$ 115 $\\ensuremath{\\mathrm{ Ge\\kern -0.1em V }\\kern -0.2em /c^2 }$.\nNo events are observed in the dijet mass window of 85--135 $\\ensuremath{\\mathrm{ Ge\\kern -0.1em V }\\kern -0.2em /c^2 }$.\nD\\O\\ sets a limit on the cross section\nfor $\\sigma( p\\bar{p} \\rightarrow WH) \\times B(H \\rightarrow b \\bar{b}) $\nof 9.0 pb at the 95\\% C.L., for a 115 $\\ensuremath{\\mathrm{ Ge\\kern -0.1em V }\\kern -0.2em /c^2 }$ Higgs boson.\nThe results for mass points 105, 125, and 135 $\\ensuremath{\\mathrm{ Ge\\kern -0.1em V }\\kern -0.2em /c^2 }$\n are 11.0, 9.1 and 12.2 pb, respectively.\n\n\n\n\\begin{figure}[htb]\n\\vspace*{-1.2cm}\n\\includegraphics[height=0.33\\textheight,width=8.0cm]{whww_aps04_bw.eps}\n\n\\vspace*{-1.2cm}\n\\caption{95\\% limits on the $H$ production (CDF).}\n\\label{fig:cdf_whww}\n\\end{figure}\n\n\nCDF has done a similar search, allowing either an electron or a muon \nin the final state. Both groups have also searched for $H$ produced in\ngluon-gluon fusion, with subsequent decay to a pair of $W$ bosons.\nThe CDF results for both channels are shown in Fig.~\\ref{fig:cdf_whww}. \n\n\n\n\\section{THE STATE X(3872)}\n\n\n\\begin{figure}[htb]\n\n\\includegraphics[height=8.0cm,width=7.5cm] {X3872cdfPRL1FullM.eps}\n\\vspace*{-1cm}\n\\caption{The $X(3872)$ signal (CDF).}\n\\label{fig:cdf_x}\n\\end{figure}\n\n\n\n\n The existence of the $X(3872)$ state discovered by \nthe Belle Collaboration~\\cite{Belle-X}\n has been confirmed \n in $p \\bar{p}$ collisions by CDF~\\cite{cdf-X} (see Fig.~\\ref{fig:cdf_x})\nand D\\O~\\cite{d0-X}.\n It is still unclear whether this particle is a $c\\bar{c}$ state,\n or a more complex object. When the data are separated according to\nproduction and decay variables, D\\O\\ finds no significant\ndifferences between the $X(3872)$ and\nthe $c \\bar{c}$ state $\\psi(2S)$.\nCDF has analysed the ``lifetime'' distribution of the $X(3872)$ events in order to\nquantify what fraction of this state arises from decay of $B$ hadrons, as opposed to\nthose produced promptly. The authors find that for the selected samples\n28.3$\\pm$1.0$(stat)\\pm$0.7$(syst)$\\% of $\\psi(2S)$ candidates are from $b$ decays,\nwhereas 16.1$\\pm$4.9$(stat)\\pm$2.0$(syst)$\\% of $X$ mesons arise from such decays.\n\n\n\n\n\n\\section{SEARCH FOR PENTAQUARKS}\n\n\n\n\\begin{figure}[htb]\n\n\\includegraphics[height=0.27\\textheight,width=7.6cm] {mpks_1stminbias.eps}\n\\vspace*{-1.2cm}\n\n\\caption{Invariant mass distribution of an identified proton and a $K^0_s$ candidate. (CDF)\n}\n\\label{fig:pqtheta}\n\\end{figure}\n\n\n\n\\begin{figure}[htb]\n\n\\vspace*{-0.9cm}\n\\includegraphics[height=0.25\\textheight,width=8.0cm] {CM_xicst_cc_1.eps}\n\\vspace*{-1.2cm}\n\\caption{Invariant mass distribution of the $(\\Xi^-,\\pi^+)$ system. 
(CDF) \n}\n\\label{fig:pqxi}\n\\end{figure}\n\n\n\\begin{figure}[htb]\n\\vspace*{-0.9cm}\n\n\\includegraphics[height=0.25\\textheight,width=7.6cm] {theta_note_dstp_dedx_pt.eps}\n\\vspace*{-1.2cm}\n\\caption{Mass of the ($D^{*+}\\bar p$) system. The arrow indicates the position of \nthe $\\Theta_c$ state (CDF).}\n\\label{fig:pqthetac}\n\\end{figure}\n\n\n\nFollowing reports of evidence for exotic\nbaryons containing five quarks (pentaquarks), CDF has analysed \nits data for evidence of the following pentaquarks:\n$\\Theta^+$ ($uud\\bar d \\bar s$), doubly strange states \n$\\Xi_{3/2}$, charmed states $\\Theta_c$, and, most recently, \na state $(udus\\bar b)$, dubbed $R^+_s$, through its weak decay to $(J/\\psi, p)$. \nWith its excellent particle indentification and mass resolution,\nCDF has a unique capability to search for pentaquark states.\nThe signals of known states: $\\phi$, $\\Lambda$,\n$\\Lambda(1520)$, $K^*$, $\\Xi$, \ncompare favorably with those provided\nby the authors of the pentaquark evidence.\nThe group finds no evidence for pentaquark states, see Figs \n~\\ref{fig:pqtheta},{\\ref{fig:pqxi},\\ref{fig:pqthetac}.\nThis can be interpreted as an indication that the pentaquark production \nin $p \\bar p$ collisions is heavily suppressed compared to the conventional\nhadron production, or as an evidence against the existence of pentaquarks.\n\n\\clearpage\n\n\\section{RECENT B PHYSICS RESULTS}\n\n\n\\subsection{Spectroscopy}\n\nCDF has measured the mass of $b$ hadrons in exclusive $J/\\psi$ channels.\nThe measurements of the $B_s$ and $\\Lambda_b$ (Fig. \\ref{fig:masslb})\nmasses are the current world's best.\\\\\n\n$m(B^+)$ = 5279.10$\\pm$0.41$(stat)\\pm$0.36$(syst)$,\n\n$m(B^0)$ = 5279.63$\\pm$0.53$(stat)\\pm$0.33$(syst)$,\n\n$m(B_s)$ = 5366.01$\\pm$0.73$(stat)\\pm$0.33$(syst)$,\n\n$m(\\Lambda_b)$ = 5619.7$\\pm$1.2$(stat)\\pm$1.2$(syst)$ MeV/$c^2$.\\\\\n\n\n\\begin{figure}[htb]\n\\vspace*{-1mm}\n\\includegraphics[height=0.30\\textheight,width=7.5cm] {lambdav1c.eps}\n\\vspace*{-1cm}\n\n\\caption{The mass spectrum of $\\Lambda_b$ candidates (CDF).}\n\\label{fig:masslb}\n\\end{figure}\n\n\nD\\O\\ reports the first observation of the excited $B$ mesons \n$B_1$ and $B^*_2$ as two separate states in fully reconstructed\ndecays to $B^{(*)}\\pi$. The mass of $B_1$ is measured to be\n5724$\\pm$4$\\pm$7 MeV/c$^2$, and the mass difference $\\Delta M$ between\n$B^*_2$ and $B_1$ is 23.6$\\pm$7.7$\\pm$3.9 MeV/c$^2$\n(Fig. \\ref{fig:d0_bexc}).\n\nD\\O\\ observes semileptonic $B$ decays to narrow $D^{**}$ states,\nthe orbitally excited states of the $D$ meson\nseen as resonances in the $D^{*+}\\pi^-$ invariant mass spectrum.\nThe $D^*$ mesons are reconstructed through the decay sequence \n$D^{*+} \\rightarrow D^0\\pi^+$, $D^0\\rightarrow K^-\\pi^+$.\nThe invariant mass of oppositely charged $(D^*,\\pi)$ pairs\nis shown in Fig. \\ref{fig:d0_dstst}.\nThe mass peak between 2.4 and 2.5 GeV/$c^2$ can be interpreted as two merged \nnarrow $D^{**}$ states, $D^0_1(2420)$ and $D^0_2(2460)$.\nThe combined branching fraction is \n$ {\\cal B}(B\\rightarrow D^0_1,D^0_2)\\cdot {\\cal B}(D^0_1,D^0_2\\rightarrow D^{*+}\\pi^-)=(0.280\\pm0.021(stat)\\pm0.088(syst)$\\%. The systematic error includes the unknown phase between the\ntwo resonances. 
Work is in progress on extracting the two Breit-Wigner\namplitudes.\n\n\n\\begin{figure}[htb]\n\\vspace*{-2mm}\n\\hspace*{-3mm}\n\\includegraphics[height=0.28\\textheight,width=8.3cm] {B08F02.eps}\n\n\\vspace*{-1cm}\n\\caption{Mass difference $\\Delta M = M(B\\pi)-M(B)$ for exclusive $B$ decays.\nThe background-subtracted signal is a sum of \n$B^*_1 \\rightarrow B^* \\pi$, $B^* \\rightarrow B \\gamma $ (open area)\nand $B^*_2 \\rightarrow B^*\\pi$ $B^*\\rightarrow B \\gamma$ (lower peak in the shaded area)\nand $B^*_2 \\rightarrow B \\pi$ (upper peak in the shaded area) \n(D\\O).}\n\\label{fig:d0_bexc}\n\\end{figure}\n\n\n\\begin{figure}[htb]\n\\includegraphics[height=0.25\\textheight,width=7.5cm] {B05F03.eps}\n\n\\vspace*{-1cm}\n\\caption{The invariant mass distribution of\n$(D^*,\\pi)$ pairs, opposite sign (points) and same-sign (solid histogram).}\n\\label{fig:d0_dstst}\n\\end{figure}\n\n\n\n\n\n\n\\subsection{Lifetimes}\n\n\nCDF and D\\O\\ have measured lifetimes of $b$ hadrons through the exclusively\nreconstructed decays $B^+ \\rightarrow J/\\psi K^+$, $B^0 \\rightarrow J/\\psi K^{*0}$,\n$B_s \\rightarrow J/\\psi \\phi$, \nand $\\Lambda_b \\rightarrow J/\\psi \\Lambda$\n(Fig. \\ref{fig:d0_lbctau}).\nThe latest results are: \\\\\n\n\n\n $\\tau(B^+)$=1.65 $\\pm$ 0.08 $^{+0.096}_{-0.123}$ ps ~(D\\O\\ 2003),\n\n $\\tau(B^+)$=1.662 $\\pm$ 0.033 $\\pm$ 0.008 ps ~(CDF),\n\n $\\tau(B^0_d)$=1.473 $^{+0.052}_{-0.050}$ $\\pm$ 0.023 ps ~(D\\O).\n\n $\\tau(B^0_d)$=1.539 $\\pm$ 0.051 $\\pm$ 0.008 ps ~(CDF),\n\n $\\tau(B^0_s)$=1.444 $^{+0.098}_{-0.090}$ $\\pm$ 0.020 ps ~(D\\O),\n\n $\\tau(B^0_s)$=1.369 $\\pm$ 0.100 $\\pm$ $^{+0.008}_{0.010}$ ps ~(CDF),\n\n\n $\\tau(\\Lambda_b)$=1.221 $^{+0.217}_{-0.179}$ $\\pm$ 0.043 ps ~(D\\O),\n\n\n $\\tau(\\Lambda_b)$=1.25 $\\pm$ 0.26 $\\pm$ 0.10 ps ~(CDF 2003).\\\\\n\n\n\nThe measured lifetimes correspond to the following lifetime ratios:\\\\\n\n$\\tau(B^+)/\\tau(B^0_d)$ = 1.080$\\pm$0.042 ~(CDF),\n \n$\\tau(B^0_s)/\\tau(B^0_d)$ = 0.890$\\pm$0.072 ~(CDF),\n\n$\\tau(B^0_s)/\\tau(B^0_d)$ = 0.980$ ^{+0.075}_{-0.070} \\pm$0.003 ~(D\\O),\n\n$\\tau(\\Lambda_b)/\\tau(B^0_d)$ = 0.874$ ^{+0.169}_{-0.142} \\pm$0.028 ~(D\\O).\\\\\n\n\n\n\\begin{figure}[htb]\n\\includegraphics[height=0.3\\textheight,width=8.2cm] {d0_lbctau_B11F02.eps}\n\\vspace*{-1cm}\n\n\\caption{ Fit projection on $c\\tau$ for the $\\Lambda_b$ candidates. (D\\O)}\n\\label{fig:d0_lbctau}\n\\end{figure}\n\n\nThe $B_s$ lifetime measurements listed above are results of\na single-lifetime fit to data, integrated over the decay angles.\nBecause of the presence of final\nstates common to ${B_s^0}$\\ and its charge conjugate ${\\overline{B}_s^0}$,\nthe two meson states are expected\nto mix in such a way that the two CP eigenstates may have a relatively\nlarge lifetime difference.\nIt is possible to\nseparate the two CP components of ${B_s^0 \\rightarrow J/\\psi \\phi}$\\ and thus to measure the\nlifetime difference by studying the time evolution of the\npolarization states of the vector mesons in the final state.\nCDF has carried out a combined analysis of $B_s$ lifetimes \nand polarization amplitudes. 
The results for the lifetimes of the\nlow mass (CP even) and high mass (CP odd) eigenstates, and the relative \nwidth difference are:\\\\\n\n $\\tau_L = 1.05 ^{+0.16}_{-0.13} \\pm 0.02$ ~ps,\n \n $\\tau_H = 2.07 ^{+0.58}_{-0.46} \\pm 0.03$ ~ps,\n\n $\\Delta \\Gamma /\\overline \\Gamma = 0.65 ^{+0.25}_{-0.33} \\pm 0.01$.\\\\\n\nFigure \\ref{fig:cdf_dg} shows the scan of the likelihood function \nfor $\\Delta \\Gamma /\\overline \\Gamma$.\nPseudoexperiments tossed with $\\Delta \\Gamma /\\overline \\Gamma =0$\nyield the betting odds for observing the above results at\n1/315. For $\\Delta \\Gamma /\\overline \\Gamma = 0.12$ (SM prediction,\nwhich has recently been updated to 0.14$\\pm$0.05~\\cite{dg_un}) the betting odds are\n1/84.\n\n\\begin{figure}[htb]\n\\vspace*{-1mm}\n\\includegraphics[height=0.3\\textheight,width=8.2cm] {cdf_scan-dg-un.eps}\n\n\\vspace*{-1cm}\n\\caption{Scan of the likelihood function \nfor $\\Delta \\Gamma /\\overline \\Gamma$ (CDF).\n}\n\\label{fig:cdf_dg}\n\\end{figure}\n\n\n\n\nD\\O\\ has used a novel technique to measure the lifetime ratio\nof the charged and neutral $B$ mesons, exploiting the large\nsemileptonic sample. $B$ hadrons were reconstructed in the channels\n$B\\rightarrow \\mu^+ \\nu D^*(2010)^-X$, which are dominated by $B^0$ decays, \nand $B\\rightarrow \\mu^+ \\nu D^0X$, which are dominated by $B^+$ decays.\nThe lifetime ratio was\nobtained from the variation of the ratio of the number of events in these two\nprocesses at different decay lengths.\nThe result is \\\\\n\n\n$\\tau(B^+)/\\tau(B^0_d)$ = 1.093$\\pm$0.021$\\pm$0.022. ~(D\\O)\n\n\n\n\n\\subsection{Towards $B_s$ mixing}\n\nMeasurement of the $B_s$ oscillation frequency via ${B_s^0}$ -${\\overline{B}_s^0}$ ~mixing\nwill provide an important constraint on the CKM matrix. The oscillation\nfrequency is proportional to the mass difference between the mass eigenstates,\n$\\Delta m_s$, and is related to the CKM matrix through \n$\\Delta m_s \\propto |V_{tb}V_{ts}|$. When combined with the\n$B_d$ mass difference, $\\Delta m_d$ it helps in extraction of $|V_{td}|$,\nand thereby the CP violating phase. \n\nAs a benchmark for future $B_s$ oscillation measurement, both groups\nstudy $B_d$ mixing, gaining an understanding of the different components\nof a $B$ mixing analysis (sample composition, flavor tagging, vertexing,\nasymmetry fitting). For a sample of partially reconstructed decays\n$B\\rightarrow D^*(2010)^+\\mu^-X$, D\\O\\ obtains \n$\\Delta m_d = 0.506 \\pm 0.055 (stat) \\pm 0.049 (syst))$ ps$^{-1}$ and\n$\\Delta m_d = 0.488 \\pm 0.066 (stat) \\pm 0.044 (syst))$ ps$^{-1}$\nwhen employing opposite side muon tagging and the same side tagging,\nrespectively.\n\nThe CDF result for semileptonic channels is\n$\\Delta m_d = 0.536 \\pm 0.037 (stat) \\pm 0.009 (s.c.) 
\\pm 0.015 (syst)$ ps$^{-1}$.\nCDF also reports a result on $B$ oscillations using fully reconstructed\ndecays:\n$\\Delta m_d = 0.526 \\pm 0.056 (stat) \\pm 0.005 (syst))$ ps$^{-1}$.\n\nReconstructing $B_s$ decays into different final states is another\nimportant\n step in the ${B_s^0}$ -${\\overline{B}_s^0}$ ~mixing analysis.\nThanks to the large muon and tracking coverage, D\\O\\ is accumulating\na high statistics sample of semileptonic $B_s$ decays.\nD\\O\\ reconstructs the $B_s \\rightarrow D^+_s \\mu^- X$ decays, with\n$D^+_s \\rightarrow \\phi \\pi^+ $ and\n$D^+_s \\rightarrow K^* K^+ $,\nat a rate of $\\approx$ 40(25) events per pb$^{-1}$, respectively.\nFigure \\ref{fig:d0_bsdsphipi} shows the mass distribution of the\n$D^+_s \\rightarrow \\phi \\pi$ candidates.\n\n\n\\begin{figure}[htb]\n\\vspace*{-5mm}\n\\includegraphics[height=0.3\\textheight,width=8.0cm] {blds-250.eps}\n\\vspace*{-1.2cm}\n\\caption{ $D^+_s \\rightarrow \\phi \\pi^+$ signal. (D\\O)}\n\\label{fig:d0_bsdsphipi}\n\\end{figure}\n\n\n\\begin{figure}[htb]\n\\vspace*{-10mm}\n\\hspace*{-4mm}\n\\includegraphics[height=0.35\\textheight,width=7.9cm] {cdf_Bs-DsPi-PhiPi.eps}\n\n\\vspace*{-1.0cm}\n\\caption{ $B_s \\rightarrow D_s \\pi$, $D_s \\rightarrow \\phi \\pi$ signal. (CDF)}\n\\label{fig:cdf_bsdsphipi}\n\\end{figure}\n\n\nCDF has clean signals for fully hadronic, flavor-specific $B_s$ decays,\nproviding the best sensitivity to $B_s$ oscillations at high\n$\\Delta m_s$. Figure \\ref{fig:cdf_bsdsphipi} shows the signal for\nthe best channel, $B_s \\rightarrow D_s \\pi$, $D_s \\rightarrow \\phi \\pi$.\n\n\\clearpage\n\n\n\\subsection{Rare decays}\n\nThe purely leptonic decays $B_{d,s}^0 \\rightarrow \\mu^+\n\\mu^-$ are flavor-changing neutral current (FCNC) processes.\nIn the standard model, these decays are forbidden at the tree level and\nproceed at a very low rate through higher-order diagrams.\nThe latest SM prediction~\\cite{sm_ref3}\nis ${\\cal B}(B^0_s \\rightarrow \\mu^+ \\mu^-)=(3.42\\pm 0.54)\\times\n10^{-9}$, where the error is dominated by non-perturbative uncertainties. The\nleptonic branching fraction of the $B_d^0$ decay is suppressed by CKM matrix elements $|V_{td}/V_{ts}|^2$\nleading to a predicted SM branching fraction of $(1.00\\pm0.14)\\times 10^{-10}$.\nThe best published experimental bound (Fig.~\\ref{fig:cdf_bsmumu})\n for the branching fraction\nof $B^0_s$ $(B^0_d)$ is presently\n${\\cal B}(B^0_s \\, (B^0_d) \\rightarrow \\mu^+\\mu^-)<7.5\\times 10^{-7}\\, \n(1.9\\times 10^{-7})$ at the 95\\% C.L.~\\cite{cdfII}.\nThe decay amplitude of $B^0_{d,s} \\rightarrow \\mu^+ \\mu^-$ can be\nsignificantly enhanced in some extensions of the SM. \n\n\\begin{figure}[htb]\n\\includegraphics[height=8.3cm,width=7.9cm] {cdfbsmumu_results_prl.eps}\n\n\\vspace*{-1cm}\n\\caption{Invariant mass for the events passing all requirements. (CDF)}\n\\label{fig:cdf_bsmumu}\n\\end{figure}\n\n\nAssuming no contributions \nfrom the decay $B^0_d\\rightarrow \\mu^+\\mu^-$ in the signal region,\nD\\O\\ finds the conservative upper limit on the branching fraction \nto be ${\\cal B}(B^0_s \\rightarrow \\mu^+ \\mu^-) \\leq 4.6\\times 10^{-7}$ \nat the 95\\% C.L. (Fig.~\\ref{fig:d0_bsmumu}).\n\n\n\n\n\n\n\\begin{figure}[htb]\n\\includegraphics[height=5.0cm,width=8.0cm] {B06F03.eps}\n\\vspace*{-1cm}\n\\caption{Invariant mass for the events passing all requirements. 
(D\\O)}\n\\label{fig:d0_bsmumu}\n\\end{figure}\n\n", "answers": ["The Tevatron Collider Run II started in March 2002 and is expected to continue until the end of this decade."], "length": 2431, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "d70ca68d05af951cd8ff2052095597d1b4dca557bfb0b40b"} {"input": "What molecule was the focus of the study?", "context": "\\section{Introduction}\n\nSpectral line surveys have revealed that high-mass star-forming\nregions are rich reservoirs of molecules from simple diatomic species\nto complex and larger molecules (e.g.,\n\\citealt{schilke1997b,hatchell1998b,comito2005,bisschop2007}).\nHowever, there have been rarely studies undertaken to investigate the\nchemical evolution during massive star formation from the earliest\nevolutionary stages, i.e., from High-Mass Starless Cores (HMSCs) and\nHigh-Mass Cores with embedded low- to intermediate-mass protostars\ndestined to become massive stars, via High-Mass Protostellar Objects\n(HMPOs) to the final stars that are able to produce Ultracompact H{\\sc\n ii} regions (UCH{\\sc ii}s, see \\citealt{beuther2006b} for a recent\ndescription of the evolutionary sequence). The first two evolutionary\nstages are found within so-called Infrared Dark Clouds (IRDCs). While\nfor low-mass stars the chemical evolution from early molecular\nfreeze-out to more evolved protostellar cores is well studied (e.g.,\n\\citealt{bergin1997,dutrey1997,pavlyuchenkov2006,joergensen2007}),\nit is far from clear whether similar evolutionary patterns are present\nduring massive star formation.\n\nTo better understand the chemical evolution of high-mass star-forming\nregions we initiated a program to investigate the chemical properties\nfrom IRDCs to UCH{\\sc ii}s from an observational and theoretical\nperspective. We start with single-dish line surveys toward a large\nsample obtaining their basic characteristics, and then perform\ndetailed studies of selected sources using interferometers on smaller\nscales. These observations are accompanied by theoretical modeling of\nthe chemical processes. Long-term goals are the chemical\ncharacterization of the evolutionary sequence in massive star\nformation, the development of chemical clocks, and the identification\nof molecules as astrophysical tools to study the physical processes\nduring different evolutionary stages. Here, we present an initial\nstudy of the reactive radical ethynyl (C$_2$H) combining single-dish\nand interferometer observations with chemical modeling. Although\nC$_2$H was previously observed in low-mass cores and Photon Dominated\nRegions (e.g., \\citealt{millar1984,jansen1995}), so far it was not\nsystematically investigated in the framework of high-mass star\nformation.\n\n\\section{Observations}\n\\label{obs}\n\nThe 21 massive star-forming regions were observed with the Atacama\nPathfinder Experiment (APEX) in the 875\\,$\\mu$m window in fall 2006.\nWe observed 1\\,GHz from 338 to 339\\,GHz and 1\\,GHz in the image\nsideband from 349 to 350\\,GHz. The spectral resolution was\n0.1\\,km\\,s$^{-1}$, but we smoothed the data to\n$\\sim$0.9\\,km\\,s$^{-1}$. The average system temperatures were around\n200\\,K, each source had on-source integration times between 5 and 16\nmin. The data were converted to main-beam temperatures with forward\nand beam efficiencies of 0.97 and 0.73, respectively\n\\citep{belloche2006}. The average $1\\sigma$ rms was 0.4\\,K. 
The main\nspectral features of interest are the C$_2$H lines around 349.4\\,GHz\nwith upper level excitation energies $E_u/k$ of 42\\,K (line blends of\nC$_2$H$(4_{5,5}-3_{4,4})$ \\& C$_2$H$(4_{5,4}-3_{4,3})$ at\n349.338\\,GHz, and C$_2$H$(4_{4,4}-3_{3,3})$ \\&\nC$_2$H$(4_{4,3}-3_{3,2})$ at 349.399\\,GHz). The beam size was $\\sim\n18''$.\n\nThe original Submillimeter Array (SMA) C$_2$H data toward the\nHMPO\\,18089-1732 were first presented in \\citet{beuther2005c}. There\nwe used the compact and extended configurations resulting in good\nimages for all spectral lines except of C$_2$H. For this project, we\nre-worked on these data only using the compact configuration. Because\nthe C$_2$H emission is distributed on larger scales (see\n\\S\\ref{results}), we were now able to derive a C$_2$H image. The\nintegration range was from 32 to 35\\,km\\,s$^{-1}$, and the achieved\n$1\\sigma$ rms of the C$_2$H image was 450\\,mJy\\,beam$^{-1}$. For more\ndetails on these observations see \\citet{beuther2005c}.\n\n\\section{Results}\n\\label{results}\n\nThe sources were selected to cover all evolutionary stages from IRDCs\nvia HMPOs to UCH{\\sc ii}s. We derived our target list from the samples\nof \\citet{klein2005,fontani2005,hill2005,beltran2006}. Table\n\\ref{sample} lists the observed sources, their coordinates, distances,\nluminosities and a first order classification into the evolutionary\nsub-groups IRDCs, HMPOs and UCH{\\sc ii}s based on the previously\navailable data. Although this classification is only based on a\nlimited set of data, here we are just interested in general\nevolutionary trends. Hence, the division into the three main classes\nis sufficient.\n\nFigure \\ref{spectra} presents sample spectra toward one source of each\nevolutionary group. While we see several CH$_3$OH lines as well as\nSO$_2$ and H$_2$CS toward some of the HMPOs and UCH{\\sc ii}s but not\ntoward the IRDCs, the surprising result of this comparison is the\npresence of the C$_2$H lines around 349.4\\,GHz toward all source types\nfrom young IRDCs via the HMPOs to evolved UCH{\\sc ii}s. Table\n\\ref{sample} lists the peak brightness temperatures, the integrated\nintensities and the FWHM line-widths of the C$_2$H line blend at\n349.399\\,GHz. The separation of the two lines of 1.375\\,MHz already\ncorresponds to a line-width of 1.2\\,km\\,s$^{-1}$. We have three C$_2$H\nnon-detections (2 IRDCs and 1 HMPO), however, with no clear trend with\nrespect to the distances or the luminosities (the latter comparison is\nonly possible for the HMPOs). While IRDCs are on average colder than\nmore evolved sources, and have lower brightness temperatures, the\nnon-detections are more probable due to the relatively low sensitivity\nof the short observations (\\S\\ref{obs}). Hence, the data indicate\nthat the C$_2$H lines are detected independent of the evolutionary\nstage of the sources in contrast to the situation with other\nmolecules. When comparing the line-widths between the different\nsub-groups, one finds only a marginal difference between the IRDCs and\nthe HMPOs (the average $\\Delta v$ of the two groups are 2.8 and\n3.1\\,km\\,s$^{-1}$). However, the UCH{\\sc ii}s exhibit significantly\nbroader line-widths with an average value of 5.5\\,km\\,s$^{-1}$.\n\nIntrigued by this finding, we wanted to understand the C$_2$H spatial\nstructure during the different evolutionary stages. 
Therefore, we\nwent back to a dataset obtained with the Submillimeter Array toward\nthe hypercompact H{\\sc ii} region IRAS\\,18089-1732 with a much higher\nspatial resolution of $\\sim 1''$ \\citep{beuther2005c}. Albeit this\nhypercompact H{\\sc ii} region belongs to the class of HMPOs, it is\nalready in a relatively evolved stage and has formed a hot core with a\nrich molecular spectrum. \\citet{beuther2005c} showed the spectral\ndetection of the C$_2$H lines toward this source, but they did not\npresent any spatially resolved images. To recover large-scale\nstructure, we restricted the data to those from the compact SMA\nconfiguration (\\S\\ref{obs}). With this refinement, we were able to\nproduce a spatially resolved C$_2$H map of the line blend at\n349.338\\,GHz with an angular resolution of $2.9''\\times 1.4''$\n(corresponding to an average linear resolution of 7700\\,AU at the\ngiven distance of 3.6\\,kpc). Figure \\ref{18089} presents the\nintegrated C$_2$H emission with a contour overlay of the 860\\,$\\mu$m\ncontinuum source outlining the position of the massive protostar. In\ncontrast to almost all other molecular lines that peak along with the\ndust continuum \\citep{beuther2005c}, the C$_2$H emission surrounds the\ncontinuum peak in a shell-like fashion.\n\n\\section{Discussion and Conclusions}\n\nTo understand the observations, we conducted a simple chemical\nmodeling of massive star-forming regions. A 1D cloud model with a mass\nof 1200\\,M$_\\sun$, an outer radius of 0.36\\,pc and a power-law density\nprofile ($\\rho\\propto r^p$ with $p=-1.5$) is the initially assumed\nconfiguration. Three cases are studied: (1) a cold isothermal cloud\nwith $T=10$\\,K, (2) $T=50$\\,K, and (3) a warm model with a temperature\nprofile $T\\propto r^q$ with $q=-0.4$ and a temperature at the outer\nradius of 44\\,K. The cloud is illuminated by the interstellar UV\nradiation field (IRSF, \\citealt{draine1978}) and by cosmic ray\nparticles (CRP). The ISRF attenuation by single-sized $0.1\\mu$m\nsilicate grains at a given radius is calculated in a plane-parallel\ngeometry following \\citet{vandishoeck1988}. The CRP ionization rate is\nassumed to be $1.3\\times 10^{-17}$~s$^{-1}$ \\citep{spitzer1968}. The\ngas-grain chemical model by \\citet{vasyunin2008} with the desorption\nenergies and surface reactions from \\citet{garrod2006} is used.\nGas-phase reaction rates are taken from RATE\\,06 \\citep{woodall2007},\ninitial abundances, were adopted from the ``low metal'' set of\n\\citet{lee1998}.\n\nFigure \\ref{model} presents the C$_2$H abundances for the three models\nat two different time steps: (a) 100\\,yr, and (b) in a more evolved\nstage after $5\\times10^4$\\,yr. The C$_2$H abundance is high toward the\ncore center right from the beginning of the evolution, similar to\nprevious models (e.g., \\citealt{millar1985,herbst1986,turner1999}).\nDuring the evolution, the C$_2$H abundance stays approximately\nconstant at the outer core edges, whereas it decreases by more than\nthree orders of magnitude in the center, except for the cold $T=10$~K\nmodel. The C$_2$H abundance profiles for all three models show\nsimilar behavior.\n\nThe chemical evolution of ethynyl is determined by relative removal\nrates of carbon and oxygen atoms or ions into molecules like CO, OH,\nH$_2$O. Light ionized hydrocarbons CH$^+_{\\rm n}$ (n=2..5) are quickly\nformed by radiative association of C$^+$ with H$_2$ and hydrogen\naddition reactions: C$^+$ $\\rightarrow$ CH$_2^+$ $\\rightarrow$\nCH$_3^+$ $\\rightarrow$ CH$_5^+$. 
The protonated methane reacts with\nelectrons, CO, C, OH, and more complex species at later stage and\nforms methane. The CH$_4$ molecules undergo reactive collisions with\nC$^+$, producing C$_2$H$_2^+$ and C$_2$H$_3^+$. An alternative way to\nproduce C$_2$H$_2^+$ is the dissociative recombination of CH$_5^+$\ninto CH$_3$ followed by reactions with C$^+$. Finally, C$_2$H$_2^+$\nand C$_2$H$_3^+$ dissociatively recombine into CH, C$_2$H, and\nC$_2$H$_2$. The major removal for C$_2$H is either the direct\nneutral-neutral reaction with O that forms CO, or the same reaction\nbut with heavier carbon chain ions that are formed from C$_2$H by\nsubsequent insertion of carbon. At later times, depletion and\ngas-phase reactions with more complex species may enter into this\ncycle. At the cloud edge the interstellar UV radiation\ninstantaneously dissociates CO despite its self-shielding,\nre-enriching the gas with elemental carbon.\n\nThe transformation of C$_2$H into CO and other species proceeds\nefficiently in dense regions, in particular in the ``warm'' model\nwhere endothermic reactions result in rich molecular complexity of the\ngas (see Fig.~\\ref{model}). In contrast, in the ``cold'' 10\\,K model\ngas-grain interactions and surface reactions become important. As a\nresult, a large fraction of oxygen is locked in water ice that is hard\nto desorb ($E_{\\rm des} \\sim 5500$~K), while half of the elemental\ncarbon goes to volatile methane ice ($E_{\\rm des} \\sim 1300$~K). Upon\nCRP heating of dust grains, this leads to much higher gas-phase\nabundance of C$_2$H in the cloud core for the cold model compared to\nthe warm model. The effect is not that strong for less dense regions\nat larger radii from the center.\n\nSince the C$_2$H emission is anti-correlated with the dust continuum\nemission in the case of IRAS\\,18089-1732 (Fig.\\,\\ref{18089}), we do\nnot have the H$_2$ column densities to quantitatively compare the\nabundance profiles of IRAS\\,18089-1732 with our model. However, data\nand model allow a qualitative comparison of the spatial structures.\nEstimating an exact evolutionary time for IRAS\\,18089-1732 is hardly\npossible, but based on the strong molecular line emission, its high\ncentral gas temperatures and the observed outflow-disk system\n\\citep{beuther2004a,beuther2004b,beuther2005c}, an approximate age of\n$5\\times10^4$\\,yr appears reasonable. Although dynamical and chemical\ntimes are not necessarily exactly the same, in high-mass star\nformation they should not differ to much: Following the models by\n\\citet{mckee2003} or \\citet{krumholz2006b}, the luminosity rises\nstrongly right from the onset of collapse which can be considered as a\nstarting point for the chemical evolution. At the same time disks and\noutflows evolve, which should hence have similar time-scales. The\ndiameter of the shell-like C$_2$H structure in IRAS\\,18089-1732 is\n$\\sim 5''$ (Fig.\\,\\ref{18089}), or $\\sim$9000\\,AU in radius at the\ngiven distance of 3.6\\,kpc. This value is well matched by the modeled\nregion with decreased C$_2$H abundance (Fig.\\,\\ref{model}). Although\nin principle optical depths and/or excitation effects could mimic the\nC$_2$H morphology, we consider this as unlikely because the other\nobserved molecules with many different transitions all peak toward the\ncentral submm continuum emission in IRAS\\,18089-1732\n\\citep{beuther2005c}. 
Since C$_2$H is the only exception in that rich\ndataset, chemical effects appear the more plausible explanation.\n\nThe fact that we see C$_2$H at the earliest and the later evolutionary\nstages can be explained by the reactive nature of C$_2$H: it is\nproduced quickly early on and gets replenished at the core edges by\nthe UV photodissociation of CO. The inner ``chemical'' hole observed\ntoward IRAS\\,18089-1732 can be explained by C$_2$H being consumed in\nthe chemical network forming CO and more complex molecules like larger\ncarbon-hydrogen complexes and/or depletion.\n\nThe data show that C$_2$H is not suited to investigate the central gas\ncores in more evolved sources, however, our analysis indicates that\nC$_2$H may be a suitable tracer of the earliest stages of (massive)\nstar formation, like N$_2$H$^+$ or NH$_3$ (e.g.,\n\\citealt{bergin2002,tafalla2004,beuther2005a,pillai2006}). While a\nspatial analysis of the line emission will give insights into the\nkinematics of the gas and also the evolutionary stage from chemical\nmodels, multiple C$_2$H lines will even allow a temperature\ncharacterization. With its lowest $J=1-0$ transitions around 87\\,GHz,\nC$_2$H has easily accessible spectral lines in several bands between\nthe 3\\,mm and 850\\,$\\mu$m. Furthermore, even the 349\\,GHz lines\npresented here have still relatively low upper level excitation\nenergies ($E_u/k\\sim42$\\,K), hence allowing to study cold cores even\nat sub-millimeter wavelengths. This prediction can further be proved\nvia high spectral and spatial resolution observations of different\nC$_2$H lines toward young IRDCs.\n\n\\acknowledgments{H.B. acknowledges financial support\n by the Emmy-Noether-Programm of the Deutsche Forschungsgemeinschaft\n (DFG, grant BE2578). }\n\n\n", "answers": ["The focus of the study was on the reactive radical ethynyl (C$_2$H)."], "length": 2115, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "b619fa683cbbeb8db3d575fc2e261940701a530452b14eef"} {"input": "What were the vaccines trialed against?", "context": "A special tribute to Del Bigtree (pictured) and his team at ICAN for his stunning 88 page letter to the HHS regarding vaccine safety. As Del reported - in the latest edition of Highwire - the letter, in response to an earlier reply from the then acting Director National Vaccine Program Office, Melinda Wharton, took virtually a year to compile, and is a meticulous piece of research. Most sensationally they researched the HHS claim through US government archives that at least some pediatric vaccines had been trialed against genuine placebo, and came to a negative conclusion. Not only that, they established that none of the vaccines those vaccines had been trialed against had ever been trialed against genuine placebo either. At the end of the line the toxic products were only being compared with other toxic products, rather than against saline.\nLeave aside the sceptics, for any believer in the vaccine program as a necessary intervention in public health, this should be a devastating finding. Fundamentally, the research into the safety of any of the products before marketing was simply not there. The manufacturers apparently had no faith that their proto-products could withstand this scrutiny, and for the rest they just did not care: under the alleged imperative of protecting the population it seems anything went. 
So even before all the sham monitoring procedures and reviews which Del and his team dismantle in forensic detail we are left with the proposition that none of the present products being given to US children – and frequently other children across most of the developed world – have any meaningful pre-marketing safety data all. If you are believer in the program you have been let down: if you wanted a program with any pretensions to safety - supposing such a thing to be possible - it looks like you would have to start from scratch. The manufacturers did this: the governments, the politicians and the regulators (internationally) let it happen.\nThis damning document is published simultaneously with a demand in the UK from the Royal Society for Public Health (which I had never heard of) to shut down comment about vaccines on the web. It echoes calls from Seth Berkley of GAVI, Heidi Larson of the Vaccine Confidence Project and the European Parliament. The pamphlet airily dismisses concerns that vaccines have side effects or that you could possibly have too many. It is pure public relations, and if the RSPH claims to be \"independent\" it also admits that the publication was paid for by Merck, a detail which was reported by British Medical Journal and the Guardian, but not true to form by the BBC. We have, in truth, been building to this moment for two decades: as the evidence piles up that every single aspect of the program lacks integrity or is simply rotten to the core all the perpetrators can do is call for the silencing of their critics, and maintain the products are safe because they say so.\nPlease help give the ICAN letter the widest possible distribution, particularly to politicians.\n\"The outcome of disease always depends both on the virulence of the pathogen and the health of the individual immune system.\"\nNope. This makes no sense. Lots of people who seemed vibrant will get a very severe case of the same illness that a vulnerable baby overcomes in a day.\nAnd under the germ theory it doesn't matter how strong your immune system *was*. Once it's been overcome by the pathogen it is every bit as weak as anybody else's with that pathogen.\nWhat you say makes no sense. There's no reason for me to reply to you again.\n\"Why do you think that within a few years (not many) of the introduction of the vaccines for them, pertussis, measles, mumps, rubella, tetanus, diphtheria, Hib disease, and chickenpox (and others) almost entirely disappeared?\"\nWhy do you keep asking this question when I've already provided the answer hundreds of times? Why are you so desperate to believe the people who you already recognize are harming our children?\nWhy would Walter Reed be any more trustworthy than Paul Offit or Senator Pan? Why would Jenner or Pasteur?\nAnd you went no way to explaining my arguments against germ theory. If we are attacked by billions of viruses every day then if even a tiny fraction of them are pathogenic then we couldn't possibly survive. And even if we could, we would already be immune rendering every vaccine pointless. Once we had survived our first few days on earth, then we could never get sick again.\nIf that's wrong then we must conclude that precisely 0% of germs are pathogenic.\nPlus your comment about the immune system completely misunderstood my point. The immune system does not allow us to overcome our math problem. 
In fact, it makes it worse.\nYou did provide one solitary example of a patient with what are presumably yellow fever symptoms but you didn't say whether they had been given any toxic medical treatments.\nAnd like I said before, the whole \"incubation period\" is more than a little suspicious. Clearly they never found what they thought they would and just rigged the results to tell them what they want to hear.\nLike every other germ theorist/vaccine promoter in history.\nMany kinds of bacteria are constantly evolving and changing, like flu viruses. Others are more stable over time, like the yellow fever virus. Those that change develop new ways of infiltrating the cells of the organism being attacked (from our point of view, from its unconscious point of view, it's just carrying out its need to replicate, which it can only do inside the cells of its host). The changes which allow it to better infiltrate are more successful and result in more viruses with those traits.\nOur immune system is designed to detect and destroy potentially dangerous invading pathogens. Many bacteria are usually harmless and absolutely necessary. The minority are dangerous, and most people's immune systems do a good job of analyzing them and killing them, often with no signs of disease. Others experience a clinical infection, and the immune system usually mounts a successful attack on them.\nThe outcome of disease always depends both on the virulence of the pathogen and the health of the individual immune system. Vaccines are usually effective in giving immunity to the targeted diseases. They also have many dangers which everyone should be aware of, and vaccines should be avoided whenever possible. But in the case of the most dangerous diseases, everyone should learn about them and think about what he wants to do to protect himself and his children from them, considering all the factors involved. And no one can have 100% certainty that he has made the right decision, but that's life. But if you live in the Congo and many people around you are currently dying of yellow fever, then that means that you yourself are at risk of being bitten by a loaded mosquito and getting, often dying, of yellow fever. The yellow fever vaccine is very effective at preventing yellow fever. From there, each person must make a choice.\nAt the end of this stage there is a remission of two or three days. About 80% of those with clinical disease recover at this point, with permanent immunity. The other 20% enter the toxic stage, with a return of the fever, black vomit (coffee-ground emesis), diarrhea, a slowing of the pulse (Faget's sign), jaundice, yellow eyes, yellow skin, and failure of the kidneys, liver, and heart. The patient gets a strange hiccup (like with Ebola, a related disease), falls into a coma, and dies. About half of those patients who enter the toxic stage dies, even now, even with the best of hospital care. The Faget's sign can also occur at the end of the first stage.\nYou asked specifically about the symptoms of the Americans on Dr. Reed's team who got yellow fever in Cuba in 1900. I'll give the passage from The American Plague (162-5), which describes the course of Jesse Lazear's illness. \"In his logbook, Lazear wrote an unusual entry on September 13. In all cases before those, page after page of records, Lazear had used the soldier's name and simply the date he was bitten, with no other attention to the mosquito. A one-line entry with a name and a date. 
On that day, however, in his elegant hand, Lazear did not write the soldier's name, but instead wrote 'Guinea Pig No. 1.' He went on to write that this guinea pig had been bitten by a mosquito that developed from an egg laid by a mosquito that developed from an egg laid by a mosquito that fed on a number of yellow fever cases: Suarez, Hernández, De Long, Ferández. It was a precise, detailed history that proved beyond doubt that the mosquito was loaded with the virus when it bit a healthy soldier...(If he had entered his name, then his death would have been considered medical suicide by the insurance company, and his wife and two children would not have gotten any payment.) For the next few days, Lazear's life continued much as it had over the last few months in Cuba. He fed and cared for the mosquitoes in the lab. ..Then he began to lose his appetite. He skipped a few meals in the mess hall. He didn't mention it to anyone, nor did he ask to see one of the yellow fever doctors; instead, he worked hard in the lab trying to ignore the oncoming headache.\n\"On September 18, he complained of feeling 'out of sorts,' and stayed in his officer's quarters. His head pounded and L. decided to write a letter. ..(he wrote to his mother, and referred to his one-year old son Houston and the baby his wife Mabel was about to have: they were staying with his mother in the US). ..That night, L. started to feel chilled as the fever came on. He never went to sleep but worked at his desk all through the night, trying to get all the information about the mosquitoes organized. By morning, he showed all the signs of a severe attack of yellow fever. The camp doctors made the diagnosis, and L. agreed to go to the yellow fever ward. ..L. was carried by litter out of the two-room, white pine board house in which he had lived since he and Mabel first arrived in Cuba. ..(In the yellow fever ward, in a separate one-room building), Lena Warner (the immune nurse who had survived the yellow fever in 1878, when she was nine, and was found in her boarded-up house by a former slave who first thought she was dead, and carried her to safety) nursed J.L., recording his vitals. (I put up a link to his case record and vital signs last week. The surgeon general required that this record be made for every yellow fever patient.)... (On September 25,) Lena Warner braced L's arms with all of her weight, shouting for help. Still he bolted from the bed, darting around the small frame-wood room as wildly as a trapped insect beating against glass. Two soldiers ran into the ward, pinning L to his bed, tying restraints around his wrists and elbows. ..Warner sponged his body with iced whiskey and water. She recorded his temperature, which had held at 104 degrees for days, on the chart beside his bed. ..(Warner watched him sleep.) But the quiet did not last. L's body began to lurch, and black vomit rolled from his mouth; through the bar hanging above his hospital cot. He writhed in the bed, and his skin grew deep yellow. His 104 temperature slowly fell, leveling out 99 degrees, and JL died at 8:45 p.m. at the age of thirty-four.\"\nAs is obvious, there are many problems with vaccines. But, that being said, most of them usually work for a period of time to prevent the targeted diseases. The basic science behind vaccines is correct. 
Why do you think that within a few years (not many) of the introduction of the vaccines for them, pertussis, measles, mumps, rubella, tetanus, diphtheria, Hib disease, and chickenpox (and others) almost entirely disappeared? In the case of the routine childhood diseases, this was a bad thing, but it is a true thing.\nVaccines usually don't cause any obvious reactions. While they usually prevent the diseases, and that's why people continue to get them. With the increasing vaccination schedule, more and more are severely and permanently damaged, and it is immoral to mandate any vaccine for anyone for this reason. But it would also be immoral to prohibit vaccines for those who want them enough to take the risk.\nYour article said as though it had any probative value that 90% of those who get pertussis had been vaxxed. The old DPT vaccine was MUCH more effective at preventing pertussis, but it was so dangerous (again, not to most, but to many), that developed countries replaced it with the acellular version, DTaP. From the beginning about twenty years ago, it was clear that it was not very effective and that huge numbers of vaxxed people got pertussis anyway, including my daughter who got pertussis at eight month old after having gotten three DTaPs. The pertussis vaccine continues to be very dangerous, and I do not recommend that anyone get it. It used to be a killer disease, but evolved to become much milder, to the extent that the disease is very rarely dangerous (usually only to newborns under three months old), while the vaccine is very dangerous. And they're trying to see how they can go back to the old DPT. This does not show that vaccine science has collapsed, but rather that the vaccine they developed to replace the DPT turned out to be much less effective than they first thought, while continuing to be much more dangerous than they first thought.\nYour article extrapolated from that that modern medical science in general has collapsed, but that, again, is going too far. A older woman in Mexico City who is like my mother to me had a pacemaker inserted about two months ago to aid her failing heart, and it has restored her to optimism and energy, when she was despondent, weak, and close to death. I took my daughter to the dentist yesterday, who said she has three wisdom teeth coming in and that she said that the lower right one was sore. So, although I am cautious about X-rays, I made an appointment for a panoramic X-ray in a month to assess the wisdom teeth, and, if it seems appropriate, I'll take her to an oral surgeon to have one or more extracted under IV sedation, in his office, if possible (the dentist thought that it would be). And I am confident that there will be no serious problems, but this is thanks to technology and training in modern medicine that haven't been available for that long.\nI think that everyone should inform himself on all medical procedures before agreeing to anything, but I also think that he should have access to any medical procedure which is reasonable (and opinions can differ as to that).\nOne problem is that you have not said how you think people should protect themselves against tetanus, bacterial meningitis, and yellow fever in the relevant cases, for example. 
These are diseases which healthy, well-nourished people used to die from very readily.\nIf most people stopped vaxxing and the mortality from these diseases rose to something like pre-vaccine levels, do you think they should just accept dying from them?\nI put that in a separate paragraph because it is the crucial issue.\nbalinaheuchter Air Traffic Control You Tube - Colin Campbell example of - How to \"Fudge a Nudge\" -\"Deal\" or \"No Deal\" \"Not in a month of Sundays\" \"No exceptions/no compromise?\" -make a trade off -do an exception- everyone get's a good deal /good outcome!\nHans, you are right that we are looking at one of the biggest crimes in all history. When I read the story of that poor girl who was so healthy and is now confined to a wheelchair after getting her third Gardasil shot I could not believe that Merck could produce such a toxic vaccine and give it out to girls like it was something they absolutely had to have only to be mislead and made into cripples. Merck should be prosecuted for the damage they have done to so many girls who got the Gardasil vaccine and were physically debilitated for life. There is a place for the people who perpetrated this crime on young girls and women and it is called hell. They have destroyed people's lives and gotten away with it. My heart goes out to those who have suffered this damage for no damn good reason except to help make huge profits for Merck!\nHere is the reason that the germ theory is nonsense.\n1) Everyday we are bombarded with billions of germs. Presumably at least some of them are of the kind that germ theorists believe are dangerous (otherwise we would have to conclude that none of them are dangerous). So how do we survive?\n2) Let's just say that we ignore 1 and imagine that, by way of magic, none of the billions of viruses we get bombarded with are pathogenic but all those that are are tucked away somewhere. Ok. But presumably they reside in sick people right? So where are there lots of sick people? Doctor offices and hospitals! So everybody must be dying the moment they enter these places right?\n3) I love this one because I have never seen anybody else ever raise it. Under the germ theory there are no negative feedbacks. This makes a stable biological system by definition impossible. The immune system is *not* a negative feedback it is the opposite. It actually reinforces our math problem because the immune system will weaken as the number of pathogens increase.\nThere is no way of resolving this problem without a discontinuity. A Deus ex Machina as The Almighty Pill so beautifully put it. So the germ theory is quite literally, mathematically impossible.\nThere is as much chance of it being true as 2+2 = 5.\nThere are plenty of other massive problems with germ theory such as why did things like SARS and bird flu magically disappear? Why do we have the symptoms that we do? Is our body controlling the symptoms to help fight the germs and if so, why would suppressing the symptoms with antibiotics or Tamiflu be considered a good idea? 
If the virus is causing the symptoms then why would it cause these kinds of things?", "answers": ["Other toxic products."], "length": 3141, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "74334862b5d8e2a02deb3c24aa90d5339e443fbc412453b8"} {"input": "What types of data did the authors use in their experiments?", "context": "\\section{Introduction}\n\\label{sec:introduction}\n\nProbabilistic models have proven to be very useful in a lot of applications in signal processing where signal estimation is needed \\cite{rabiner1989tutorial,arulampalam2002tutorial,ji2008bayesian}. Some of their advantages are that 1) they force the designer to specify all the assumptions of the model, 2) they provide a clear separation between the model and the algorithm used to solve it, and 3) they usually provide some measure of uncertainty about the estimation.\n\nOn the other hand, adaptive filtering is a standard approach in estimation problems when the input is received as a stream of data that is potentially non-stationary. This approach is widely understood and applied to several problems such as echo cancellation \\cite{gilloire1992adaptive}, noise cancellation \\cite{nelson1991active}, and channel equalization \\cite{falconer2002frequency}.\n\nAlthough these two approaches share some underlying relations, there are very few connections in the literature. The first important attempt in the signal processing community to relate these two fields was the connection between a linear Gaussian state-space model (i.e. Kalman filter) and the RLS filter, by Sayed and Kailath \\cite{sayed1994state} and then by Haykin \\emph{et al.} \\cite{haykin1997adaptive}. The RLS adaptive filtering algorithm emerges naturally when one defines a particular state-space model (SSM) and then performs exact inference in that model. This approach was later exploited in \\cite{van2012kernel} to design a kernel RLS algorithm based on Gaussian processes.\n\nA first attempt to approximate the LMS filter from a probabilistic perspective was presented in \\cite{park2014probabilistic}, focusing on a kernel-based implementation. The algorithm of \\cite{park2014probabilistic} makes use of a Maximum a Posteriori (MAP) estimate as an approximation for the predictive step. However, this approximation does not preserve the estimate of the uncertainty in each step, therefore degrading the performance of the algorithm.\n\nIn this work, we provide a similar connection between state-space models and least-mean-squares (LMS). Our approach is based on approximating the posterior distribution with an isotropic Gaussian distribution. We show how the computation of this approximated posterior leads to a linear-complexity algorithm, comparable to the standard LMS. Similar approaches have already been developed for a variety of problems such as channel equalization using recurrent RBF neural networks \\cite{cid1994recurrent}, or Bayesian forecasting \\cite{harrison1999bayesian}. Here, we show the usefulness of this probabilistic approach for adaptive filtering.\n\nThe probabilistic perspective we adopt throughout this work presents two main advantages. Firstly, a novel LMS algorithm with adaptable step size emerges naturally with this approach, making it suitable for both stationary and non-stationary environments. The proposed algorithm has less free parameters than previous LMS algorithms with variable step size \\cite{kwong1992variable,aboulnasr1997robust,shin2004variable}, and its parameters are easier to be tuned w.r.t. 
these algorithms and standard LMS. Secondly, the use of a probabilistic model provides us with an estimate of the error variance, which is useful in many applications.\n\nExperiments with simulated and real data show the advantages of the presented approach with respect to previous works. However, we remark that the main contribution of this paper is that it opens the door to introduce more Bayesian machine learning techniques, such as variational inference and Monte Carlo sampling methods \\cite{barber2012bayesian}, to adaptive filtering.\\\\\n\n\n\\section{Probabilistic Model}\n\nThroughout this work, we assume the observation model to be linear-Gaussian with the following distribution,\n\n\\begin{equation}\np(y_k|{\\bf w}_k) = \\mathcal{N}(y_k;{\\bf x}_k^T {\\bf w}_k , \\sigma_n^2),\n\\label{eq:mess_eq}\n\\end{equation}\nwhere $\\sigma_n^2$ is the variance of the observation noise, ${\\bf x}_k$ is the regression vector and ${\\bf w}_k$ is the parameter vector to be sequentially estimated, both $M$-dimensional column vectors.\n\n\nIn a non-stationary scenario, ${\\bf w}_k$ follows a dynamic process. In particular, we consider a diffusion process (random-walk model) with variance $\\sigma_d^2$ for this parameter vector:\n\n\n\\begin{equation}\np({\\bf w}_k|{\\bf w}_{k-1})= \\mathcal{N}({\\bf w}_k;{\\bf w}_{k-1}, \\sigma_d^2 {\\bf I}),\n\\label{eq:trans_eq}\n\\end{equation}\nwhere $\\bf I$ denotes the identity matrix. In order to initiate the recursion, we assume the following prior distribution on ${\\bf w}_k$\n\n\\begin{equation}\np({\\bf w}_0)= \\mathcal{N}({\\bf w}_0;0, \\sigma_d^2{\\bf I}).\\nonumber\n\\end{equation}\n\n\\section{Exact inference in this model: Revisiting the RLS filter}\n\nGiven the described probabilistic SSM, we would like to infer the posterior probability distribution $p({\\bf w}_k|y_{1:k})$.\nSince all involved distributions are Gaussian, one can perform exact inference, leveraging the probability rules in a straightforward manner. The resulting probability distribution is\n\\begin{equation}\np({\\bf w}_k|y_{1:k}) = \\mathcal{N}({\\bf w}_k;{\\bf\\boldsymbol\\mu}_{k}, \\boldsymbol\\Sigma_{k}), \\nonumber\n\\end{equation}\nin which the mean vector ${\\bf\\boldsymbol\\mu}_{k}$ is given by\n\\begin{equation}\n{\\bf\\boldsymbol\\mu}_k = {\\bf\\boldsymbol\\mu}_{k-1} + {\\bf K}_k (y_k - {\\bf x}_k^T {\\bf\\boldsymbol\\mu}_{k-1}){\\bf x}_k, \\nonumber\n\\end{equation}\nwhere we have introduced the auxiliary variable\n\\begin{equation}\n{\\bf K}_k = \\frac{ \\left(\\boldsymbol\\Sigma_{k-1} + \\sigma_d^2 {\\bf I}\\right)}{{\\bf x}_k^T \\left(\\boldsymbol\\Sigma_{k-1} + \\sigma_d^2 {\\bf I}\\right) {\\bf x}_k + \\sigma_n^2}, \\nonumber\n\\end{equation}\nand the covariance matrix $\\boldsymbol\\Sigma_k$ is obtained as\n\\begin{equation}\n\\boldsymbol\\Sigma_k = \\left( {\\bf I} - {\\bf K}_k{\\bf x}_k {\\bf x}_k^T \\right) ( \\boldsymbol\\Sigma_{k-1} +\\sigma_d^2), \\nonumber\n\\end{equation}\nNote that the mode of $p({\\bf w}_k|y_{1:k})$, i.e. the maximum-a-posteriori estimate (MAP), coincides with the RLS adaptive rule\n\\begin{equation}\n{{\\bf w}}_k^{(RLS)} = {{\\bf w}}_{k-1}^{(RLS)} + {\\bf K}_k (y_k - {\\bf x}_k^T {{\\bf w}}_{k-1}^{(RLS)}){\\bf x}_k .\n\\label{eq:prob_rls}\n\\end{equation}\nThis rule is similar to the one introduced in \\cite{haykin1997adaptive}.\n\nFinally, note that the covariance matrix $\\boldsymbol\\Sigma_k$ is a measure of the uncertainty of the estimate ${\\bf w}_k$ conditioned on the observed data $y_{1:k}$. 
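To make the exact-inference recursion of \eqref{eq:prob_rls} concrete, the following sketch implements one update of the posterior mean and covariance for the linear-Gaussian model. It is only an illustrative reading of the equations above (the function and variable names are ours, not from the paper), assuming the noise variances $\sigma_n^2$ and $\sigma_d^2$ are known.

```python
import numpy as np

def prob_rls_step(mu, Sigma, x, y, sigma_n2, sigma_d2):
    """One exact-inference (Kalman/RLS-like) update for the model
    y_k = x_k^T w_k + noise,  w_k = w_{k-1} + diffusion.
    mu, Sigma: posterior mean (M,) and covariance (M, M) at step k-1.
    x, y: regressor (M,) and scalar observation at step k."""
    M = len(mu)
    P = Sigma + sigma_d2 * np.eye(M)        # predictive covariance
    denom = x @ P @ x + sigma_n2
    K = P / denom                           # gain matrix K_k
    mu_new = mu + (K @ x) * (y - x @ mu)    # MAP / RLS-like mean update
    Sigma_new = (np.eye(M) - np.outer(K @ x, x)) @ P
    return mu_new, Sigma_new
```

Iterating this update over the data stream reproduces the behaviour of \eqref{eq:prob_rls}, at a cost of $O(M^2)$ operations per step because the full covariance matrix is propagated.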
Nevertheless, for many applications a single scalar summarizing the variance of the estimate could prove to be sufficiently useful. In the next section, we show how such a scalar is obtained naturally when $p({\\bf w}_k|y_{1:k})$ is approximated with an isotropic Gaussian distribution. We also show that this approximation leads to an LMS-like estimation.\n \n\n\n\\section{Approximating the posterior distribution: LMS filter }\n\nThe proposed approach consists in approximating the posterior distribution $p({\\bf w}_k|y_{1:k})$, in general a multivariate Gaussian distribution with a full covariance matrix, by an isotropic spherical Gaussian distribution \n\n\\begin{equation}\n\\label{eq:aprox_post}\n\\hat{p}({\\bf w}_{k}|y_{1:k})=\\mathcal{N}({\\bf w}_{k};{\\bf \\hat{\\boldsymbol\\mu}}_{k}, \\hat{\\sigma}_{k}^2 {\\bf I} ).\n\\end{equation}\n\nIn order to estimate the mean and covariance of the approximate distribution $\\hat{p}({\\bf w}_{k}|y_{1:k})$, we propose to select those that minimize the Kullback-Leibler divergence with respect to the original distribution, i.e., \n\n\\begin{equation}\n\\{\\hat{\\boldsymbol\\mu}_k,\\hat{\\sigma}_k\\}=\\arg \\displaystyle{ \\min_{\\hat{\\boldsymbol\\mu}_k,\\hat{\\sigma}_k}} \\{ D_{KL}\\left(p({\\bf w}_{k}|y_{1:k}))\\| \\hat{p}({\\bf w}_{k}|y_{1:k})\\right) \\}. \\nonumber\n\\end{equation}\n\nThe derivation of the corresponding minimization problem can be found in Appendix A. In particular, the optimal mean and the covariance are found as\n\\begin{equation}\n{\\hat{\\boldsymbol\\mu}}_{k} = {\\boldsymbol\\mu}_{k};~~~~~~ \\hat{\\sigma}_{k}^2 = \\frac{{\\sf Tr}\\{ \\boldsymbol\\Sigma_k\\} }{M}.\n\\label{eq:sigma_hat}\n\\end{equation}\n\n\nWe now show that by using \\eqref{eq:aprox_post} in the recursive predictive and filtering expressions we obtain an LMS-like adaptive rule. First, let us assume that we have an approximate posterior distribution at $k-1$, $\\hat{p}({\\bf w}_{k-1}|y_{1:k-1}) = \\mathcal{N}({\\bf w}_{k-1};\\hat{\\bf\\boldsymbol\\mu}_{k-1}, \\hat{\\sigma}_{k-1}^2 {\\bf I} )$. Since all involved distributions are Gaussian, the predictive distribution\nis obtained as %\n\\begin{eqnarray}\n\\hat{p}({\\bf w}_k|y_{1:k-1}) &=& \\int p({\\bf w}_k|{\\bf w}_{k-1}) \\hat{p}({\\bf w}_{k-1}|y_{1:k-1}) d{\\bf w}_{k-1} \\nonumber\\\\\n&=& \\mathcal{N}({\\bf w}_k;{\\bf\\boldsymbol\\mu}_{k|k-1}, \\boldsymbol\\Sigma_{k|k-1}), \n\\label{eq:approx_pred}\n\\end{eqnarray}\nwhere the mean vector and covariance matrix are given by\n\\begin{eqnarray}\n\\hat{\\bf\\boldsymbol\\mu}_{k|k-1} &=& \\hat{\\bf\\boldsymbol\\mu}_{k-1} \\nonumber \\\\\n\\hat{\\boldsymbol\\Sigma}_{k|k-1} &=& (\\hat{\\sigma}_{k-1}^2 + \\sigma_d^2 ){\\bf I}\\nonumber.\n\\end{eqnarray}\n\nFrom \\eqref{eq:approx_pred}, the posterior distribution at time $k$ can be computed using Bayes' Theorem and standard Gaussian manipulations (see for instance \\cite[Ch. 4]{murphy2012machine}). 
Then, we approximate the posterior $p({\\bf w}_k|y_{1:k})$ with an isotropic Gaussian,\n\\begin{equation}\n\\hat{p}({\\bf w}_k|y_{1:k}) = \\mathcal{N}({\\bf w}_k ; {\\hat{\\boldsymbol\\mu}}_{k}, \\hat{\\sigma}_k^2 {\\bf I} ),\\nonumber\n\\end{equation}\nwhere \n\\begin{eqnarray}\n{\\hat{\\boldsymbol\\mu}}_{k} &= & {\\hat{\\boldsymbol\\mu}}_{k-1}+ \\frac{ (\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) }{(\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) \\|{\\bf x}_k\\|^2 + \\sigma_n^2} (y_k - {\\bf x}_k^T {\\hat{\\boldsymbol\\mu}}_{k-1}){\\bf x}_k \\nonumber \\\\\n&=& {\\hat{\\boldsymbol\\mu}}_{k-1}+ \\eta_k (y_k - {\\bf x}_k^T {\\hat{\\boldsymbol\\mu}}_{k-1}){\\bf x}_k . \n\\label{eq:prob_lms}\n\\end{eqnarray}\nNote that, instead of a gain matrix ${\\bf K}_k$ as in Eq.~\\eqref{eq:prob_rls}, we now have a scalar gain $\\eta_k$ that operates as a variable step size.\n\n\nFinally, to obtain the posterior variance, which is our measure of uncertainty, we apply \\eqref{eq:sigma_hat} and the trick ${\\sf Tr}\\{{\\bf x}_k{\\bf x}_k^T\\}= {\\bf x}_k^T{\\bf x}_k= \\|{\\bf x}_k \\|^2$,\n\n\\begin{eqnarray}\n\\hat{\\sigma}_k^2 &=& \\frac{{\\sf Tr}(\\boldsymbol\\Sigma_k)}{M} \\\\\n&=& \\frac{1}{M}{\\sf Tr}\\left\\{ \\left( {\\bf I} - \\eta_k {\\bf x}_k {\\bf x}_k^T \\right) (\\hat{\\sigma}_{k-1}^2 +\\sigma_d^2)\\right\\} \\\\\n&=& \\left(1 - \\frac{\\eta_k \\|{\\bf x}_k\\|^2}{M}\\right)(\\hat{\\sigma}_{k-1}^2 +\\sigma_d^2).\n\\label{eq:sig_k}\n\\end{eqnarray}\nIf MAP estimation is performed, we obtain an adaptable step-size LMS estimation\n\n\\begin{equation}\n{\\bf w}_{k}^{(LMS)} = {\\bf w}_{k-1}^{(LMS)} + \\eta_k (y_k - {\\bf x}_k^T {\\bf w}_{k-1}^{(LMS)}){\\bf x}_k, \t\n\\label{eq:lms}\n\\end{equation}\nwith\n\\begin{equation}\n\\eta_k = \\frac{ (\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) }{(\\hat{\\sigma}_{k-1}^2+ \\sigma_d^2) \\|{\\bf x}_k\\|^2 + \\sigma_n^2}.\\nonumber\n\\end{equation}\nAt this point, several interesting remarks can be made:\n\n\\begin{itemize}\n\n\\item The adaptive rule \\eqref{eq:lms} has linear complexity since it does not require us to compute the full matrix $\\boldsymbol\\Sigma_k$.\n\n\\item For a stationary model, we have $\\sigma_d^2=0$ in \\eqref{eq:prob_lms} and \\eqref{eq:sig_k}. In this case, the algorithm remains valid and both the step size and the error variance, $\\hat{\\sigma}_{k}$, vanish over time $k$. \n\n\\item Finally, the proposed adaptable step-size LMS has only two parameters, $\\sigma_d^2$ and $\\sigma_n^2$, (and only one, $\\sigma_n^2$, in stationary scenarios) in contrast to other variable step-size algorithms \\cite{kwong1992variable,aboulnasr1997robust,shin2004variable}. More interestingly, both $\\sigma_d^2$ and $\\sigma_n^2$ have a clear underlying physical meaning, and they can be estimated in many cases. We will comment more about this in the next section. \n\\end{itemize}\n\n\n\n\\section{Experiments}\n\\label{sec:experiments}\n\nWe evaluate the performance of the proposed algorithm in both stationary and tracking experiments. In the first experiment, we estimate a fixed vector ${\\bf w}^{o}$ of dimension $M=50$. The entries of the vector are independently and uniformly chosen in the range $[-1,1]$. Then, the vector is normalized so that $\\|{\\bf w}^o\\|=1$. Regressors $\\boldsymbol{x}_{k}$ are zero-mean Gaussian vectors with identity covariance matrix. The additive noise variance is such that the SNR is $20$ dB. 
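Before turning to the comparison, a minimal sketch of the adaptable step-size recursion \eqref{eq:lms} together with the uncertainty update \eqref{eq:sig_k} may help. The routine below is our own illustrative implementation, not the authors' code, and the toy run only loosely mimics the stationary setup just described ($M=50$, 20 dB SNR); in particular the initial uncertainty $\hat{\sigma}_0^2$ is a value we chose for the example.

```python
import numpy as np

def prob_lms(X, y, sigma_n2, sigma_d2, sigma0_2):
    """Adaptable step-size LMS from the isotropic-Gaussian approximation.
    X: (N, M) regressors, y: (N,) observations.
    Returns the weight trajectory and the per-step uncertainty sigma_k^2."""
    N, M = X.shape
    w = np.zeros(M)
    s2 = sigma0_2                                   # \hat{sigma}_k^2, initial value
    W, S2 = np.zeros((N, M)), np.zeros(N)
    for k in range(N):
        x = X[k]
        p2 = s2 + sigma_d2                          # predictive variance
        eta = p2 / (p2 * (x @ x) + sigma_n2)        # step size eta_k
        w = w + eta * (y[k] - x @ w) * x            # LMS-like update
        s2 = (1.0 - eta * (x @ x) / M) * p2         # uncertainty update
        W[k], S2[k] = w, s2
    return W, S2

# Toy run loosely mimicking the stationary experiment (sigma_d2 = 0):
rng = np.random.default_rng(0)
M, N = 50, 2000
w_true = rng.uniform(-1, 1, M)
w_true /= np.linalg.norm(w_true)
X = rng.standard_normal((N, M))
sigma_n2 = 10 ** (-20 / 10)                         # 20 dB SNR for unit-power signal
y = X @ w_true + np.sqrt(sigma_n2) * rng.standard_normal(N)
W, S2 = prob_lms(X, y, sigma_n2=sigma_n2, sigma_d2=0.0, sigma0_2=1.0)
print("final MSD:", np.sum((W[-1] - w_true) ** 2))
```

Note that with `sigma_d2 = 0` both `eta` and `s2` shrink over time, matching the remark above that the step size and the error variance vanish in the stationary case.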
We compare our algorithm with standard RLS and three other LMS-based algorithms: LMS, NLMS \\cite{sayed2008adaptive}, VSS-LMS \\cite{shin2004variable}.\\footnote{The used parameters for each algorithm are: for RLS $\\lambda=1$, $\\epsilon^{-1}=0.01$; for LMS $\\mu=0.01$; for NLMS $\\mu=0.5$; and for VSS-LMS $\\mu_{max}=1$, $\\alpha=0.95$, $C=1e-4$.} The probabilistic LMS algorithm in \\cite{park2014probabilistic} is not simulated because it is not suitable for stationary environments.\n\nIn stationary environments, the proposed algorithm has only one parameter, $\\sigma^2_n$. We simulate both the scenario where we have perfectly knowledge of the amount of noise (probLMS1) and the case where the value $\\sigma^2_n$ is $100$ times smaller than the actual value (probLMS2). The Mean-Square Deviation (${\\sf MSD} = {\\mathbb E} \\| {\\bf w}_0 - {\\bf w}_k \\|^2$), averaged out over $50$ independent simulations, is presented in Fig. \\ref{fig:msd_statationary}.\n\n\n\n\\begin{figure}[htb]\n\\centering\n\\begin{minipage}[b]{\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=\\textwidth]{results_stationary_MSD}}\n\\end{minipage}\n\\caption{Performance in terms of MSD of probabilistic LMS with both optimal (probLMS1) and suboptimal (probLMS2) compared to LMS, NLMS, VS-LMS, and RLS.}\n\\label{fig:msd_statationary}\n\\end{figure}\n\nThe performance of probabilistic LMS is close to RLS (obviously at a much lower computational cost) and largely outperforms previous variable step-size LMS algorithms proposed in the literature. Note that, when the model is stationary, i.e. $\\sigma^2_d=0$ in \\eqref{eq:trans_eq}, both the uncertainty $\\hat{\\sigma}^2_k$, and the adaptive step size $\\eta_k$, vanish over time. This implies that the error tends to zero when $k$ goes to infinity. Fig. \\ref{fig:msd_statationary} also shows that the proposed approach is not very sensitive to a bad choice of its only parameter, as demonstrated by the good results of probLMS2, which uses a $\\sigma^2_n$ that is $100$ times smaller than the optimal value. \n\n\n\\begin{figure}[htb]\n\\centering\n\\begin{minipage}[b]{\\linewidth}\n \\centering\n \\centerline{\\includegraphics[width=\\textwidth]{fig2_final}}\n\\end{minipage}\n\\caption{Real part of one coefficient of the measured and estimated channel in experiment two. The shaded area represents two standard deviations from the prediction {(the mean of the posterior distribution)}.}\n\\label{fig_2}\n\\end{figure}\n\n\n\\begin{table}[ht]\n\\begin{footnotesize}\n\\setlength{\\tabcolsep}{2pt}\n\\def1.5mm{1.5mm}\n\\begin{center}\n\\begin{tabular}{|l@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|c@{\\hspace{1.5mm}}|}\n\\hline\nMethod & LMS & NLMS & LMS-2013 & VSSNLMS & probLMS & RLS \\\\\n\\hline\n\\hline\nMSD (dB) &-28.45 &-21.07 &-14.36 &-26.90 &-28.36 &-25.97\\\\\n\\hline \n\\end{tabular}\n\\end{center}\n\\caption{Steady-state MSD of the different algorithms for the tracking of a real MISO channel.}\n\\label{tab:table_MSD}\n\\end{footnotesize}\n\n\\end{table}\n\\newpage\nIn a second experiment, we test the tracking capabilities of the proposed algorithm with {real} data of a wireless MISO channel acquired in a realistic indoor scenario. More details on the setup can be found in \\cite{gutierrez2011frequency}. Fig. \\ref{fig_2} shows the real part of one of the channels, and the estimate of the proposed algorithm. The shaded area represents the estimated uncertainty for each prediction, i.e. 
$\\hat{\\mu}_k\\pm2\\hat{\\sigma}_k$. Since the experimental setup does not allow us to obtain the optimal values for the parameters, we fix these parameters to their values that optimize the steady-state mean square deviation (MSD). \\hbox{Table \\ref{tab:table_MSD}} shows this steady-state MSD of the estimate of the MISO channel with different methods. As can be seen, the best tracking performance is obtained by standard LMS and the proposed method. \n\n\n\n\n\n\\section{Conclusions and Opened Extensions}\n\\label{sec:conclusions}\n\n{We have presented a probabilistic interpretation of the least-mean-square filter. The resulting algorithm is an adaptable step-size LMS that performs well both in stationary and tracking scenarios. Moreover, it has fewer free parameters than previous approaches and these parameters have a clear physical meaning. Finally, as stated in the introduction, one of the advantages of having a probabilistic model is that it is easily extensible:}\n\n\\begin{itemize}\n\\item If, instead of using an isotropic Gaussian distribution in the approximation, we used a Gaussian with diagonal covariance matrix, we would obtain a similar algorithm with different step sizes and measures of uncertainty, for each component of ${\\bf w}_k$. Although this model can be more descriptive, it needs more parameters to be tuned, and the parallelism with LMS vanishes.\n\\item Similarly, if we substitute the transition model of \\eqref{eq:trans_eq} by an Ornstein-Uhlenbeck process, \n\n\\begin{equation}\np({\\bf w}_k|{\\bf w}_{k-1})= \\mathcal{N}({\\bf w}_k;\\lambda {\\bf w}_{k-1}, \\sigma_d^2), \\nonumber\n\\label{eq:trans_eq_lambda}\n\\end{equation}\na similar algorithm is obtained but with a forgetting factor $\\lambda$ multiplying ${\\bf w}_{k-1}^{(LMS)}$ in \\eqref{eq:lms}. This algorithm may have improved performance under such a kind of autoregresive dynamics of ${\\bf w}_{k}$, though, again, the connection with standard LMS becomes dimmer.\n\n\\item As in \\cite{park2014probabilistic}, the measurement model \\eqref{eq:mess_eq} can be changed to obtain similar adaptive algorithms for classification, ordinal regression, and Dirichlet regression for compositional data. \n\n\\item A similar approximation technique could be applied to more complex dynamical models, i.e. switching dynamical models \\cite{barber2010graphical}. The derivation of efficient adaptive algorithms that explicitly take into account a switch in the dynamics of the parameters of interest is a non-trivial and open problem, though the proposed approach could be useful.\n\n\\item Finally, like standard LMS, this algorithm can be kernelized for its application in estimation under non-linear scenarios.\n\n\\end{itemize}\n\n\n\\begin{appendices}\n\n\\section{KL divergence between a general gaussian distribution and an isotropic gaussian}\n\\label{sec:kl}\n\n We want to approximate $p_{{\\bf x}_1}(x) = \\mathcal{N}({\\bf x}; \\boldsymbol\\mu_1,\\boldsymbol\\Sigma_1)$ by $p_{{\\bf x}_2}({\\bf x}) = \\mathcal{N}({\\bf x}; \\boldsymbol\\mu_2,\\sigma_2^2 {\\bf I})$. 
In order to do so, we have to compute the parameters of $p_{{\\bf x}_2}({\\bf x})$, $\\boldsymbol\\mu_2$ and $\\sigma_2^2$, that minimize the following Kullback-Leibler divergence,\n\n\\begin{eqnarray}\nD_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) &=&\\int_{-\\infty}^{\\infty} p_{{\\bf x}_1}({\\bf x}) \\ln{\\frac{p_{{\\bf x}_1}({\\bf x})}{p_{{\\bf x}_2}({\\bf x})}}d{\\bf x} \\nonumber \\\\\n&= & \\frac{1}{2} \\{ -M + {\\sf Tr}(\\sigma_2^{-2} {\\bf I}\\cdot \\boldsymbol\\Sigma_1^{-1}) \\nonumber \\\\\n & & + (\\boldsymbol\\mu_2 - \\boldsymbol\\mu_1 )^T \\sigma^{-2}_2{\\bf I} (\\boldsymbol\\mu_2 - \\boldsymbol\\mu_1 ) \\nonumber \\\\\n & & + \\ln \\frac{{\\sigma_2^2}^M}{\\det\\boldsymbol\\Sigma_1} \\}. \n\\label{eq:divergence}\n\\end{eqnarray}\nUsing symmetry arguments, we obtain \n\\begin{equation}\n\\boldsymbol\\mu_2^{*} =\\arg \\displaystyle{ \\min_{\\boldsymbol\\mu_2}} \\{ D_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) \\} = \\boldsymbol\\mu_1.\n\\end{equation}\nThen, \\eqref{eq:divergence} gets simplified into \n\n\\begin{eqnarray}\nD_{KL}(p_{{\\bf x}_1}\\| p_{{\\bf x}_2}) = \\frac{1}{2}\\lbrace { -M + {\\sf Tr}(\\frac{\\boldsymbol\\Sigma_1}{\\sigma_2^{2}}) + \\ln \\frac{\\sigma_2^{2M}}{\\det\\boldsymbol\\Sigma_1}}\\rbrace.\n\\end{eqnarray}\nThe variance $\\sigma_2^2$ is computed in order to minimize this Kullback-Leibler divergence as\n\n\\begin{eqnarray}\n\\sigma_2^{2*} &=& \\arg\\min_{\\sigma_2^2} D_{KL}(P_{x_1}\\| P_{x_2}) \\nonumber \\\\\n &=& \\arg\\min_{\\sigma_2^2}\\{ \\sigma_2^{-2}{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\} + M\\ln \\sigma_2^{2} \\} .\n\\end{eqnarray}\nDeriving and making it equal zero leads to\n\n\\begin{equation}\n\\frac{\\partial}{\\partial \\sigma_2^2} \\left[ \\frac{{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\}}{\\sigma_2^{2}} + M \\ln \\sigma_2^{2} \\right] = \\left. {\\frac{M}{\\sigma_2^{2}}-\\frac{{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\}}{(\\sigma_2^{2})^2}}\\right|_{\\sigma_2^{2}=\\sigma_2^{2*}}\\left. =0 \\right. .\n\\nonumber\n\\end{equation}\nFinally, since the divergence has a single extremum in $R_+$,\n\\begin{equation}\n\\sigma_2^{2*} = \\frac{{\\sf Tr}\\{\\boldsymbol\\Sigma_1\\}}{M}.\n\\end{equation}\n\n\n\n\n\\end{appendices}\n\n\\vfill\n\\clearpage\n\n\\bibliographystyle{IEEEbib}\n", "answers": ["The authors used simulated data and real data from a wireless MISO channel."], "length": 2554, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "1f3b1e37c3d2ead7ab4fe7dd9d5cddd55b2c76d28e6bfc86"} {"input": "What is the future direction mentioned in the conclusion?", "context": "Paper Info\n\nTitle: Is In-hospital Meta-information Useful for Abstractive Discharge Summary Generation?\nPublish Date: 10 Mar 2023\nAuthor List: Mamoru Komachi (from Tokyo Metropolitan University), Takashi Okumura (from Kitami Institute of Technology), Hiromasa Horiguchi (from National Hospital Organization), Yuji Matsumoto\n\nFigure\n\nFig. 1.Example of part of a discharge summary which is a dummy we created.\nFig. 2. 
Overview of our proposed method.A new feature embedding layer encoding hospital, physician, disease, and length of stay is added to the standard transformer architecture.The figure shows an example of hospital embedding.\nStatistics of our data for experiment.\nof summarization models with different meta-information.The best results are highlighted in bold.Each score is the average of three models with different seeds.The BS and BR indicate BERTScore and BLEURT, respectively.\nStatistics on the number of cases handled by physicians.C/P denotes Cases/Physician, which indicates how many cases an individual physician has.Method of Grouping Physician IDs A most naive method of mapping physician IDs to features is without any grouping process.The data contains 4,846 physicians, so |M | was set to 4,846.However it caused our model's training to be unstable.This might be due to the many physician IDs appearing for the first time in the test time.Table\n\nabstract\n\nDuring the patient's hospitalization, the physician must record daily observations of the patient and summarize them into a brief document called \"discharge summary\" when the patient is discharged. Automated generation of discharge summary can greatly relieve the physicians' burden, and has been addressed recently in the research community.\nMost previous studies of discharge summary generation using the sequenceto-sequence architecture focus on only inpatient notes for input. However, electric health records (EHR) also have rich structured metadata (e.g., hospital, physician, disease, length of stay, etc.) that might be useful. This paper investigates the effectiveness of medical meta-information for summarization tasks.\nWe obtain four types of meta-information from the EHR systems and encode each meta-information into a sequence-to-sequence model. Using Japanese EHRs, meta-information encoded models increased ROUGE-1 by up to 4.45 points and BERTScore by 3.77 points over the vanilla Longformer. Also, we found that the encoded meta-information improves the precisions of its related terms in the outputs.\nOur results showed the benefit of the use of medical meta-information.\n\nINTRODUCTION\n\nClinical notes are written daily by physicians from their consults and are used for their own decision-making or coordination of treatment. They contain a large amount of important data for machine learning, such as conditions, laboratory tests, diagnoses, procedures, and treatments. While invaluable to physicians and researchers, the paperwork is burdensome for physicians , .\nDischarge summaries, a subset of these, also play a crucial role in patient care, and are used to share information between hospitals and physicians (see an example in Figure ). It is created by the physician as a summary of notes during hospitalization at the time of the patient's discharge, which is known to be very time-consuming.\nResearchers have begun to apply automatic summarization techniques to address this problem - . Previous studies used extractive or abstractive summarization methods, but most of them focused on only progress notes for inputs. Properly summarizing an admission of a patient is a quite complex task, and requires various meta-information such as the patient's age, gender, vital signs, laboratory values and background to specific diseases.\nTherefore, discharge summary generation needs more medical meta-information, than similar but narrower tasks such as radiology report generation. 
However, what kind of meta-information is important for summarization has not been investigated, even though it is critical not only for future research on medical summarization but also for the policy of data collection infrastructure.\nIn this paper, we first reveal the effects of meta-information on neural abstractive summarization on admissions. Our model is based on an encoder-decoder transformer with an additional feature embedding layer in the encoder (Figure ). Hospital, physician, disease, and length of stay are used as meta-information, and each feature is embedded in the vector space.\nFor experiments, we collect progress notes, discharge summaries and coded information from the electronic health record system, which are managed by a largest multi-hospital organization in Japan. Our main contributions are as follows: • We found that a transformer encoding meta-information generates higher quality summaries than the vanilla one, and clarified the benefit of using meta-information for medical summarization tasks.\n• We found that a model encoding disease information can produce proper disease and symptom words following the source. In addition, we found that the model using physician and hospital information can generate symbols that are commonly written in the summary. • We are the first to apply the abstractive summarization method to generate Japanese discharge summaries.\nIn the studies of summarization of medical documents, it is common to retrieve key information such as disease, examination result, or medication from EHRs - . Other researchs more similar to our study targeted to help physicians get the point of medical documents quickly by generating a few key sentences - .\nStudies generating contextualized summaries can be categorized by the type of model inputs and architectures. Some studies produced a whole discharge summary using structured data for input - The sensitivity of the gram stain for bacterial meningitis is about 60%, and the sensitivity of the culture is not high either.\nAlso, the glucose in the cerebrospinal fluid would have been slightly lower. Although no definitive diagnosis could be made, bacterial meningitis was the most suspicious disease. The causative organism was assumed to be MRSA, and vancomycin and meropenem (meningitis dose) were used to cover a wide range of enteric bacteria.\na whole discharge summary from free-form inpatient records - . The free-form data is more challenging since it is noisier than structured data. In inputting of the free-form data, extractive summarization methods, which extract sentences from the source, are commonly used , - . On the other hands, an encoder-decoder model was used for abstractive summarization , , with a limited number of studies.\nThe various issues in the abstractive generation of discharge summary would be studied in the future. Studies using medical meta-information have long been conducted on a lot of tasks - . In abstractive summarization on discharge summary, developed a model incorporating similarity of progress notes and information of the record author.\nThey presented an idea of integrating meta-information into the abstractive summarization model on medical documents, but did not reveal how meta-information would affect the quality of the summaries. Our method is based on the encoder-decoder transformer model. 
The transformer model is known for its high performance and has been widely used in recent studies, thus it is suitable for our purpose.\nAs shown in Figure , the standard input to a transformer's encoder is created by a token sequence T = [t 0 , t 1 , ..., t i ] and position sequence P = [p 0 , p 1 , ..., p i ], where i is the maximum input length. The token and position sequences are converted into token embeddings E T and positional embeddings E P by looking up the vocabulary tables.\nThe sum of E T and E P is input into the model. In this paper, we attempt to encode meta-information to feature embeddings. We follow the segment embeddings of BERT and the language embeddings of XLM , which provide additional information to the model. It is not a new idea but is suitable for our validation.\nOur method is formulated as follows: Let M be feature type, M ∈ {Vanilla, Hospital, Physician, Disease, Length of stay}, since we set five types of features. Feature embeddings E M is created by looking up the feature table where m j is featue value (e.g., pysician ID, disease code, etc.) and |M | is the maximum number of differences in a feature.\nIn our study, |M | is set to four different values depending on features. Specifically, they are as follows. a) Hospital: As shown in Table , the data includes five hospital records. They were obtained mechanically from the EHR system. b) Physician: Physicians are also managed by IDs in the EHR systems. We hashed the physician IDs into 485 groups containing 10 people each.\nSpecifically, as a naive strategy, we shuffled and listed the cases within each hospital, and hashed them into groups in the order of appearance of the physician IDs. So each group has the information about the relevance of the hospitals. The reason for employing a grouping strategy is described in Appendix A.\nc) Disease: Two types of disease information exist in our EHRs: disease names and disease codes called ICD-10 . We did not use any disease names in the inputs for our experiment. Instead, we encoded diseases with the first three letters of the ICD-10 code, because they represent well the higher level concept.\nThe initial three letters of the ICD-10 codes are arranged in the order of an alphabetic letter, a digit, and a digit, so there are a total of 2,600 ways to encode a disease. In our data, some ICD-10 codes were missing, although all disease names were systematically obtained from the EHR system. For such cases, we converted the disease names into ICD-10 codes using MeCab with the J-MeDic (MANBYO 201905) dictionary.\nAlso, diseases can be divided into primary and secondary diseases, but we only deal with the primary diseases. d) Length of stay: The length of stay can be obtained mechanically from the EHR system and the maximum value was set to 1,000 days. We set |M | for vanilla, hospital, physician, disease, and length of stay to 1, 5, 485, 2,600, and 1,000, respectively .\nThe vanilla embedding is prepared for the baseline in our experiment and to equalize the total number of parameters with the other models. The input to our model is the sum of E T , E P and E M . We also prepare an extra model with all features for our experiments. This takes all four feature embeddings (hospital, physician, disease, and length of stay) added to the encoder.\n\nDatasets and Metrics\n\nWe evaluated our proposed method on a subset of data from National Hospital Organization (NHO), the largest multiinstitutional organization in Japan. 
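As a sketch of how the raw meta-information described in (a)-(d) above can be turned into the integer feature values used for embedding lookup, the helpers below give one possible reading of that description; the function names and the order-of-appearance grouping of physician IDs are our assumptions, not released code.

```python
def disease_feature(icd10_code: str) -> str:
    """Use only the first three characters of the ICD-10 code (e.g. 'J189' -> 'J18')."""
    return icd10_code[:3].upper()

def length_of_stay_feature(days: int, max_days: int = 1000) -> int:
    """Cap the length of stay at the maximum value used in the paper."""
    return min(days, max_days)

def group_physicians(physician_ids, group_size: int = 10):
    """Hash physician IDs into groups of about 10, in order of appearance
    within a hospital (one naive realisation of the grouping strategy above)."""
    groups, seen = {}, []
    for pid in physician_ids:
        if pid not in groups:
            seen.append(pid)
            groups[pid] = (len(seen) - 1) // group_size
    return groups
```

In the paper these values index feature tables of size 5, 485, 2,600 and 1,000 for hospital, physician group, disease and length of stay, respectively.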
The statistics of our data are shown in Table , which includes 24,630 cases collected from five hospitals. Each case includes a discharge summary and progress notes for the days of stay.\nThe data are randomly split into 22,630, 1,000, and 1,000 for train, validation, and test, respectively. Summarization performances are reported in ROUGE-1, ROUGE-2, ROUGE-L and BERTScore in terms of F1. In addition, we also employed BLEURT , which models human judgment.\n\nArchitectures and Hyperparameters\n\nDue to our hardware constraints we need a model that is computationally efficient, so we employed the Longformer instead of the conventional transformer. Longformer can . In our model, number of layers, window size, dilation, input sequence length, output sequence length, batch size, learning rate and number of warmup steps are 8, 256, 1, 1024, 256, 4, 3e-5 and 1K, respectively.\nOther hyperparameters are the same as in the original Longformer, except for the maximum number of epochs is not fixed and the best epoch. It is selected for each training using the validation data based on ROUGE-1. Also, the original Longformer imports pretrained-BART parameters to initial values, but we do not use pre-trained Japanese BART in this study.\nWe used three GeForce RTX 2080 TI for our experiments. Our vocabulary for preparing input to Longformer is taken from UTH-BERT , which is pre-trained on the Japanese clinical records. Since the vocabulary of UTH-BERT is trained by WordPiece , we also tokenize our data with WordPiece. However, the vocabulary does not include white space and line breaks, which cannot be handled, so we add those two tokens to the vocabulary, resulting in a total size of 25,002.\nThe vocabulary has all tokens in full characters, so we normalized full-wdith characters by converting all alphanumeric and symbolic characters to half-width for byte fallback. , we found that all the models with encoded medical meta-information perform better in ROUGE-1, ROUGE-L and BLEURT than the vanilla Longformer.\nHowever, in BERTScore, only hospital and disease models outperform the vanilla. Specifically, disease information is most effective, improving ROUGE-1, ROUGE-2, ROUGE-L, BERTScore and BLEURT by 4.45, 0.73, 3.12, 3.77 and 0.21 points over the vanilla model, respectively. This seems to be because disease information and the ICD-10 ontology efficiently cluster groups with similar representations.\nIn contrast, in ROUGE-2 and ROUGE-L, the model with physician embedding is inferior to the vanilla model. This seems to be a negative effect of grouping physicians without any consideration of their relevance. It would be better to cluster them by department, physician attributes, similarity of progress notes, etc. Regarding low ROUGE-2 scores in all models, a previous study using the English data set also reported a low ROUGE-2 score of about 5%, which may indicate an inherent difficulty in discharge summary generation.\nIn BERTScore, the models with the physician and the length of stay did not reach the performance of the vanilla model, suggesting that the system's outputs are semantically inferior. The model with all features performed the lowest of all models in BERTScore. 
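To illustrate how a feature embedding can be added to the token and positional embeddings of the encoder input (the sum E_T + E_P + E_M described in the Method section), here is a minimal PyTorch-style sketch. The class name, the hidden size of 768 and the example feature values are our assumptions; the actual Longformer-based implementation may differ.

```python
import torch
import torch.nn as nn

class EncoderInputWithFeature(nn.Module):
    """Input embedding = token + positional + meta-information feature."""
    def __init__(self, vocab_size, max_len, num_feature_values, d_model):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        self.feat = nn.Embedding(num_feature_values, d_model)  # |M| rows

    def forward(self, token_ids, feature_id):
        # token_ids: (batch, seq_len); feature_id: (batch,), e.g. ICD-10 category index
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        e = self.tok(token_ids) + self.pos(positions)[None, :, :]
        e = e + self.feat(feature_id)[:, None, :]   # broadcast one feature over the sequence
        return e

# Example: disease feature with |M| = 2,600 possible ICD-10 categories.
emb = EncoderInputWithFeature(vocab_size=25002, max_len=1024,
                              num_feature_values=2600, d_model=768)
x = emb(torch.randint(0, 25002, (2, 1024)), torch.tensor([312, 77]))
print(x.shape)  # torch.Size([2, 1024, 768])
```

Summing the feature embedding into the existing input keeps the encoder interface unchanged, which is why the segment-embedding style of BERT and the language embeddings of XLM carry over directly.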
The reason for the low score of the model with all features seems to be that its number of parameters in feature embedding was four times larger than that of the model with the individual feature, and the amount of training data was insufficient.\nIn BLEURT, all models with meta-information outperform vanilla, which suggests that they are more natural to humans. To analyze the influence of encoded meta-information on the outputs, we evaluate the precisions of the generated text. Specifically, we measure the probability that the generated words are included in the gold summary to investigate if the proper words are generated.\nSome previous studies on faithfulness, which also analyze the output of summarization, have employed words or entities - . In this study, we focused on words, not entities, because we wanted to visualize expressions that are not only nouns. The words were segmented by MeCab with the J-MeDic. For each segmented word, the numeral and symbol labels were assigned as parts of speech by MeCab, the morphological analyzer, while the disease and symptom were assigned by the J-Medic dictionary.\nThe results, shown in Figure , indicate that the encoded disease information leads to generate more proper disease and symptom words. This indicates that the meta-information successfully learns disease-related expressions. The encoded hospital or physician information also improved the precision of symbols generation.\nThis suggests that different hospitals and physicians have different description habits (e.g., bullet points such as \"•\", \"*\" and \"-\", punctuation such as \"。\" and \".\", etc.), which can be grouped by meta-information. In this paper, we conducted a discharge summary generation experiment by adding four types of information to Longformer and verified the impact of the meta-information.\nThe results showed that all four types of information exceeded the performance of the vanilla Longformer model, with the highest performance achieved by encoding disease information. We found that meta-information is useful for abstractive summarization on discharge summaries. Our limitations are that we used Japanese EHR, the limited number of tested features and not performing human evaluations.\nAs for the efficacy of the meta-information, we believe that our results are applicable to non-Japanese, but it is left as Fig. . The precisions of words in the generated summaries. The vertical axis shows the probability that the words exist in the gold summary. a future work. Other meta-information may be worth verifying such as the patient's gender, age, race, religion and used EHR system, etc.\nIt is hard to collect a large amount of medical information and process it into meta-information, so we may need to develop a robust and flexible research infrastructure to conduct a more large scale cross-sectional study in the future. In the discharge summary generation task, which demands a high level of expertise, the human evaluation requires a lot of physicians' efforts and it is a very high cost which is unrealistic.\nThis is a general issue in tasks dealing with medical documents, and this study also could not perform human evaluations. On this research, informed consent and patient privacy are ensured in the following manner. Notices about their policy and the EHR data usage are posted at the hospitals. The patients who disagree with the policies can request opt-out and are excluded from the archive.\nIn case of minors and their parents, followed the same manner. 
In the case of minors and their parents are same. To conduct a research on the archive, researchers must submit their research proposals to the institutional review board. After the proposal is approved, the data is anonymized to build a dataset for analysis.\nThe data is accessible only in a secured room at the NHO headquarters, and only statistics are brought out of the secured room, for protection of patients' privacy. In the present research, the analysis was conducted under the IRB approval (IRB Approval No.: Wako3 2019-22) of the Institute of Physical and Chemical Research (RIKEN), Japan, which has a collaboration agreement with the National Hospital Organization.\nThis data is not publicly available due to privacy restrictions. shows the detailed number of cases handled by physicians. In all hospitals, there is a large difference between the median and the maximum of cases/physician. This indicates that a few physicians handle a large number of cases and many physicians handle fewer cases.\nIt is impossible to avoid physician IDs first seen at test time without some process that averages the number of cases a physician holds.", "answers": ["Verifying other meta-information such as patient's gender, age, race, etc."], "length": 2947, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "6caa98612cf0e2ddf58e1ba70daaf79a3ac616280fb48188"} {"input": "What are the three subsets into which the parameter space V is divided?", "context": "Paper Info\n\nTitle: An CUSUM Test with Observation-Adjusted Control Limits in Change Detection\nPublish Date: March 9, 2023\nAuthor List: Fuquan Tang (from Department of Statistics, Shanghai Jiao Tong University), Dong Han (from Department of Statistics, Shanghai Jiao Tong University)\n\nFigure\n\nexp{−cg(µ)(θ − x Hv (θ) + o(1))} for 1 ≤ k ≤ ac − 1, bc ≤ n ≤ m, where Zi = −g ′ (µ)(Z i − µ)/a and Hv (θ) = ln hv (θ) + ( ac k − 1) ln ĥv (θ), ĥv (θ) = E v (e θ Zi ).\ni < cg(µ)(1 + o(1))) exp{−cg(µ)θ * v (1 + o(1))} (A. 5) for ac ≤ k ≤ bc − 1, bc ≤ n ≤ m,andP v (\ni + g ′ (µ)a −1 Tc(g)−1 i=Tc(g)−ac (Z i − µ)] −→ µas c → ∞.By the uniform integrability of {T c (g)/c} and using Theorem A.1.1 in Gut's book(1988), we haveE v (T c (g)) = (1 + o(1)) cg(µ) µfor a large c.This completes the proof of Theorem 2.Proof of Theorem 4. Since g(x) < 0 for x > a * , a * ≤ µ * and µ * ≥ 0, it follows thatP v m Ẑm < cg( Ẑm ), Ẑm > a * ≤ P v ( Ẑm < µ * )andP v (T c (g) > m) = P v n i=n−k+1 Z i < cg( Ẑn ), 1 ≤ k ≤ n, 1 ≤ n ≤ m ≤ P v m Ẑm < cg( Ẑm ) = P v m Ẑm < cg( Ẑm ), Ẑm ≤ a * + P v m Ẑm < cg( Ẑm ), Ẑm > a * ≤ 2P v ( Ẑm < µ * ).Furthermore,P v ( Ẑm < µ * ) = P v ( m i −Z i > −mµ * ) = P v ( m i (µ − Z i ) > m(µ − µ * )) = P v (e θ m i (µ−Z i ) > e θm(µ−µ * ) ) ≤ e −m[θ(µ−µ * )−ln M (θ)] ,whereM(θ) = E v (e θ(µ−Z 1 )) and the last inequality follows from Chebychev's inequality.Note thath(θ) = θ(µ − µ * ) − ln M(θ) attains its maximum value h(θ * ) = θ * (µ − µ * ) − ln M(θ * ) > 0 at θ = θ * > 0, where h ′ (θ * ) = 0. 
So, E v (T c (g)) = 1 + ∞ m=1 P v (T c (g) > m) ≤ 1 + m=1 −m[θ * (µ−µ * )−ln M (θ * )] = e θ * (µ−µ * )−ln M (θ * ) + 1 e θ * (µ−µ * )−ln M (θ * ) − Let k > 1.It follows that E vk (T c (g) − k + 1) + = ∞ m=1 P vk (T c (g) > m + k − 1, T c (g) > k − 1) ≤ (a 0 + 1)(k − 1)P 0 (T c (g) > k − 1) + ∞ m≥(a 0 +1)(k−1) P vk (T c (g) > m + k − 1).Similarly, we haveP vk (T c (g) > m + k − 1) = P vk n i=n−k+1 Z i < cg( Ẑn ), 1 ≤ k ≤ n, 1 ≤ n ≤ m + k − 1 ≤ 2P vk ( Ẑm+k−1 < µ * ) − Z i ) > m(µ − µ * ) + (k − 1)(µ 0 − µ * ) ≤ 2 exp{−m θ * (µ − µ * ) − ln M(θ * ) + k − 1 m [µ 0 − µ * − ln M 0 (θ * )] } ≤ e −mb for m ≥ (a 0 + 1)(k − 1), since θ * (µ − µ * ) − ln M(θ * ) + k − 1 m [µ 0 − µ * − ln M 0 (θ * )] ≥ b for m ≥ (a 0 + 1)(k−1).Thus, E vk (T c (g) − k + 1) + ≤ (a 0 + 1)(k − 1)P 0 (T c (g) ≥ k) + m≥(a 0 +1)(k−1) e −mb ≤ (a 0 + 1)(k − 1)P 0 (T c (g) >≥ k) + 2e −(a 0 +1)(k−1)b 1 − e −b .\nSimulation of E τ i ,v and J ACE for detecting two mean shifts v = 0.1, v = 1.The parameters for T * M are k1=1, k2=150, r 1 = 5.2 * 10 −5 , r 2 = 1.1 * 10 −5 , and the expectation and standard deviation in both cases are 1717.06with 13459.80 and 3918.33 with 16893.25,respectively.\n\nabstract\n\nIn this paper, we not only propose an new optimal sequential test of sum of logarithmic likelihood ratio (SLR) but also present the CUSUM sequential test (control chart, stopping time) with the observation-adjusted control limits (CUSUM-OAL) for monitoring quickly and adaptively the change in distribution of a sequential observations.\nTwo limiting relationships between the optimal test and a series of the CUSUM-OAL tests are established. Moreover, we give the estimation of the in-control and the out-of-control average run lengths (ARLs) of the CUSUM-OAL test. The theoretical results are illustrated by numerical simulations in detecting mean shifts of the observations sequence.\n\nINTRODUCTION\n\nIn order to quickly detect a change in distribution of observations sequence without exceeding a certain false alarm rate, a great variety of sequential tests have been proposed, developed and applied to various fields since proposed a control chart method, see, for example, , , One of popular used sequential tests is the following upper-sided CUSUM test which was proposed by .\nwhere c > 0 is a constant control limit, Z i = log[p v 1 (X i )/p v 0 (X i )], p v 0 (x) and p v 1 (x) are prechange and post-change probability density functions respectively for a sequence of mutually independent observations {X i , i ≥ 1}, that is, there is a unknown change-point τ ≥ 1 such that X 1 , ..., X τ −1 have the probability density function p v 0 , whereas, X τ , X τ +1 , ... have the probability density function p v 1 .\nBy the renewal property of the CUSUM test T C we have , where E 1 (T C ) is the out-of-control average run length (ARL 1 ), P k and E k denote the probability and expectation respectively when the change from p v 0 to p v 1 occurs at the change-point τ = k for k ≥ 1. Though we know that the CUSUM test is optimal under Lorden's measure (see Moustakides 1986 and Ritov 1990), the out-of-control ARL 1 of the CUSUM test is not small, especially in detecting small mean shifts ( see Table in Section 4).\nIn other words, the CUSUM test is insensitive in detecting small mean shifts. Then, how to increase the sensitivity of the CUSUM test ? Note that the control limit in the CUSUM test is a constant c which does not depend on the observation samples. 
Intuitively, if the control limit of the CUSUM test can become low as the samples mean of the observation sequence increases, then the alarm time of detecting the increasing mean shifts will be greatly shortened.\nBased on this idea, by selecting a decreasing function g(x) we may define the ( upper-sided ) CUSUM chart T C (cg) with the observation-adjusted control limits cg( Ẑn ) ( abbreviated to the CUSUM-OAL chart ) in the following where c > 0 is a constant and Ẑn = n i=1 Z i /n. In other words, the control limits cg( Ẑn ) of the CUSUM-OAL test can be adjusted adaptively according to the observation information { Ẑn }.\nNote that the control limits cg( Ẑn ) may be negative. In the special case, the CUSUM-OAL chart T C (cg) becomes into the conventional CUSUM chart T C (c) in (1) when g ≡ 1. Similarly, we can define a down-sided CUSUM-OAL test. In this paper, we consider only the upper-sided CUSUM-OAL test since the properties of the down-sided CUSUM-OAL test can be obtained by the similar method.\nThe main purpose of the present paper is to show the good detection performance of the CUSUM-OAL test and to give the estimation of its the in-control and out-of-control ARLs. The paper is organized as follows. In Section 2, we first present an optimal SLR sequential test, then define two sequences of the CUSUM-OAL tests and prove that one of the two sequences of CUSUM-OAL tests converges to the optimal test, another sequences of CUSUM-OAL tests converges to a combination of the optimal test and the CUSUM test.\nThe estimation of the in-control and out-of-control ARLs of the CUSUM-OAL tests and their comparison are given in Section 3. The detection performances of the three CUSUM-OAL tests and the conventional CUSUM test are illustrated in Section 4 by comparing their numerical out-ofcontrol ARLs. Section 5 provides some concluding remarks.\nProofs of the theorems are given in the Appendix.\n\nAN OPTIMAL SLR TEST, TWO CUSUM-OAL TESTS AND THEIR LIMITING RELATIONSHIPS\n\nLet P 0 and E 0 denote the probability and the expectation respectively with the probability density p v 0 when there is no change for all the time. It is known that It follows from Proposition 2.38 in and (5.8)-(5.9) in Chow et al, P.108) that the following sequence test of sum of logarithmic likelihood ratio (SLR)\nfor B > 1, is optimal in the following sense min for P 0 (T SLR < ∞) = α, where c = log B and 0 < α < 1. In particular, if P 0 is the standard normal distribution with mean shift µ > 0 after changepoint, we have Z j − µ 0 = µX j , where µ 0 = −µ 2 /2. It follows from proposition 4 in that the SLR test T SLR in (4) is also optimal (minimal ARL 1 ) with the same false alarm probability P 0 (T < τ ).\nIt can be seen that the in-control average run length of T SLR is infinite, that is, ARL 0 = E 0 (T SLR ) = ∞. However, the minimal ARL 1 with finite ARL 0 is a widely used optimality criterion in statistical quality control (see ) and detection of abrupt changes (see . In order to get finite ARL 0 for T SLR , we replace the constant control limit c of T SLR in (3) or (4) with the dynamic control limit n(µ 0 − r) and obtain a modified SLR test T SLR (r) in the following\nfor r ≥ 0. For comparison, the in-control ARL 0 of all candidate sequential tests are constrained to be equal to the same desired level of type I error, the test with the lowest out-of-control ARL v has the highest power or the fastest monitoring (detection) speed. 
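Comparisons of this kind are straightforward to reproduce by simulation. The sketch below is only an illustration and is not the code used for the tables in this paper: the N(0,1) pre-change / N(v,1) post-change setting with reference value v1 = 1 follows Example 1 below, but the function names and the particular decreasing function g_example are our own assumptions, and for a fair comparison the constant c must be recalibrated for every choice of g so that all tests share the same in-control ARL0 (about 1000 in Example 1).

import numpy as np

rng = np.random.default_rng(0)

def run_length(v, c, g=None, v1=1.0, max_n=200_000):
    # One realization of the stopping time when the change occurs at tau = 1,
    # i.e. the observations are N(v, 1) from the first sample on.
    # Z_i is the log-likelihood ratio between N(v1, 1) and N(0, 1):
    # Z_i = v1*X_i - v1**2/2.
    # g is None -> classical CUSUM T_C(c) with constant control limit c;
    # g given   -> CUSUM-OAL T_C(cg) with control limit c*g(mean of Z_1..Z_n).
    s = 0.0        # CUSUM statistic: max over k of the last-k partial sums of Z
    z_sum = 0.0    # running sum of the Z_i, used for the sample mean
    for n in range(1, max_n + 1):
        x = rng.normal(v, 1.0)
        z = v1 * x - v1 ** 2 / 2.0
        s = z + max(0.0, s)
        z_sum += z
        limit = c if g is None else c * g(z_sum / n)
        if s >= limit:
            return n
    return max_n   # censored run (should not happen for a sensible c)

def arl(v, c, g=None, reps=500):
    # Monte Carlo estimate of the average run length.
    return float(np.mean([run_length(v, c, g) for _ in range(reps)]))

def g_example(x, u=10.0):
    # An illustrative decreasing control-limit function; substitute the paper's
    # g_{u,r} (or the sliding-average version of Z-hat) to reproduce its tables.
    return float(np.exp(-u * max(x, 0.0)))

print("CUSUM     ARL at shift v=0.1:", arl(0.1, c=5.0))
print("CUSUM-OAL ARL at shift v=0.1:", arl(0.1, c=5.0, g=g_example))

Estimating arl(0.0, c) and adjusting c until it reaches the desired in-control level, separately for each choice of g, puts the tests on a common footing; the resulting out-of-control values arl(v, c) are then directly comparable to the entries of the table in Example 1.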
In the following example 1, the numerical simulations of the out-of-control ARLs of the CUSUM-OAL tests T C (cg u,0 ) in detecting the mean shifts of observations with normal distribution will be compared with that of the SLR tests T * (r) and T * (0), and that of the CUSUM-SLR test T C (c) ∧ T * (0) := min{T C (c), T * (0)} in the following Table .\nThese comparisons lead us to guess that there are some limiting relationships between T C (cg u,r ) and T * (r), and T C (c g u ) and T C (c) ∧ T * (0), respectively. Example 1. Let X 1 , X 2 , .... be mutually independent following the normal distribution N(0, 1) if there is no change. After the change-point τ = 1, the mean E µ (X k ) ( k ≥ 1 ) will change from v 0 = 0 to v = 0.1, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 3. Here, we let\n, where v 1 = 1 is a given reference value which for the CUSUM test is the magnitude of a shift in the process mean to be detected quickly. We conducted the numerical simulation based on 1,000,000 repetitions. The following Table lists the simulation results of the ARLs of the tests T C (c), T C (c g u ) for u = 1, 10, 10 2 , 10 3 , 10 4 , T * (0.0007), T C (c) ∧ T * (0) and T * (0) for detecting the mean shifts, where the mean shift 0.0 means that there is no change which corresponds to the in-control ARL 0 and all tests have the common ARL 0 ≈ 1000 except the test T * (0) which has ARL 0 = ∞.\nThe values in the parameters are the standard deviations of the tests. From the last row in Table , it's a little surprising that though the ARL 0 of T * (0) is infinite, that is, E 0 (T * (0)) = ∞, the detection speed of T * (0) is faster than that of the CUSUM chart T C for all mean shifts, in particular, for detecting the small mean shift 0.1, the speed of T * (0) is only 7.47 which is very faster than the speed, 439, of the CUSUM test.\nMoreover, both control charts T * (0.0007) and T C (11.9271) ∧ T * (0) not only have the nearly same detection performance as T * (0) but also can have the finite in-control ARL 0 . Note particularly that when the number u in g u is taken from 0 to 1, 10, 10 2 , 10 3 , 10 4 , the detection speed of T C (c g u ) is getting faster and faster, approaching to that of T C (c) ∧ T * (0).\nThis inspires us to prove the following theoretic results. Let τ = 1 and {X k , k ≥ 1} be an i.i.d. observations sequence with Theorem 2 shows that when the constant control limit c of the CUSUM test T C (c) is replaced with the observation-adjusted control limits {cg u,r ( Ẑn )} and {c g u ( Ẑn )} respectively, the corresponding two CUSUM-OAL tests {T C (cg u,r )} and {T C (c g u )} will converge to the optimal SLR test T * (r) and the CUSUM-SLR test T C (c) ∧ T * (0) as u → ∞, respectively.\nIn other words, the fastest alarm times that {T C (cg u,r )} and {T C (c g u )} can be reached are T * (r) and T C (c) ∧ T * (0), respectively. u ≥ 0} can be seen as two \"long bridges\" connecting T C (c) and T * (r), and T C (c) and T C (c) ∧ T * (0), respectively.\n\nESTIMATION AND COMPARISON OF ARL OF THE CUSUM-OAL TEST\n\nIn this section we will give an estimation of the ARLs of the following CUSUM-OAL test that can be written as where g(.) is a decreasing function, Ẑn (ac x] denotes the smallest integer greater than or equal to x. Here Ẑn (ac) is a sliding average of the statistics, Next we discuss on the the post-change probability distribution in order to estimate the ARLs of T C (cg).\nUsually we rarely know the post-change probability distribution P v of the observation process before it is detected. 
But the possible change domain and its boundary (including the size and form of the boundary) about v may be determined by engineering knowledge, practical experience or statistical data.\nSo we may assume that the region of parameter space V and a probability distribution Q on V are known. If we have no prior knowledge of the possible value of v after the change time τ , we may assume that v occurs equally on V , that is, the probability distribution Q is an equal probability distribution (or uniform distribution ) on V .\nFor example, let P v be the normal distribution and v = (µ, σ), where µ and σ denote the mean and standard deviation respectively, we can take the set V = {(µ, σ) : and Q is subject to the uniform distribution U(V ) on V if v occurs equally on V , where the numbers µ 1 , µ 2 , σ 1 and σ 2 are known. It means that we know the domain of the possible post-change distributions, P v , v ∈ V , i.e., the boundary ∂V of the parameter space V is known.\nNext we shall divide the parameter space V into three subsets V + , V 0 and V − by the Kullback-Leibler information distance. Let and are two Kullblak-Leibler information distances between P v , P v 0 and P v , P v 1 . Since I(p|q) = 0 if and only if p = q, where p and q are two probability measures, it follows that\n, it means that P v is closer to P v 0 than to P v 1 according to the Kullblak-Leibler information distance. There is a similar explanation for v ∈ V + or ∈ V 0 . Suppose the post-change distribution P v and the function g(x) satisfy the following conditions: (I) The probability P v is not a point mass at E v (Z 1 ) and P v (Z 1 > 0) > 0.\n(II) The moment-generating function h v (θ) = E v (e θZ 1 ) satisfies h v (θ) < ∞ for some θ > 0. (III) The function g(x) is decreasing, its second order derivative function g ′′ (x) is continuous and bounded, and there is a positive number x * such that g(x * ) = 0. ) and and therefore, Θ ′ (θ(u)) = −H(θ(u)) = −H(θ * v ) = 0, Θ ′ (θ(1/x)) > 0 for x > 1/u and Θ ′ (θ(1/x)) < 0 for x > 1/u.\nHence, there exists a positive number b defined in (??). It can be seen, the main part of ARL v (T c (g)) will be an exponential function, square function, and linear function of c when the process {Z k : k ≥ 0} has no change or a \"small change\", a \"medium change\" and a \"large change\" from P v 0 to P v , respectively.\nHere, the \"small change\" (v ∈ V − ) means that P v is closer to P v 0 than to P v 1 , i.e., I(P v |P v 0 ) < I(P v |P v 1 ), and the \"large change\" is just the opposite. The \"medium change\" (v ∈ V 0 ) corresponds to In this paper, we will use another method to prove Theorem 3 since Wald's identity and the martingale method do not hold or can not work for showing the ARLs estimation of the test T c (g) when g is not constant.\nNext we compare the detection performance of the CUSUM-OAL test (ARL v (T c ′ (g))) with that of the CUSUM test (ARL v (T C (c))) by using (??) in Theorem 4.1. ) when µ 0 < µ < 0 and for θ * v 0 > g(µ)/g(µ 0 ) when µ ≥ 0. This means that ARL v (T c (g)) can be smaller than ARL v (T C (c)) as long as g(µ)/g(µ 0 ) is small for all µ > µ 0 .\n\nNUMERICAL SIMULATION AND A REAL EX-AMPLE ILLUSTRATION\n\n4.1 Numerical Simulation of ARLs for τ ≥ 1 By the simulation results of ARLs in Table , we see that the detection performance of T * (r), T C (c)∧T * (0), T * (0) and T C (c g u ) for large u is much better than that of the conventional CUSUM test T C for τ = 1. 
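Before turning to those tables, a concrete illustration of the partition of V introduced above may be helpful; the example is ours and is not taken from the paper. In the normal mean-shift setting of Example 1, P_{v_0} = N(0,1), the reference post-change law is P_{v_1} = N(v_1,1) and the actual post-change law is P_v = N(v,1), all with unit variance, so the two Kullback-Leibler distances reduce to

\[ I(P_v|P_{v_0}) = \frac{v^2}{2}, \qquad I(P_v|P_{v_1}) = \frac{(v-v_1)^2}{2}, \]

and the partition becomes

\[ V_- = \{v : v < v_1/2\}, \qquad V_0 = \{v : v = v_1/2\}, \qquad V_+ = \{v : v > v_1/2\}. \]

With the reference value v_1 = 1 of Example 1, a mean shift v = 0.1 lies in V_-, v = 0.5 lies in V_0, and v = 1 lies in V_+, which matches the "small change", "medium change" and "large change" regimes distinguished above.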
The following Table illustrates the simulation values of E τ i ,v and J ACE of nine tests in detecting two mean shifts v = 0.1 and v = 1 after six change-points, τ i , 1 ≤ i ≤ 6 with ARL 0 (T ) = E 0 (T ) ≈ 500.\nNote that H v (θ) is a convex function and H ′ v (0) = µ < 0. This means that there is a unique positive number . It follows from (A.9) that for a large c. Taking θ ց θ * v and u ′ ց u, we have for a large c. Thus, by (A.11) we have as c → ∞. By the properties of exponential distribution, we have for a large c.\nTo prove the downward inequality of (A.10), let where b is defined in (??) and without loss of generality, we assume that b > a. Obviously, Let k = xcg(µ). By Chebyshev's inequality, we have Since Hv (θ) and H v (θ) are two convex functions and Let m = tcg(µ)θ * v /bc for t > 0. By (A.13), (A.14), (A.15) and Theorem 5.1 in Esary, Proschan and Walkup (1967) we have\nFinally, as c → +∞, where θ 0 > 0 satisfies h v (θ 0 ) = 1. Thus as c → ∞. This implies that for a large c. This completes the proof of (A.10). Let v ∈ V 0 . Let m 1 = (cg(0)) 2 /σ 2 . It follows that Note that for a large c, where A = |g ′ (0)|/a, and , where Φ(.) is the standard normal distribution. Let m 2 = (cg(0)) 2 /(8σ 2 ln c).\nNote that as c → ∞, where the third inequality comes from Theorem 5.1 in Esary, Proschan and Walkup (1967). Thus, we have Let v ∈ V + and let The uniform integrability of {T c (g)/c} for c ≥ 1, follows from the well-known uniform integrability of {T 0 /c} (see Gut (1988)).", "answers": ["The three subsets are V+, V0, and V-, determined by the Kullback-Leibler information distance."], "length": 3737, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "60c66572f28b071899547f0aa0b4a271969c3b153e96406b"} {"input": "What are the stability conditions for a solution of $-\\Delta u = f(u)$?", "context": "\\section{Introduction and main results}\n\n\nIn this note we are interested in the existence versus non-existence of stable sub- and super-solutions of equations of the form\n\\begin{equation} \\label{eq1}\n-div( \\omega_1(x) \\nabla u ) = \\omega_2(x) f(u) \\qquad \\mbox{in $ {\\mathbb{R}}^N$,}\n\\end{equation} where $f(u)$ is one of the following non-linearities: $e^u$, $ u^p$ where $ p>1$ and $ -u^{-p}$ where $ p>0$. We assume that $ \\omega_1(x)$ and $ \\omega_2(x)$, which we call \\emph{weights}, are smooth positive functions (we allow $ \\omega_2$ to be zero at say a point) and which satisfy various growth conditions at $ \\infty$. Recall that we say that a solution $ u $ of $ -\\Delta u = f(u)$ in $ {\\mathbb{R}}^N$ is stable provided\n\\[ \\int f'(u) \\psi^2 \\le \\int | \\nabla \\psi|^2, \\qquad \\forall \\psi \\in C_c^2,\\] where $ C_c^2$ is the set of $ C^2$ functions defined on $ {\\mathbb{R}}^N$ with compact support. Note that the stability of $u$ is just saying that the second variation at $u$ of the energy associated with the equation is non-negative. 
In our setting this becomes: We say a $C^2$ sub/super-solution $u$ of (\\ref{eq1}) is \\emph{stable} provided\n\\begin{equation} \\label{stable}\n\\int \\omega_2 f'(u) \\psi^2 \\le \\int \\omega_1 | \\nabla \\psi|^2 \\qquad \\forall \\psi \\in C_c^2.\n\\end{equation}\nOne should note that (\\ref{eq1}) can be re-written as\n\\begin{equation*}\n- \\Delta u + \\nabla \\gamma(x) \\cdot \\nabla u ={ \\omega_2}/{\\omega_1}\\ f(u) \\qquad \\text{ in $ \\mathbb{R}^N$},\n\\end{equation*}\nwhere\n$\\gamma = - \\log( \\omega_1)$ and on occasion we shall take this point of view.\n\n\n\\begin{remark} \\label{triv} Note that if $ \\omega_1$ has enough integrability then it is immediate that if $u$ is a stable solution of (\\ref{eq1}) we have $ \\int \\omega_2 f'(u) =0 $ (provided $f$ is increasing). To see this let $ 0 \\le \\psi \\le 1$ be supported in a ball of radius $2R$ centered at the origin ($B_{2R}$) with $ \\psi =1$ on $ B_R$ and such that $ | \\nabla \\psi | \\le \\frac{C}{R}$ where $ C>0$ is independent of $ R$. Putting this $ \\psi$ into $ (\\ref{stable})$ one obtains\n\\[ \\int_{B_R} \\omega_2 f'(u) \\le \\frac{C}{R^2} \\int_{R < |x| <2R} \\omega_1,\\] and so if the right hand side goes to zero as $ R \\rightarrow \\infty$ we have the desired result.\n\n\\end{remark}\n\n\n\n\n\nThe existence versus non-existence of stable solutions of $ -\\Delta u = f(u)$ in $ {\\mathbb{R}}^N$ or $ -\\Delta u = g(x) f(u)$ in $ {\\mathbb{R}}^N$ is now quite well understood, see \\cite{dancer1, farina1, egg, zz, f2, f3, wei, ces, e1, e2}. We remark that some of these results are examining the case where $ \\Delta $ is replaced with $ \\Delta_p$ (the $p$-Laplacian) and also in many cases the authors are interested in finite Morse index solutions or solutions which are stable outside a compact set.\n Much of the interest in these Liouville type theorems stems from the fact that the non-existence of a stable solution is related to the existence of a priori estimates for stable solutions of a related equation on a bounded domain.\n\n\n\n\n In \\cite{Ni} equations similar to $ -\\Delta u = |x|^\\alpha u^p$ where examined on the unit ball in $ {\\mathbb{R}}^N$ with zero Dirichlet boundary conditions. There it was shown that for $ \\alpha >0$ that one can obtain positive solutions for $ p $ supercritical with respect to Sobolev embedding and so one can view that the term $ |x|^\\alpha$ is restoring some compactness. 
A similar feature happens for equations of the form\n\\[ -\\Delta u = |x|^\\alpha f(u) \\qquad \\mbox{in $ {\\mathbb{R}}^N$};\\] the value of $ \\alpha$ can vastly alter the existence versus non-existence of a stable solution, see \\cite{e1, ces, e2, zz, egg}.\n\nWe now come to our main results and for this we need to define a few quantities:\n\n\\begin{eqnarray*}\nI_G&:=& R^{-4t-2} \\int_{ R < |x|<2R} \\frac{ \\omega_1^{2t+1}}{\\omega_2^{2t}}dx , \\\\\n J_G&:=& R^{-2t-1} \\int_{R < |x| <2R} \\frac{| \\nabla \\omega_1|^{2t+1} }{\\omega_2^{2t}} dx ,\\\\I_L&:=& R^\\frac{-2(2t+p-1)}{p-1} \\int_{R<|x|<2R }{ \\left( \\frac{w_1^{p+2t-1}}{w_2^{2t}} \\right)^{\\frac{1}{p-1} } } dx,\\\\ J_L&:= &R^{-\\frac{p+2t-1}{p-1} } \\int_{R<|x|<2R }{ \\left( \\frac{|\\nabla w_1|^{p+2t-1}}{w_2^{2t}} \\right)^{\\frac{1}{p-1} } } dx,\\\\\nI_M &:=& R^{-2\\frac{p+2t+1}{p+1} } \\int_{R<|x|<2R }{ \\left( \\frac{w_1^{p+2t+1}}{w_2^{2t}} \\right)^{\\frac{1}{p+1} } } \\ dx, \\\\\nJ_M &:= & R^{-\\frac{p+2t+1}{p+1} } \\int_{R<|x|<2R }{ \\left( \\frac{|\\nabla w_1|^{p+2t+1}}{w_2^{2t}} \\right)^{\\frac{1}{p+1} } } dx.\n\\end{eqnarray*}\n\n\nThe three equations we examine are\n\\[ -div( \\omega_1 \\nabla u ) = \\omega_2 e^u \\qquad \\mbox{ in $ {\\mathbb{R}}^N$ } \\quad (G), \\]\n\\[ -div( \\omega_1 \\nabla u ) = \\omega_2 u^p \\qquad \\mbox{ in $ {\\mathbb{R}}^N$ } \\quad (L), \\]\n\\[ -div( \\omega_1 \\nabla u ) = - \\omega_2 u^{-p} \\qquad \\mbox{ in $ {\\mathbb{R}}^N$ } \\quad (M),\\] and where we restrict $(L)$ to the case $ p>1$ and $(M)$ to $ p>0$. By solution we always mean a $C^2$ solution. We now come to our main results in terms of abstract $ \\omega_1 $ and $ \\omega_2$. We remark that our approach to non-existence of stable solutions is the approach due to Farina, see \\cite{f2,f3,farina1}.\n\n\\begin{thm} \\label{main_non_exist} \\begin{enumerate}\n\n\n\\item There is no stable sub-solution of $(G)$ if $ I_G, J_G \\rightarrow 0$ as $ R \\rightarrow \\infty$ for some $0 0$.\n\n \\item If $N+\\alpha-2<4(\\beta-\\alpha+2)$ then there is no stable sub-solution for $ (G)$.\n\n\\item If $N+\\alpha-2<\\frac{ 2(\\beta-\\alpha+2) }{p-1} \\left( p+\\sqrt{p(p-1)} \\right)$ then there is no positive stable sub-solution of $(L)$.\n\n\\item If $N+\\alpha-2<\\frac{2(\\beta-\\alpha+2) }{p+1} \\left( p+\\sqrt{p(p+1)} \\right)$ then there is no positive stable super-solution of $(M)$.\n\n\\item Further more 2,3,4 are optimal in the sense if $ N + \\alpha -2 > 0$ and the remaining inequality is not satisfied (and in addition we assume we don't have equality in the inequality) then we can find a suitable function $ g(x)$ which satisfies the above properties and a stable sub/super-solution $u$ for the appropriate equation.\n\n\\end{enumerate}\n\n\\end{thm}\n\n\\begin{remark} Many of the above results can be extended to the case of equality in either the $ N + \\alpha - 2 \\ge 0$ and also the other inequality which depends on the equation we are examining. We omit the details because one cannot prove the results in a unified way.\n\n\\end{remark}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nIn showing that an explicit solution is stable we will need the weighted Hardy inequality given in \\cite{craig}.\n\\begin{lemma} \\label{Har}\nSuppose $ E>0$ is a smooth function. 
Then one has\n\\[ (\\tau-\\frac{1}{2})^2 \\int E^{2\\tau-2} | \\nabla E|^2 \\phi^2 + (\\frac{1}{2}-\\tau) \\int (-\\Delta E) E^{2\\tau-1} \\phi^2 \\le \\int E^{2\\tau} | \\nabla \\phi|^2,\\] for all $ \\phi \\in C_c^\\infty({\\mathbb{R}}^N)$ and $ \\tau \\in {\\mathbb{R}}$.\n\\end{lemma} By picking an appropriate function $E$ this gives,\n\n\\begin{cor} \\label{Hardy}\nFor all $ \\phi \\in C_c^\\infty$ and $ t , \\alpha \\in {\\mathbb{R}}$. We have\n \\begin{eqnarray*}\n\\int (1+|x|^2)^\\frac{\\alpha}{2} |\\nabla\\phi|^2 &\\ge& (t+\\frac{\\alpha}{2})^2 \\int |x|^2 (1+|x|^2)^{-2+\\frac{\\alpha}{2}}\\phi^2\\\\\n&&+(t+\\frac{\\alpha}{2})\\int (N-2(t+1) \\frac{|x|^2}{1+|x|^2}) (1+|x|^2)^{-1+\\frac{\\alpha} {2}} \\phi^2.\n\\end{eqnarray*}\n \\end{cor}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Proof of main results}\n\n\\textbf{ Proof of Theorem \\ref{main_non_exist}.} (1). Suppose $ u$ is a stable sub-solution of $(G)$ with $ I_G,J_G \\rightarrow 0$ as $ R \\rightarrow \\infty$ and let $ 0 \\le \\phi \\le 1$ denote a smooth compactly supported function. Put $ \\psi:= e^{tu} \\phi$ into (\\ref{stable}), where $ 0 0$ is independent of $ R$. Putting this choice of $ \\phi$ we obtain\n \\begin{equation} \\label{four}\n \\int \\omega_1 e^{2tu} \\phi^{2m-2} | \\nabla \\phi |^2 \\le \\left( \\int \\omega_2 e^{(2t+1)u} \\phi^{2m} \\right)^\\frac{2t}{2t+1} I_G^\\frac{1}{2t+1}.\n \\end{equation} One similarly shows that\n \\[ \\int \\omega_1 e^{2tu} \\phi^{2m-1} | \\Delta \\phi| \\le \\left( \\int \\omega_2 e^{(2t+1)u} \\phi^{2m} \\right)^\\frac{2t}{2t+1} I_G^\\frac{1}{2t+1}.\\]\n So, combining the results we obtain\n\n \\begin{eqnarray} \\label{last} \\nonumber \\frac{(2-t)}{2} \\int \\omega_2 e^{(2t+1) u} \\phi^{2m} &\\le& C_m \\left( \\int \\omega_2 e^{(2t+1) u} \\phi^{2m} dx \\right)^\\frac{2t}{2t+1} I_G^\\frac{1}{2t+1}\\\\\n &&- D_m \\int e^{2tu} \\phi^{2m-1} \\nabla \\omega_1 \\cdot \\nabla \\phi.\n \\end{eqnarray}\n We now estimate this last term. A similar argument using H\\\"{o}lder's inequality shows that\n \\[ \\int e^{2tu} \\phi^{2m-1} | \\nabla \\omega_1| | \\nabla \\phi| \\le \\left( \\int \\omega_2 \\phi^{2m} e^{(2t+1) u} dx \\right)^\\frac{2t}{2t+1} J_G^\\frac{1}{2t+1}. \\] Combining the results gives that\n\\begin{equation} \\label{last}\n(2-t) \\left( \\int \\omega_2 e^{(2t+1) u} \\phi^{2m} dx \\right)^\\frac{1}{2t+1} \\le I_G^\\frac{1}{2t+1} + J_G^\\frac{1}{2t+1},\n\\end{equation} and now we send $ R \\rightarrow \\infty$ and use the fact that $ I_G, J_G \\rightarrow 0$ as $ R \\rightarrow \\infty$ to see that\n\\[ \\int \\omega_2 e^{(2t+1) u} =0, \\] which is clearly a contradiction. Hence there is no stable sub-solution of $(G)$.\n\n(2). Suppose that $u >0$ is a stable sub-solution (super-solution) of $(L)$. Then a similar calculation as in (1) shows that for $ p - \\sqrt{p(p-1)} \\frac{1}{2}$ or $ t < \\frac{1}{2}$ is a result from the sign change of $ 2t-1$ at $ t = \\frac{1}{2}$. We leave the details for the reader.\n\n\n(3). This case is also similar to (1) and (2).\n\n\n\\hfill $ \\Box$\n\n \\textbf{Proof of Theorem \\ref{mono}.} (1). Again we suppose there is a stable sub-solution $u$ of $(G)$. Our starting point is (\\ref{start_1}) and we wish to be able to drop the term\n \\[ - D_m \\int e^{2tu} \\phi^{2m-1} \\nabla \\omega_1 \\cdot \\nabla \\phi, \\] from (\\ref{start_1}). We can choose $ \\phi$ as in the proof of Theorem \\ref{main_non_exist} but also such that $ \\nabla \\phi(x) = - C(x) x$ where $ C(x) \\ge 0$. 
So if we assume that $ \\nabla \\omega_1 \\cdot x \\le 0$ for big $x$ then we see that this last term is non-positive and hence we can drop the term. The the proof is as before but now we only require that $ \\lim_{R \\rightarrow \\infty} I_G=0$.\n\n (2). Suppose that $ u >0$ is a stable sub-solution of $(L)$ and so (\\ref{shit}) holds for all $ p - \\sqrt{p(p-1)} 0$. Note that the monotonicity of $ \\omega_1$ changes when $ \\alpha $ changes sign and hence one would think that we need to consider separate cases if we hope to utilize the monotonicity results. But a computation shows that in fact $ I$ and $J$ are just multiples of each other in all three cases so it suffices to show, say, that $ \\lim_{R \\rightarrow \\infty} I =0$. \\\\\n(2). Note that for $ R >1$ one has\n\\begin{eqnarray*}\nI_G & \\le & \\frac{C}{R^{4t+2}} \\int_{R <|x| < 2R} |x|^{ \\alpha (2t+1) - 2t \\beta} \\\\\n& \\le & \\frac{C}{R^{4t+2}} R^{N + \\alpha (2t+1) - 2t \\beta},\n\\end{eqnarray*} and so to show the non-existence we want to find some $ 0 N + \\alpha(2t+1) - 2 t \\beta$, which is equivalent to $ 2t ( \\beta - \\alpha +2) > (N + \\alpha -2)$. Now recall that we are assuming that $ 0 < N + \\alpha -2 < 4 ( \\beta - \\alpha +2) $ and hence we have the desired result by taking $ t <2$ but sufficiently close.\nThe proof of the non-existence results for\n(3) and (4) are similar and we omit the details. \\\\\n(5). We now assume that $N+\\alpha-2>0$. In showing the existence of stable sub/super-solutions we need to consider $ \\beta - \\alpha + 2 <0$ and $ \\beta - \\alpha +2 >0$ separately.\n\n\\begin{itemize} \\item $(\\beta - \\alpha + 2 <0)$ Here we take $ u(x)=0$ in the case of $(G)$ and $ u=1$ in the case of $(L)$ and $(M)$. In addition we take $ g(x)=\\E$. It is clear that in all cases $u$ is the appropriate sub or super-solution. The only thing one needs to check is the stability. In all cases this reduces to trying to show that we have\n\\[ \\sigma \\int (1+|x|^2)^{\\frac{\\alpha}{2} -1} \\phi^2 \\le \\int (1+|x|^2)^{\\frac{\\alpha}{2}} | \\nabla\\phi |^2,\\] for all $ \\phi \\in C_c^\\infty$ where $ \\sigma $ is some small positive constant; its either $ \\E$ or $ p \\E$ depending on which equation were are examining.\nTo show this we use the result from Corollary \\ref{Hardy} and we drop a few positive terms to arrive at\n\\begin{equation*}\n\\int (1+|x|^2)^\\frac{\\alpha}{2} |\\nabla\\phi|^2\\ge (t+\\frac{\\alpha}{2})\\int \\left (N-2(t+1) \\frac{|x|^2}{1+|x|^2}\\right) (1+|x|^2)^{-1+\\frac{\\alpha} {2}}\n\\end{equation*} which holds for all $ \\phi \\in C_c^\\infty$ and $ t,\\alpha \\in {\\mathbb{R}}$.\n Now, since $N+\\alpha-2>0$, we can choose $t$ such that $-\\frac{\\alpha}{2}0$) In the case of $(G)$ we take $u(x)=-\\frac{\\beta-\\alpha+2}{2} \\ln(1+|x|^2)$ and $g(x):= (\\beta-\\alpha+2)(N+(\\alpha-2)\\frac{|x|^2}{1+|x|^2})$. By a computation one sees that $u$ is a sub-solution of $(G)$ and hence we need now to only show the stability, which amounts to showing that\n\\begin{equation*}\n\\int \\frac{g(x)\\psi^2}{(1+|x|^{2 })^{-\\frac{\\alpha}{2}+1}}\\le \\int\\frac{|\\nabla\\psi|^2}{ (1+|x|^2)^{-\\frac{\\alpha}{2}} },\n\\end{equation*} for all $ \\psi \\in C_c^\\infty$. To show this we use Corollary \\ref{Hardy}. 
So we need to choose an appropriate $t$ in $-\\frac{\\alpha}{2}\\le t\\le\\frac{N-2}{2}$ such that for all $x\\in {\\mathbb{R}}^N$ we have\n \\begin{eqnarray*}\n (\\beta-\\alpha+2)\\left( N+ (\\alpha-2)\\frac{|x|^2}{1+|x|^2}\\right) &\\le& (t+\\frac{\\alpha}{2})^2 \\frac{ |x|^2 }{(1+|x|^2}\\\\\n&&+(t+\\frac{\\alpha}{2}) \\left(N-2(t+1) \\frac{|x|^2}{1+|x|^2}\\right).\n\\end{eqnarray*}\nWith a simple calculation one sees we need just to have\n \\begin{eqnarray*}\n (\\beta-\\alpha+2)&\\le& (t+\\frac{\\alpha}{2}) \\\\\n (\\beta-\\alpha+2) \\left( N+ \\alpha-2\\right) & \\le& (t+\\frac{\\alpha}{2}) \\left(N-t-2+\\frac{\\alpha}{2}) \\right).\n \\end{eqnarray*} If one takes $ t= \\frac{N-2}{2}$ in the case where $ N \\neq 2$ and $ t $ close to zero in the case for $ N=2$ one easily sees the above inequalities both hold, after considering all the constraints on $ \\alpha,\\beta$ and $N$.\n\n We now consider the case of $(L)$. Here one takes $g(x):=\\frac {\\beta-\\alpha+2}{p-1}( N+ (\\alpha-2-\\frac{\\beta-\\alpha+2}{p-1})\n\\frac{|x|^2}{1+|x|^2})$ and $ u(x)=(1+|x|^2)^{ -\\frac {\\beta-\\alpha+2}{2(p-1)} }$. Using essentially the same approach as in $(G)$ one shows that $u$ is a stable sub-solution of $(L)$ with this choice of $g$. \\\\\nFor the case of $(M)$ we take $u(x)=(1+|x|^2)^{ \\frac {\\beta-\\alpha+2}{2(p+1)} }$ and $g(x):=\\frac {\\beta-\\alpha+2}{p+1}( N+ (\\alpha-2+\\frac{\\beta-\\alpha+2}{p+1})\n\\frac{|x|^2}{1+|x|^2})$.\n\n\n\n\\end{itemize}\n\n\n\\hfill $ \\Box$\n\n\n\n\n\n\n\n\n", "answers": ["$\\int f'(u) \\psi^2 \\le \\int | \\nabla \\psi|^2, \\forall \\psi \\in C_c^2$."], "length": 3743, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "135b82aecda3c83cfd73d1447e8f92a8a44de50ffdb09444"} {"input": "How many massive star-forming regions were studied?", "context": "\\section{Introduction}\n\nSpectral line surveys have revealed that high-mass star-forming\nregions are rich reservoirs of molecules from simple diatomic species\nto complex and larger molecules (e.g.,\n\\citealt{schilke1997b,hatchell1998b,comito2005,bisschop2007}).\nHowever, there have been rarely studies undertaken to investigate the\nchemical evolution during massive star formation from the earliest\nevolutionary stages, i.e., from High-Mass Starless Cores (HMSCs) and\nHigh-Mass Cores with embedded low- to intermediate-mass protostars\ndestined to become massive stars, via High-Mass Protostellar Objects\n(HMPOs) to the final stars that are able to produce Ultracompact H{\\sc\n ii} regions (UCH{\\sc ii}s, see \\citealt{beuther2006b} for a recent\ndescription of the evolutionary sequence). The first two evolutionary\nstages are found within so-called Infrared Dark Clouds (IRDCs). While\nfor low-mass stars the chemical evolution from early molecular\nfreeze-out to more evolved protostellar cores is well studied (e.g.,\n\\citealt{bergin1997,dutrey1997,pavlyuchenkov2006,joergensen2007}),\nit is far from clear whether similar evolutionary patterns are present\nduring massive star formation.\n\nTo better understand the chemical evolution of high-mass star-forming\nregions we initiated a program to investigate the chemical properties\nfrom IRDCs to UCH{\\sc ii}s from an observational and theoretical\nperspective. We start with single-dish line surveys toward a large\nsample obtaining their basic characteristics, and then perform\ndetailed studies of selected sources using interferometers on smaller\nscales. These observations are accompanied by theoretical modeling of\nthe chemical processes. 
Long-term goals are the chemical\ncharacterization of the evolutionary sequence in massive star\nformation, the development of chemical clocks, and the identification\nof molecules as astrophysical tools to study the physical processes\nduring different evolutionary stages. Here, we present an initial\nstudy of the reactive radical ethynyl (C$_2$H) combining single-dish\nand interferometer observations with chemical modeling. Although\nC$_2$H was previously observed in low-mass cores and Photon Dominated\nRegions (e.g., \\citealt{millar1984,jansen1995}), so far it was not\nsystematically investigated in the framework of high-mass star\nformation.\n\n\\section{Observations}\n\\label{obs}\n\nThe 21 massive star-forming regions were observed with the Atacama\nPathfinder Experiment (APEX) in the 875\\,$\\mu$m window in fall 2006.\nWe observed 1\\,GHz from 338 to 339\\,GHz and 1\\,GHz in the image\nsideband from 349 to 350\\,GHz. The spectral resolution was\n0.1\\,km\\,s$^{-1}$, but we smoothed the data to\n$\\sim$0.9\\,km\\,s$^{-1}$. The average system temperatures were around\n200\\,K, each source had on-source integration times between 5 and 16\nmin. The data were converted to main-beam temperatures with forward\nand beam efficiencies of 0.97 and 0.73, respectively\n\\citep{belloche2006}. The average $1\\sigma$ rms was 0.4\\,K. The main\nspectral features of interest are the C$_2$H lines around 349.4\\,GHz\nwith upper level excitation energies $E_u/k$ of 42\\,K (line blends of\nC$_2$H$(4_{5,5}-3_{4,4})$ \\& C$_2$H$(4_{5,4}-3_{4,3})$ at\n349.338\\,GHz, and C$_2$H$(4_{4,4}-3_{3,3})$ \\&\nC$_2$H$(4_{4,3}-3_{3,2})$ at 349.399\\,GHz). The beam size was $\\sim\n18''$.\n\nThe original Submillimeter Array (SMA) C$_2$H data toward the\nHMPO\\,18089-1732 were first presented in \\citet{beuther2005c}. There\nwe used the compact and extended configurations resulting in good\nimages for all spectral lines except of C$_2$H. For this project, we\nre-worked on these data only using the compact configuration. Because\nthe C$_2$H emission is distributed on larger scales (see\n\\S\\ref{results}), we were now able to derive a C$_2$H image. The\nintegration range was from 32 to 35\\,km\\,s$^{-1}$, and the achieved\n$1\\sigma$ rms of the C$_2$H image was 450\\,mJy\\,beam$^{-1}$. For more\ndetails on these observations see \\citet{beuther2005c}.\n\n\\section{Results}\n\\label{results}\n\nThe sources were selected to cover all evolutionary stages from IRDCs\nvia HMPOs to UCH{\\sc ii}s. We derived our target list from the samples\nof \\citet{klein2005,fontani2005,hill2005,beltran2006}. Table\n\\ref{sample} lists the observed sources, their coordinates, distances,\nluminosities and a first order classification into the evolutionary\nsub-groups IRDCs, HMPOs and UCH{\\sc ii}s based on the previously\navailable data. Although this classification is only based on a\nlimited set of data, here we are just interested in general\nevolutionary trends. Hence, the division into the three main classes\nis sufficient.\n\nFigure \\ref{spectra} presents sample spectra toward one source of each\nevolutionary group. While we see several CH$_3$OH lines as well as\nSO$_2$ and H$_2$CS toward some of the HMPOs and UCH{\\sc ii}s but not\ntoward the IRDCs, the surprising result of this comparison is the\npresence of the C$_2$H lines around 349.4\\,GHz toward all source types\nfrom young IRDCs via the HMPOs to evolved UCH{\\sc ii}s. 
Table\n\\ref{sample} lists the peak brightness temperatures, the integrated\nintensities and the FWHM line-widths of the C$_2$H line blend at\n349.399\\,GHz. The separation of the two lines of 1.375\\,MHz already\ncorresponds to a line-width of 1.2\\,km\\,s$^{-1}$. We have three C$_2$H\nnon-detections (2 IRDCs and 1 HMPO), however, with no clear trend with\nrespect to the distances or the luminosities (the latter comparison is\nonly possible for the HMPOs). While IRDCs are on average colder than\nmore evolved sources, and have lower brightness temperatures, the\nnon-detections are more probable due to the relatively low sensitivity\nof the short observations (\\S\\ref{obs}). Hence, the data indicate\nthat the C$_2$H lines are detected independent of the evolutionary\nstage of the sources in contrast to the situation with other\nmolecules. When comparing the line-widths between the different\nsub-groups, one finds only a marginal difference between the IRDCs and\nthe HMPOs (the average $\\Delta v$ of the two groups are 2.8 and\n3.1\\,km\\,s$^{-1}$). However, the UCH{\\sc ii}s exhibit significantly\nbroader line-widths with an average value of 5.5\\,km\\,s$^{-1}$.\n\nIntrigued by this finding, we wanted to understand the C$_2$H spatial\nstructure during the different evolutionary stages. Therefore, we\nwent back to a dataset obtained with the Submillimeter Array toward\nthe hypercompact H{\\sc ii} region IRAS\\,18089-1732 with a much higher\nspatial resolution of $\\sim 1''$ \\citep{beuther2005c}. Albeit this\nhypercompact H{\\sc ii} region belongs to the class of HMPOs, it is\nalready in a relatively evolved stage and has formed a hot core with a\nrich molecular spectrum. \\citet{beuther2005c} showed the spectral\ndetection of the C$_2$H lines toward this source, but they did not\npresent any spatially resolved images. To recover large-scale\nstructure, we restricted the data to those from the compact SMA\nconfiguration (\\S\\ref{obs}). With this refinement, we were able to\nproduce a spatially resolved C$_2$H map of the line blend at\n349.338\\,GHz with an angular resolution of $2.9''\\times 1.4''$\n(corresponding to an average linear resolution of 7700\\,AU at the\ngiven distance of 3.6\\,kpc). Figure \\ref{18089} presents the\nintegrated C$_2$H emission with a contour overlay of the 860\\,$\\mu$m\ncontinuum source outlining the position of the massive protostar. In\ncontrast to almost all other molecular lines that peak along with the\ndust continuum \\citep{beuther2005c}, the C$_2$H emission surrounds the\ncontinuum peak in a shell-like fashion.\n\n\\section{Discussion and Conclusions}\n\nTo understand the observations, we conducted a simple chemical\nmodeling of massive star-forming regions. A 1D cloud model with a mass\nof 1200\\,M$_\\sun$, an outer radius of 0.36\\,pc and a power-law density\nprofile ($\\rho\\propto r^p$ with $p=-1.5$) is the initially assumed\nconfiguration. Three cases are studied: (1) a cold isothermal cloud\nwith $T=10$\\,K, (2) $T=50$\\,K, and (3) a warm model with a temperature\nprofile $T\\propto r^q$ with $q=-0.4$ and a temperature at the outer\nradius of 44\\,K. The cloud is illuminated by the interstellar UV\nradiation field (IRSF, \\citealt{draine1978}) and by cosmic ray\nparticles (CRP). The ISRF attenuation by single-sized $0.1\\mu$m\nsilicate grains at a given radius is calculated in a plane-parallel\ngeometry following \\citet{vandishoeck1988}. The CRP ionization rate is\nassumed to be $1.3\\times 10^{-17}$~s$^{-1}$ \\citep{spitzer1968}. 
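Before the chemical network is attached, the physical structure just described can be written down in a few lines. The sketch below is not the modeling code used here (it only evaluates the quoted density and temperature profiles), and the mean molecular mass of 2.8 m_H per H2 molecule used to convert mass density into number density is our assumption.

import numpy as np

# Physical constants / unit conversions (cgs)
M_SUN = 1.989e33          # g
PC    = 3.086e18          # cm
M_H   = 1.673e-24         # g
MU    = 2.8               # assumed mean mass per H2 molecule (incl. He)

# Cloud parameters quoted in the text
M_CLOUD = 1200.0 * M_SUN  # g
R_OUT   = 0.36 * PC       # cm
P_DENS  = -1.5            # rho ~ r**P_DENS
Q_TEMP  = -0.4            # T ~ r**Q_TEMP (warm model)
T_OUT   = 44.0            # K at R_OUT (warm model)

# Density normalisation from the enclosed mass:
# M = int_0^R 4 pi r^2 rho0 (r/R)^p dr = 4 pi rho0 R^3 / (3 + p)
rho0 = M_CLOUD * (3.0 + P_DENS) / (4.0 * np.pi * R_OUT**3)

def n_h2(r_cm):
    # H2 number density profile in cm^-3 (assumes MU * M_H grams per H2)
    return rho0 * (r_cm / R_OUT) ** P_DENS / (MU * M_H)

def temperature(r_cm, case="warm"):
    # Gas temperature for the three cases discussed in the text
    if case == "cold":
        return 10.0
    if case == "isothermal50":
        return 50.0
    return T_OUT * (r_cm / R_OUT) ** Q_TEMP   # warm, power-law profile

r = np.logspace(-2, 0, 5) * R_OUT   # a few radii out to the cloud edge
for ri in r:
    print(f"r = {ri/PC:6.3f} pc  n(H2) = {n_h2(ri):9.2e} cm^-3  "
          f"T = {temperature(ri):6.1f} K")

With the quoted mass and radius (and the assumed mean molecular mass), the H2 number density at the outer edge comes out at a few times 10^4 cm^-3, rising steeply toward the center; the ISRF attenuation and cosmic-ray chemistry are not part of this sketch.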
The\ngas-grain chemical model by \\citet{vasyunin2008} with the desorption\nenergies and surface reactions from \\citet{garrod2006} is used.\nGas-phase reaction rates are taken from RATE\\,06 \\citep{woodall2007},\ninitial abundances, were adopted from the ``low metal'' set of\n\\citet{lee1998}.\n\nFigure \\ref{model} presents the C$_2$H abundances for the three models\nat two different time steps: (a) 100\\,yr, and (b) in a more evolved\nstage after $5\\times10^4$\\,yr. The C$_2$H abundance is high toward the\ncore center right from the beginning of the evolution, similar to\nprevious models (e.g., \\citealt{millar1985,herbst1986,turner1999}).\nDuring the evolution, the C$_2$H abundance stays approximately\nconstant at the outer core edges, whereas it decreases by more than\nthree orders of magnitude in the center, except for the cold $T=10$~K\nmodel. The C$_2$H abundance profiles for all three models show\nsimilar behavior.\n\nThe chemical evolution of ethynyl is determined by relative removal\nrates of carbon and oxygen atoms or ions into molecules like CO, OH,\nH$_2$O. Light ionized hydrocarbons CH$^+_{\\rm n}$ (n=2..5) are quickly\nformed by radiative association of C$^+$ with H$_2$ and hydrogen\naddition reactions: C$^+$ $\\rightarrow$ CH$_2^+$ $\\rightarrow$\nCH$_3^+$ $\\rightarrow$ CH$_5^+$. The protonated methane reacts with\nelectrons, CO, C, OH, and more complex species at later stage and\nforms methane. The CH$_4$ molecules undergo reactive collisions with\nC$^+$, producing C$_2$H$_2^+$ and C$_2$H$_3^+$. An alternative way to\nproduce C$_2$H$_2^+$ is the dissociative recombination of CH$_5^+$\ninto CH$_3$ followed by reactions with C$^+$. Finally, C$_2$H$_2^+$\nand C$_2$H$_3^+$ dissociatively recombine into CH, C$_2$H, and\nC$_2$H$_2$. The major removal for C$_2$H is either the direct\nneutral-neutral reaction with O that forms CO, or the same reaction\nbut with heavier carbon chain ions that are formed from C$_2$H by\nsubsequent insertion of carbon. At later times, depletion and\ngas-phase reactions with more complex species may enter into this\ncycle. At the cloud edge the interstellar UV radiation\ninstantaneously dissociates CO despite its self-shielding,\nre-enriching the gas with elemental carbon.\n\nThe transformation of C$_2$H into CO and other species proceeds\nefficiently in dense regions, in particular in the ``warm'' model\nwhere endothermic reactions result in rich molecular complexity of the\ngas (see Fig.~\\ref{model}). In contrast, in the ``cold'' 10\\,K model\ngas-grain interactions and surface reactions become important. As a\nresult, a large fraction of oxygen is locked in water ice that is hard\nto desorb ($E_{\\rm des} \\sim 5500$~K), while half of the elemental\ncarbon goes to volatile methane ice ($E_{\\rm des} \\sim 1300$~K). Upon\nCRP heating of dust grains, this leads to much higher gas-phase\nabundance of C$_2$H in the cloud core for the cold model compared to\nthe warm model. The effect is not that strong for less dense regions\nat larger radii from the center.\n\nSince the C$_2$H emission is anti-correlated with the dust continuum\nemission in the case of IRAS\\,18089-1732 (Fig.\\,\\ref{18089}), we do\nnot have the H$_2$ column densities to quantitatively compare the\nabundance profiles of IRAS\\,18089-1732 with our model. 
However, data\nand model allow a qualitative comparison of the spatial structures.\nEstimating an exact evolutionary time for IRAS\\,18089-1732 is hardly\npossible, but based on the strong molecular line emission, its high\ncentral gas temperatures and the observed outflow-disk system\n\\citep{beuther2004a,beuther2004b,beuther2005c}, an approximate age of\n$5\\times10^4$\\,yr appears reasonable. Although dynamical and chemical\ntimes are not necessarily exactly the same, in high-mass star\nformation they should not differ to much: Following the models by\n\\citet{mckee2003} or \\citet{krumholz2006b}, the luminosity rises\nstrongly right from the onset of collapse which can be considered as a\nstarting point for the chemical evolution. At the same time disks and\noutflows evolve, which should hence have similar time-scales. The\ndiameter of the shell-like C$_2$H structure in IRAS\\,18089-1732 is\n$\\sim 5''$ (Fig.\\,\\ref{18089}), or $\\sim$9000\\,AU in radius at the\ngiven distance of 3.6\\,kpc. This value is well matched by the modeled\nregion with decreased C$_2$H abundance (Fig.\\,\\ref{model}). Although\nin principle optical depths and/or excitation effects could mimic the\nC$_2$H morphology, we consider this as unlikely because the other\nobserved molecules with many different transitions all peak toward the\ncentral submm continuum emission in IRAS\\,18089-1732\n\\citep{beuther2005c}. Since C$_2$H is the only exception in that rich\ndataset, chemical effects appear the more plausible explanation.\n\nThe fact that we see C$_2$H at the earliest and the later evolutionary\nstages can be explained by the reactive nature of C$_2$H: it is\nproduced quickly early on and gets replenished at the core edges by\nthe UV photodissociation of CO. The inner ``chemical'' hole observed\ntoward IRAS\\,18089-1732 can be explained by C$_2$H being consumed in\nthe chemical network forming CO and more complex molecules like larger\ncarbon-hydrogen complexes and/or depletion.\n\nThe data show that C$_2$H is not suited to investigate the central gas\ncores in more evolved sources, however, our analysis indicates that\nC$_2$H may be a suitable tracer of the earliest stages of (massive)\nstar formation, like N$_2$H$^+$ or NH$_3$ (e.g.,\n\\citealt{bergin2002,tafalla2004,beuther2005a,pillai2006}). While a\nspatial analysis of the line emission will give insights into the\nkinematics of the gas and also the evolutionary stage from chemical\nmodels, multiple C$_2$H lines will even allow a temperature\ncharacterization. With its lowest $J=1-0$ transitions around 87\\,GHz,\nC$_2$H has easily accessible spectral lines in several bands between\nthe 3\\,mm and 850\\,$\\mu$m. Furthermore, even the 349\\,GHz lines\npresented here have still relatively low upper level excitation\nenergies ($E_u/k\\sim42$\\,K), hence allowing to study cold cores even\nat sub-millimeter wavelengths. This prediction can further be proved\nvia high spectral and spatial resolution observations of different\nC$_2$H lines toward young IRDCs.\n\n\\acknowledgments{H.B. acknowledges financial support\n by the Emmy-Noether-Programm of the Deutsche Forschungsgemeinschaft\n (DFG, grant BE2578). }\n\n\n", "answers": ["21."], "length": 2103, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "238c4efe738cecd8346abfdc57707996aef30f9b43d1a577"} {"input": "对于PD3.0协议,FS312BH支持的最高诱骗电压是多少?", "context": "'无锡速芯微电子有限公司是一家集芯片 研发,销售和服务于一体的国家高新技 术企业,为客户提供高性能,高集成 度,极致体验的全协议快充芯片。 无锡速芯微电子有限公司 FastSOC Microelectronics Co.,Ltd. 
销售联系方式: 联系人:顾先生 手机:1800 185 3071 邮箱:gpp@fastsoc.com 网址:www.fastsoc.com 地址:无锡市新吴区菱湖大道200号中国物联网国际创新园E-503室 顾工微信号 速芯微公众号 免责声明:本文所述方法、方案均供客户参考,用于提示或者展示芯片应用的一种或者多种方式,不作为最终产品的实际方案。文中所描述的功能和性能指标在实 验室环境下测试得到,部分可以提供第三方测试报告,但是不保证客户产品上能获得相同的数据。本文信息只作为芯片使用的指导,不授权用户使用本公司或者其 他公司的知识产权。本文信息只作为芯片使用的指导,不承担因为客户自身应用不当而造成的任何损失。 **文中信息仅供参考,详情请联系我司获取最新资料” 无锡速芯微电子有限公司 FastSOC Microelectronics Co.,Ltd. 产品手册 2023年 \n新品快览 FS312A:PD3.0 诱骗- FS312A支持PD2.0/PD3.0最高诱骗电压:20V - FS312AE支持PD2.0/PD3.0 最高诱骗电压:20V支持Emarker模拟功能 - 封装:SOT23-5 VBUS CC1 CC2 DM DP 用电电路 4.7K 0.47uF R C C 1 V D D F U N C C C 2F S 3 1 2 B D M D P EP GND 应用图 FS8628:A+C快充协议CC2 CC1 VBUS CC2 CC1 FS312A FUNC GND VDD 4.7K GND R 用电电路 1uF GND 应用图 多口极简方案 FS8611SP*2+CCM-8611SP-A+7533B-T 双C智能降功率方案 FS8611S USB-C AC-DC 双变压器 7533B-T CCM-8611SP-A FS8611S USB-C 采用2颗FS8611SP搭配CCM-8611SP-A (MCU),7533B-T配合工作 - 支持多种协议 - 支持I2C控制 - 任意单 C 的为 35W - 双 插 降 功 率 , 三 档 功 率 智 能 配 置:27.4W+7.4W;17.4W+17.4W; 27.4W - BOM极简,成本低 FS312B:PD3.1 诱骗FS8611K*2+CCM-8611K-A+7550B-T 双C方案 - FS312BL支持PD2.0/PD3.0/PD3.1/第三方协议最高诱骗电压:20V - FS312BLE支持PD2.0/PD3.0/PD3.1/第三方协议最高诱骗电压:20V支持Emarker模拟功能 - FS312BH支持PD2.0/PD3.0/PD3.1/第三方协议最高诱骗电压:48V - FS312BHE支持PD2.0/PD3.0/PD3.1/第三方协议最高诱骗电压:48V 支持Emarker模拟功能 - 封装:DFN2x2-6L - 兼容兼容BC1.2、Apple2.4A、 QC2.0 Class A、QC3.0 Class A/B、 FCP、SCP、AFC、低压直充等 - 兼容Type-C PD2.0、Type-C PD3.0、 Type-C PD3.0 PPS、QC4.0协议 - 支持两路DP/DM - 支持CV/CC(分段CC)功能 - 支持定制PDO - 支持A+C双口工作,电压自动回5V - 支持FB/OPTO反馈 - 封装:QFN3x3-20L VPWR FB PowerSystem 100K GND R1 GND 19 VIN 17 FB FUNC1 FUNC2 20 15 18 13 PLUGIND VFB FS8628 QFN3x3-20L AGATE 47K 7.5K 47K 7.5K 1 16 8 7 3 4 5 6 10 9 11 CGATE CVBUS CC2 CC1 CDP CDM AVBUS DM DP ISP ISN 12 应用图 2 V3P3 100Ω 1u EP GND GND CVBUS TYPE- C CC2 CC1 CDP CDM CGND TYPE-A AVBUS DM DP 10n 200 AGND 5mΩ GND FS8611K USB-C AC-DC DC-DC 7550B-T CCM-8611K-A FS8611K USB-C 采用2颗FS8611K搭配CCM-8611K-A (MCU)工作,7550B-T配合工作 - 支持PD2.0/PD3.0/QC2.0/AFC/FCP - 支持PDO定制 - 任意单 C 的为 35W(可定制) - 双插18W(可定制15W/20W) - BOM极简,成本低 FS212C+ACM-212C-A+7550B-T 双C方案 FS212C USB-C AC-DC DC-DC 7550B-T ACM-212C-A FS8623B-A+C方案 AC-DC DC-DC FS8623B USB-A USB-C USB-A 采 用 1 颗 F S 2 1 2 C 搭 配 ACM-212C-A 工 作,7550B-T配合工作 - 支持PD2.0/PD3.0 - 支持PDO定制 - 任意单 C 的为20W - 双插7.5W回5V - BOM极简,成本低 采用一颗FS8623B实现A+C方案 - 兼容兼容Apple2.4A/低压直充 QC2.0 Class A/QC3.0 Class A/B/ FCP/SCP等 - 兼 容Type -C PD2.0 / PD3.0 / PD3.0PPS/QC4.0协议 - 支持PDO定制 - 双插回5V \n多口方案选型 产品选型 受电端芯片选型 速芯微现有多种多口的方案选择:A+C,C+C,C+C+A,C+C+C,C+C+A+A等方案。对于 A+C的方案,可使用1颗芯片实现,也可用多颗芯片来实现。 速芯微现有多种受电端诱骗芯片,客户可根据应用需求进行选择。 受电端诱骗芯片应用领域 筋膜枪 无线充 线材 无人机 产品型号 PD2.0 PD3.0 PD3.1 第三方协议 诱骗电压(V) 控制方式 内置Emarker 定制 封装 FS312A √ √ 5/9/12/15/20 电阻阻值 可变电压策略 SOT23-5 FS312AE √ √ 5/9/12/15/20 电阻阻值 √ (公头专用) 可变电压策略 SOT23-5 FS312BL √ √ √ √ 5/9/12/15/20 电阻阻值 可变电压策略 DFN2x2-6 FS312BLE √ √ √ √ 5/9/12/15/20 电阻阻值 √ (公头专用) 可变电压策略 DFN2x2-6 FS312BH √ √ √ √ 5/20/28/36/48 电阻阻值 可变电压策略 DFN2x2-6 FS312BHE √ √ √ √ 5/20/28/36/48 电阻阻值 √ (公头专用) 可变电压策略 DFN2x2-6 FS312LC √ √ √ 5/9/12 电阻阻值 可变第三方 协议 SSOP10 FS312HC √ √ √ 5/9/12/15/20 电阻阻值 可变第三方 协议 SSOP10 FS2711Q √ √ √ 任意设置 I2C √ QFN3x3-16 FS2711P √ √ √ 任意设置 I2C √ QFN3x3-16 FS2711PA √ √ 全协议 任意设置 I2C √ SSOP10 FS2711SW √ √ 全协议 SSOP10 FS512 √ √ 全协议 任意设置 I2C √ SSOP10 方案 类型 产品型号 单C 单A 双插 A+C方案 FS8623 20W(PPS)(可定制) A口全协议18w 5V共享3A FS8623B 20W(PPS)(可定制) A口全协议18w 5V共享3A FS8628 20W(PPS)(可定制) A口全协议18w 5V共享3A FS8611RPC+FS116DB 65W(PPS)(可定制) A口全协议18w A口:5V/2.4A C口:45W FS8628RC+FS116DB 35W(可定制) A口全协议18w A口:5V(BC1.2,Apple 2.4) C口:20W 方案类型 产品型号 单C1 单C2 C1/C2 C+C方案 FS8611RPB*2 30W(可定制) 30W(可定制) C1/C2:5V/3A(或5V/2.4A) FS8611GH*2 35W(可定制) 35W(可定制) C1/C2:18W(可定制) FS8628P*2 35W(可定制) 35W(可定制) C1/C2:17.4W可定制) FS8611KL*2 
20W(可定制) 20W(可定制) C1/C2:5V/1.5 A FS8611PC*2 35W 35W C1/C2:18W FS8611BH*2 65W(可定制) 65W(可定制) C1:45W(可定制)C2:20W(可定制) FS8628RPC+FS8611RB 45W(可定制)) 36W (可定制)) C1:30W(可定制)C2:5V/1.5A(可定制) 方案类型 产品型号 单C1 单C2 单A C1+C2 C1/C2+A C1+C2+A C+C+A FS8611S*2+FS116DB 65W(可定制) 65W( 可定制)) A口全协议18w 智能分配功率 45W+18W C1/C2:智能分配功率 A:18W(或5V1.5A) FS8612C+FS8628P 100W(可定制) 35W (可定制)) 20W C1:65W C2:20W C1+A:65W+20W C2+A:7.5W+7.5W C1:65W C2:7.5W A:7.5W 其他 \nSource-TYPE C协议芯片选型 Source-TYPE A协议芯片选型 速芯微现有多种TYPE-C的快充协议芯片,支持多种协议,支持客户定制,多样化,满 足客户对TYPE C的各种快充需求。 速芯微现有多种TYPE A快充协议芯片,支持全协议,支持定制,满足客户对A口协议的各种需 求。速芯微的TYPE-A快充协议芯片的协议丰富,FS112系列拥有多种的型号;FS116D 系列带插入指示,可搭配TYPE-C快充协议芯片,实现A+C,A+C+C,A+A+C+C等多口方 案,协议丰富,其中FS116A一般用于插入指示使用 Source-TYPE A协议芯片引脚封装图 D+ VSS FB 1 2 3 FS112 6 5 4 D- VDD FUNC GATE VIN FUNC FB LED/PLUG_IN 1 2 3 4 5 FS116D 10 DM 9 8 7 6 DP CSP CSN VSS速芯微的各TYPE-C快充协议芯片之间可搭配使用,实现多口方案,更多详情请咨 询我司工作人员。 多口降功率专用快充协议芯片:FS8611RB,FS8611RC,FS8611RPB,FS8611RPC, FS8612CP。 带I2C快充协议芯片:FS8611S,FS8611SP 产品型号 BC1.2 Apple 2.4 QC2.0 QC3.0 AFC FCP SCP HISCP 大电流直充 封装 FS112 √ √ √ √ √ √ √ SOT23-6 FS112H √ √ √ √ √ √ √ √ √ SOT23-6 FS113 √ v √ √ √ √ √ √ √ SOT23-6 FS116DP √ √ √ √ √ √ √ √ SSOP10 FS116DB √ √ √ √ √ √ √ √ SSOP10 FS116E √ √ √ √ √ √ √ √ √ SSOP10 FS116A √ √ SSOP10 其他 可定制 PD2.0 PD3.0 PD3.0 PPS 第三方协议 反馈方式 MOS CV/CC 定制 封装 FS212C √ √ FB √ SOT23-6 FS212CM √ √ FB PMOS(可省) √ SOT23-6 FS212D √ √ √ FB √ SOT23-6 FS212DH √ √ √ FB √ SOT23-6 FS212DP √ √ √ FB PMOS √ SOT23-6 FS212DG √ √ √ FB PMOS √ SOT23-6 FS8611G √ √ FB PMOS(可省) √ SOP-8 FS8611K √ √ QC2.0/AFC/FCP FB PMOS(可省) √ SOP8 FS8611J √ √ √ 全协议 FB PMOS(可省) √ SOP8 FS8611B √ √ √ 全协议 FB PMOS(可省) √ SSOP10 FS8611RB √ √ 全协议 FB PMOS √ SSOP10 FS8611RC √ √ 全协议 FB PMOS √ SSOP10 FS8611S √ √ √ 全协议 FB PMOS √ SSOP10 FS8611PP √ √ √ 全协议 FB PMOS √ SSOP10 FS8611BP √ √ √ 全协议 FB PMOS(可省) √ SSOP10 FS8611RPB √ √ √ 全协议 FB PMOS √ SSOP10 FS8611RPC √ √ √ 全协议 FB PMOS √ SSOP10 FS8611SP √ √ √ 全协议 FB PMOS(可省) SSOP10 FS8612 √ √ √ 全协议 OPTO PMOS √ √ SSOP16 FS8612B √ √ √ 全协议 FB PMOS √ √ SSOP16 FS8612BP √ √ √ 全协议 FB PMOS √ √ SSOP16 FS8612C √ √ √ 全协议 FB/OPTO PMOS √ √ QFN4x4-16 FS8612CP √ √ √ 全协议 FB/OPTO PMOS √ √ QFN4x4-16 \n'", "answers": ["48V."], "length": 898, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "910c9a02ee857c1019702818b6fa2d5c25ed432d08385ba8"} {"input": "In which electorate was Simon English elected to the New Zealand Parliament?", "context": "Sir Simon William English (born 30 December 1961) is a New Zealand former National Party politician who served as the 39th prime minister of New Zealand from 2016 to 2017. He had previously served as the 17th deputy prime minister of New Zealand and minister of finance from 2008 to 2016 under John Key and the Fifth National Government.\n\nA farmer and public servant before entering politics, English was elected to the New Zealand Parliament in as the National Party's candidate in the Wallace electorate. He was elevated to Cabinet in 1996 and in 1999 was made minister of finance, although he served for less than a year due to his party's loss at the 1999 general election. In October 2001, English replaced Jenny Shipley as the leader of the National Party (and consequently as Leader of the Opposition). He led the party to its worst defeat at the 2002 general election, and as a consequence, in October 2003 he was replaced as leader by Don Brash.\n\nIn November 2006, after Brash's resignation, English became deputy leader under John Key. 
After National's victory at the 2008 general election, he became deputy prime minister and was also made minister of finance for the second time. Under English's direction New Zealand's economy maintained steady growth during National's three terms of government. He became a list-only MP after stepping down as an electorate MP at the 2014 general election.\n\nJohn Key resigned as leader of the National Party and prime minister in December 2016. English won the resulting leadership election unopposed and was sworn in as prime minister on 12 December 2016. His tenure was only ten months, and included a three-month election campaign. In the 2017 general election, National won the largest number of seats but fell short of a majority. The parties holding the balance of power declined to support the existing government, and English was subsequently replaced as prime minister by Jacinda Ardern, leader of the Labour Party. English initially continued on as Leader of the Opposition, but resigned as leader of the National Party on 27 February 2018 and left parliament two weeks later.\n\nEarly life\nEnglish was born on 30 December 1961 at Lumsden Maternity Centre in Lumsden. He is the eleventh of twelve children of Mervyn English and Norah (née O'Brien) English. His parents purchased Rosedale, a mixed sheep and cropping farm in Dipton, Southland from Mervyn's uncle, Vincent English, a bachelor, in 1944. English was born in the maternity unit at Lumsden.\n\nEnglish attended St Thomas's School in Winton, then boarded at St. Patrick's College in Upper Hutt, where he became head boy. He played in the first XV of the school's rugby team. English went on to study commerce at the University of Otago, where he was a resident at Selwyn College, and then completed an honours degree in English literature at Victoria University of Wellington.\n\nAfter finishing his studies, English returned to Dipton and farmed for a few years. From 1987 to 1989, he worked in Wellington as a policy analyst for the New Zealand Treasury, at a time when the free market policies favoured by Labour's finance minister Roger Douglas (known collectively as \"Rogernomics\") were being implemented.\n\nEnglish joined the National Party in 1980, while at Victoria University. He served for a period as chairman of the Southland branch of the Young Nationals, and became a member of the Wallace electorate committee. After moving to Wellington, he served for periods on the Island Bay and Miramar electorate committees, respectively.\n\nFourth National Government (1990–1999)\n\nAt the 1990 general election, English stood as the National candidate in Wallace, replacing the retiring Derek Angus, and was elected with a large majority. He would hold this seat, renamed Clutha-Southland in 1996, until 2014. He and three other newly elected National MPs (Tony Ryall, Nick Smith, and Roger Sowry) were soon identified as rising stars in New Zealand politics, and at various points were dubbed the \"brat pack\", the \"gang of four\", and the \"young Turks\". In his first term in parliament, English chaired a select committee into social services. He was made a parliamentary under-secretary in 1993, serving under the Minister of Health.\n\nFirst period in cabinet (1996–1999)\nIn early 1996, English was elevated to cabinet by Prime Minister Jim Bolger, becoming the Minister for Crown Health Enterprises and Associate Minister of Education (to Wyatt Creech). He was 34 at the time, becoming the cabinet's youngest member. 
After the 1996 general election, the National Party was forced into a coalition with New Zealand First to retain government. In the resulting cabinet reshuffle, English emerged as Minister of Health. However, as a condition of the coalition agreement, NZ First's Neil Kirton (a first-term MP) was made Associate Minister of Health, effectively becoming English's deputy. This arrangement was described in the press as a \"shotgun marriage\", and there were frequent differences of opinion between the two ministers. After their relationship became unworkable, Kirton was sacked from the role in August 1997, with the agreement of NZ First leader Winston Peters.\n\nAs Minister of Health, English was responsible for continuing the reforms to the public health system that National had begun after the 1990 general election. The reforms were unpopular, and health was perceived as one of the government's weaknesses, with the health portfolio consequently being viewed as a challenge. English believed that the unpopularity of the reforms was in part due to a failure in messaging, and encouraged his National colleagues to avoid bureaucratic and money-focused language (such as references to \"balance sheets\" and \"user charges\") and instead talk about the improvements to services the government's reforms would bring. He also rejected the idea that public hospitals could be run as commercial enterprises, a view which some of his colleagues had previously promoted.\n\nBy early 1997, as dissatisfaction with Bolger's leadership began to grow, English was being touted as a potential successor, along with Jenny Shipley and Doug Graham. His age (35) was viewed as the main impediment to a successful leadership run. National's leadership troubles were resolved in December 1997, when Bolger resigned and Shipley was elected to the leadership unopposed. English had been a supporter of Bolger as leader, but Shipley reappointed him Minister of Health in her new cabinet.\n\nEnglish was promoted to Minister of Finance in a reshuffle in January 1999, a position which was at the time subordinate to the Treasurer, Bill Birch. After a few months, the pair switched positions as part of Birch's transition to retirement, with English assuming the senior portfolio. In early interviews, he emphasised his wish to be seen as a pragmatist rather than an ideologue, and said that the initiatives of some of his predecessors (Roger Douglas's \"Rogernomics\" and Ruth Richardson's \"Ruthanasia\") had focused on \"fruitless, theoretical debates\" when \"people just want to see problems solved\".\n\nOpposition (1999–2008)\n\nAfter the National Party lost the 1999 election to Helen Clark's Labour Party, English continued on in the shadow cabinet as National's spokesperson for finance. He was elected deputy leader of the party in February 2001, following the resignation of Wyatt Creech, with Gerry Brownlee being his unsuccessful opponent.\n\nLeader of the Opposition\nIn October 2001, after months of speculation, Jenny Shipley resigned as leader of the National Party after being told she no longer had the support of the party caucus. English was elected as her replacement unopposed (with Roger Sowry as his deputy), and consequently became Leader of the Opposition. 
However, he did not openly organise against Shipley, and according to The Southland Times \"there was almost an element of 'aw, shucks, I'll do it then' about Mr English's ascension\".\n\nAged 39 when he was elected, English became the second-youngest leader in the National Party's history, after Jim McLay (who was 38 when elected in 1984). He also became only the third Southlander to lead a major New Zealand political party, after Joseph Ward and Adam Hamilton. However, English failed to improve the party's performance. In the 2002 election, National suffered its worst electoral defeat ever, gaining barely more than twenty percent of the vote. English described it as \"the worst day of my political life\". Both party insiders and the general public were split as to how much to blame him for the loss, but most of the party believed that English would be able to rebuild National's support.\n\nBy late 2003, however, National's performance in opinion polls remained poor. The party had briefly increased its popularity in the year following the election, but by October its support had fallen to levels only slightly better than what it achieved in the last ballot. English also appeared in a boxing match for a charity against entertainer Ted Clarke. This did not boost his polling or that of the National party either, with suggestions that it devalued his image as a serious politician. Don Brash, former governor of the Reserve Bank and a relative newcomer to politics, began to build up support to replace English. On 28 October, Brash gained sufficient backing in Caucus to defeat English in a leadership contest.\n\nShadow cabinet roles and deputy leader\nOn 2 November 2003, when Brash changed responsibilities for certain MPs, English became National's spokesman for education, ranked at fifth place in the party's parliamentary hierarchy. He remained in parliament after the 2005 election. In his new shadow education portfolio, English performed strongly, and remained a party favourite despite his election defeat as leader in 2002, eventually being returned to the finance portfolio in August 2004 as deputy spokesman (while still retaining responsibility for education).\n\nIn November 2006, Brash resigned as leader. English was considered as a potential replacement leader (running against John Key) or deputy leader (against incumbent Gerry Brownlee) in the ensuing leadership election. However, a contest was avoided when the MPs agreed a Key/English ticket would run unopposed in a display of party unity. English took over the deputy leadership and the finance portfolio in the Key shadow cabinet.\n\nFifth National Government (2008–2017)\n\nDeputy Prime Minister and Minister of Finance (2008–2016)\n\nAt the 2008 election, English was re-elected by his electorate, winning by a margin of about 15,500 votes. He became Deputy Prime Minister of New Zealand and Minister of Finance in the fifth National Government, being sworn into office on 19 November 2008 and continued to serve in those roles until becoming Prime Minister on 12 December 2014. He was also made Minister of Infrastructure in National's first term of government and Minister responsible for Housing New Zealand Corporation and minister responsible for the New Zealand flag consideration process in its third.\n\nHe was comfortably re-elected in Clutha-Southland in the 2011 election but opted to run as a party-list candidate in 2014. 
\n\nThe pairing of John Key as leader of the National Party and English as his deputy has been compared to that of Bob Hawke and Paul Keating (in Australia) and Tony Blair and Gordon Brown (in the UK).\n\nEnglish acceded to the role of Finance Minister in the continuing wake of the financial crisis. In response to New Zealand's rising debt, English made budget deficit-reduction his main priority. His first budget outlined three focuses in New Zealand's financial recovery: \"improving the business environment and removing roadblocks to growth; investment in productive infrastructure; and improving the way government works\". One of his first acts was creating the National Infrastructure Unit, charged with formulating a plan for infrastructure projects and investments. He commissioned a government-wide spending review, with an aim to reducing government expenditure—with the exceptions of a two-year stimulus package and long-term increases on infrastructure spending.\n\nIn April 2011, the Opposition criticised English for suggesting that New Zealand businesses could use New Zealand's low wages to help it compete with Australia. The National Government campaigned for re-election in 2011 on its economic record. The Government boasted growth for five consecutive quarters up to mid-2010, totalling 1.6% of real GDP.\n\nStrong growth resulted in a surplus of $473 million for the 2015/16 financial year, projected to rise to $8.5 billion by 2020/21. In his 2016 Economic and Fiscal Update address, English stated that reducing debt and tackling the costs of the 2016 Kaikōura earthquake were higher priorities than reducing rates of tax.\n\nAllowances issue\nIn 2009, the media, including TVNZ and TV3 revealed that English was receiving about NZ$900 a week as part of a living allowance for ministers, to live in his own NZ$1.2 million Wellington home. At the time, English also received $276,200 in his annual salary as Deputy Prime Minister. It was also revealed other ministers with homes in the capital city were also claiming accommodation allowances. On 3 August 2009, Prime Minister John Key started a review of the housing allowances claimed by cabinet ministers. English subsequently paid back $12,000 and only claimed about $24,000 a year in living allowances. The Auditor-General's office said in September 2009 that they were making \"preliminary enquiries\" into parliamentary housing expenses in response to a letter of complaint from Progressive party leader Jim Anderton. Two days later English stated that he would no longer take up any housing allowance and had paid back all the allowance he had received since the November 2008 election.\n\nPrime Minister (2016–2017)\n\nJohn Key resigned on 12 December, and endorsed English as his successor in the resulting leadership election. Following the drop-out of both Judith Collins and Jonathan Coleman from the leadership election, English was sworn in as the 39th Prime Minister of New Zealand on 12 December 2016.\n\nEnglish appointed his first cabinet on 18 December. In a reshuffle, he appointed Steven Joyce to succeed him as Finance Minister, while most ministerial portfolios remained the same.\n\nIn February 2017, English did not attend Waitangi Day commemorations at the historic treaty grounds, reportedly in response to the Ngāpuhi iwi's decision to stop the Prime Minister from speaking at the marae. 
Ngāpuhi have protested the Government's negotiation of the Trans Pacific Partnership Agreement (TPPA), which the iwi believe infringes upon Māori sovereignty, and thus does not adhere to the Treaty of Waitangi. English had been invited to attend in an official capacity; his non-attendance was criticised by a Ngāpuhi elder and Opposition leader Andrew Little.\n\nIn his first overseas trip as Prime Minister, English travelled to Europe to discuss trade ties, including a prospective New Zealand–European Union free trade agreement. He first travelled to London on 13 January 2017 to meet British Prime Minister Theresa May. Discussing trade relations, English said the two nations were \"natural partners\" and would \"continue to forge ties\" after the UK's withdrawal from the EU. He also arranged to meet with London Mayor Sadiq Khan, Belgian Prime Minister Charles Michel and German Chancellor Angela Merkel. In a meeting with Merkel, English received crucial backing from Germany for a trade deal with the EU. On 16 January, English stated that his government would continue to promote TPPA, despite the United States' decision to withdraw from the agreement. He explained that Southeast Asian countries would now be treated as a priority in negotiations—he also asserted that the United States was ceding influence to China by its rejection of the trade pact.\n\nAt a press conference at the Beehive on 1 February 2017, English announced that the 2017 general election would be held on 23 September. The Prime Minister later confirmed that his party would approach ACT, United Future and the Māori Party if confidence and supply agreements were required to form a government following the election. In his second cabinet reshuffle on 24 April, English appointed Gerry Brownlee as his new Foreign Affairs Minister; he also promoted Nikki Kaye to the portfolio of Education Minister, and moved Mark Mitchell into the cabinet to become Defence Minister. The reshuffle was perceived as an election preparation.\n\nOn 13 February 2017, English welcomed Australian Prime Minister Malcolm Turnbull to Wellington. The two leaders reaffirmed their shared trade agenda, and discussed changes to the Australian citizenship pathway which will affect permanent residents originating from New Zealand.\n\nOn 19 June, it was reported that Todd Barclay, who succeeded English as MP for Clutha-Southland, had clandestinely recorded one of his employee's conversations the previous year, and that John Key's leaders' budget was used to pay a confidential settlement after the employee resigned. English admitted that he had been aware of the illegal recording and the settlement, and thus implicated in the scandal.\n\nDuring the 2017 National campaign launch, English introduced a $379 million social investment package including digital learning academies for high school students, more resources for mathematics, and boosting support for teaching second languages in schools, and maintaining National Standards in the school curriculum. Prime Minister English also sought to defend National's financial management and economic track record and claimed that the opposition Labour Party would raise taxes. 
Early opinion polling had forecast a poor showing in the election for the Labour Party, but in early August 37-year-old Jacinda Ardern took over as Labour leader and seemingly energised younger voters.\n\nAt the 2017 general election, National won the largest share of the party vote (44.4%) and the largest number of seats (56) in the House Representatives. However, National lacked enough seats to govern alone due to two of the party's support partners, the Māori Party and United Future, losing their parliamentary seats. In response, English stated that the party would be entering into talks to form a coalition with New Zealand First. Following talks with the two largest parties, New Zealand First entered a coalition arrangement with the Labour Party. English was succeeded as prime minister by Jacinda Ardern on 26 October.\n\nOpposition (2017–2018)\n\nLeader of the Opposition\nEnglish was re-elected as National Party leader on 24 October 2017. At the time of his re-election, English announced his intention to stay on as leader until the next general election. On 13 February 2018, however, he stood down as National Party leader due to personal reasons, and instructed the party to put into motion the processes to elect a new leader. He also retired from Parliament. English's resignation followed weeks of speculation that he would step aside for a new leader. On 27 February, he was succeeded as party leader by Simon Bridges as the result of the leadership election held that day.\n\nPost-premiership \nIn 2018, English joined the board of Australian conglomerate, Wesfarmers. English serves in Chairmanships of Mount Cook Alpine Salmon, Impact Lab Ltd and Manawanui Support Ltd. He is also a director of The Instillery, Centre for Independent Studies and The Todd Corporation Limited, and is a member of the Impact Advisory Group of Macquarie Infrastructure and Real Assets.\n\nPolitical and social views\n\nEnglish is regarded as more socially conservative than his predecessor, John Key. He has stated his opposition to voluntary euthanasia and physician-assisted suicide, same-sex civil unions, and the decriminalisation of prostitution. As Prime Minister he opposed any \"liberalisation\" of abortion law.\n\nIn 2004, English voted against a bill to establish civil unions for both same-sex and opposite-sex couples. In 2005, he voted for the Marriage (Gender Clarification) Amendment Bill, which would have amended the Marriage Act to define marriage as only between a man and a woman. English voted against the Marriage (Definition of Marriage) Amendment Bill, a bill that legalised same-sex marriage in New Zealand. However, in December 2016 he stated, \"I'd probably vote differently now on the gay marriage issue. I don't think that gay marriage is a threat to anyone else's marriage\".\n\nIn 2009, English voted against the Misuse of Drugs (Medicinal Cannabis) Amendment Bill, a bill aimed at amending the Misuse of Drugs Act so that cannabis could be used for medical purposes.\n\nPersonal life \nEnglish met his future wife, Mary Scanlon, at university. She was studying medicine at the time, and became a general practitioner. Both her parents were immigrants, her father being Samoan and her mother Italian, born on the island of Stromboli. 
They have six children: a daughter and five sons.\n\nEnglish is a practising Roman Catholic, but has stated that he considers his religious beliefs personal and thus separate from politics.\n\nIn June 2002, English took part in TV3's Fight For Life, a celebrity boxing fundraiser to raise money for the Yellow Ribbon anti-youth-suicide campaign, influenced by the death of a teenage nephew in 1997. He lost a split decision to former university colleague Ted Clarke.\n\nHonours\nIn the 2018 Queen's Birthday Honours, English was appointed a Knight Companion of the New Zealand Order of Merit, for services of over 27 years to the State.\n\nSee also\n\nList of New Zealand governments\nPolitics of New Zealand\n\nReferences\n\nExternal links\n\nProfile at National Party\nProfile on Parliament.nz\nReleases and speeches at Beehive.govt.nz\n\n1961 births\n21st-century New Zealand politicians\nCandidates in the 2017 New Zealand general election\nDeputy Prime Ministers of New Zealand\nLeaders of the Opposition (New Zealand)\nLiving people\nMembers of the Cabinet of New Zealand\nMembers of the New Zealand House of Representatives\nNew Zealand farmers\nNew Zealand finance ministers\nNew Zealand list MPs\nNew Zealand MPs for South Island electorates\nNew Zealand National Party MPs\nNew Zealand National Party leaders\nNew Zealand Roman Catholics\nNew Zealand people of Irish descent\nPeople educated at St. Patrick's College, Silverstream\nPeople from Dipton, New Zealand\nPeople from Lumsden, New Zealand\nPrime Ministers of New Zealand\nUniversity of Otago alumni\nVictoria University of Wellington alumni\nKnights Companion of the New Zealand Order of Merit\nNew Zealand politicians awarded knighthoods", "answers": ["The Wallace electorate."], "length": 3597, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "462011b6f9beb976aaf7b38082bcf7d70a91c3c74d2c6e95"} {"input": "When was the paper published?", "context": "Paper Info\n\nTitle: Interpretable reduced-order modeling with time-scale separation\nPublish Date: 7 March 2023\nAuthor List: Sebastian Kaltenbach (from CSE-Lab, ETH Zurich, Harvard SEAS), Phaedon-Stelios Koutsourelakis (from CSE-Lab, ETH Zurich, Harvard SEAS), Petros Koumoutsakos (from CSE-Lab, ETH Zurich, Harvard SEAS)\n\nFigure\n\nFIG. 5. Comparison between the phase-space of the reference solution (left) and the phase-space of the predictions\nFIG. 7. Comparison between predictions and reference solutions for a new initial condition for t = 1.25, 3.75, 7.5, 12.5, 20, 30 (from left to right and top to bottom). We note that with longer prediction times the uncertainty bounds increase. Despite the chaotic nature of the KS equation, the predictive posterior mean is close to the reference solution for t ≤ 12.5\n\nabstract\n\nPartial Differential Equations (PDEs) with high dimensionality are commonly encountered in computational physics and engineering. However, finding solutions for these PDEs can be computationally expensive, making model-order reduction crucial. 
We propose such a data-driven scheme that automates the identification of the time-scales involved and can produce stable predictions forward in time as well as under different initial conditions not included in the training data.\nTo this end, we combine a non-linear autoencoder architecture with a time-continuous model for the latent dynamics in the complex space. It readily allows for the inclusion of sparse and irregularly sampled training data. The learned, latent dynamics are interpretable and reveal the different temporal scales involved.\nWe show that this data-driven scheme can automatically learn the independent processes that decompose a system of linear ODEs along the eigenvectors of the system's matrix. Apart from this, we demonstrate the applicability of the proposed framework in a hidden Markov Model and the (discretized) Kuramoto-Sivashinsky (KS) equation.\nAdditionally, we propose a probabilistic version, which captures predictive uncertainties and further improves upon the results of the deterministic framework.\n\nINTRODUCTION\n\nHigh-fidelity simulations of critical phenomena such as ocean dynamics and epidemics have become essential for decision-making. They are based on physically-motivated PDEs expressing system dynamics that span multiple spatiotemporal scales and which necessitate cumbersome computations. In recent years there has been increased attention to the development of data-driven models that can accelerate the solution of these PDEs as well as reveal salient, lower-dimensional features that control the long-term evolution.\nIn most cases, data-driven reduced-order models are not interpretable. In particular, models based on neural networks, despite good predictive capabilities, offer a black-box description of the system dynamics. A possible remedy is applying symbolic regression to the learned neural-network representation, but this adds additional computational cost due to the two-step procedure.\nA number of frameworks such as SINDy allow one to learn interpretable dynamics, but they rely on the a priori availability of lower-dimensional descriptors and of time derivatives, which can be very noisy for both simulation and experimental data. Other frameworks are tailored to specific problems such as molecular dynamics.\nHere, we present a framework that only needs the values of the observables, and not their derivatives, as training data and is capable of identifying interpretable latent dynamics. The deployment of interpretable latent dynamics ensures that the conservation of important properties of the system is reflected in the reduced-order model.\nThe present method is related to approaches based on the Koopman-operator extended Dynamic Mode Decomposition (eDMD), but it uses continuous, complex-valued latent-space dynamics and only requires one scalar variable per latent dimension to describe them. Therefore we do not have to enforce any parametrization of the Koopman matrix.\nThe time-continuous formulation moreover allows us to incorporate sparse and irregularly sampled training data and enables fast generation of predictions after the training phase. By using a complex-valued latent space we can also capture harmonic effects and reduce the number of latent variables needed. 
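To make the role of a complex-valued latent coordinate concrete, here is a minimal numerical illustration (not taken from the paper; the value of λ is an arbitrary example): a single latent variable evolving as z(t) = z0·exp(λt), where the real part of λ controls growth or decay and the imaginary part the oscillation frequency.

```python
import numpy as np

# Illustrative only: one complex latent coordinate z(t) = z0 * exp(lambda * t).
# Re(lambda) < 0 gives decay, Re(lambda) > 0 growth; Im(lambda) sets the oscillation frequency.
lam = complex(-0.5, 2.0)          # assumed example value, not taken from the paper
z0 = 1.0 + 0.0j

t = np.linspace(0.0, 10.0, 200)
z = z0 * np.exp(lam * t)          # closed-form solution of dz/dt = lam * z

amplitude = np.abs(z)             # decays as exp(Re(lam) * t)
phase = np.angle(z)               # advances at Im(lam) radians per unit time
print(amplitude[-1], phase[-1])
```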
Linear and non-linear autoencoders are used to map the observed, high-dimensional time series to the lower-dimensional, latent representation, and we identify the autoencoder as well as the latent dynamics simultaneously by optimizing a combined loss function.\nHence the two tasks of dimensionality reduction and discovery of the reduced dynamics are unified, while other frameworks treat the two parts separately. Apart from using an architecture based on autoencoders to identify the latent space, projection-based methods could also be employed. We are also proposing a probabilistic version of our algorithm that makes use of probabilistic Slow Feature Analysis.\nThis allows for a latent representation that, apart from being time-continuous, can quantify the predictive uncertainty and hierarchically decompose the dynamics into their pertinent scales while promoting the discovery of slow processes that control the system's evolution over long time horizons. The rest of the paper is structured as follows: We introduce the methodological framework as well as algorithmic details in section II.\nParticular focus is paid on the interpretability of the inferred lower-dimensional dynamics. In section III we present three numerical illustrations, i.e. a system of linear ODEs, a hidden Markov Model and the discretized KS-equation. We then present in section IV the probabilistic extension of the framework and apply it to the KS-equation.\nWe conclude with a summary and a short discussion about possible next steps.\n\nMETHODOLOGY\n\nWe introduce the autoencoders deployed in this work, followed by the interpretable latent space dynamics, and discuss the training process. We consider data from high-dimensional time series $x_n \in \mathbb{R}^f$ with n = 1, ..., T. We remark that the intervals between the different states do not need to be uniformly spaced.\n\nAutoencoder\n\nA core assumption of the method is that each high-dimensional state $x_n$ can be compressed to a lower-dimensional representation $z_n \in \mathbb{C}^c$ with $c \ll f$. We identify this lower-dimensional representation by an autoencoder consisting of a parameterized encoder and decoder. The encoder maps the high-dimensional representation to the latent space as $z_n = \mathcal{E}_{\theta}(x_n)$.\nThe latent space is complex-valued. The decoder reconstructs the high-dimensional representation based on the latent variables as $\tilde{x}_n = \mathcal{D}_{\theta}(z_n)$. We denote the parameters of the encoder as well as the decoder by θ. As discussed later in Section II C, both sets of parameters are optimized simultaneously during training and therefore there is no need to distinguish between them.\n\nInterpretable Latent Space Dynamics\n\nWe employ a propagator in the latent space to capture the reduced-order dynamics of the system. In contrast to other time-extended variational autoencoder frameworks, our representation uses complex-valued latent variables. In addition, the latent variables are treated independently. The latter feature enables us to have interpretable latent dynamics as well as a model that is especially suitable for training in the small-data regime due to the small number of required parameters.\nThis is in contrast to temporal propagators such as LSTMs.
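The paper does not provide architecture code; the following is a minimal sketch, in PyTorch, of an encoder/decoder pair of the kind described above, with the complex latent vector represented by separate real and imaginary parts. Layer sizes, the hidden width, and the ReLU activations are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class ComplexLatentAutoencoder(nn.Module):
    """Sketch: maps x in R^f to z in C^c and back.
    The complex latent vector is represented by 2*c real outputs (real and imaginary parts)."""

    def __init__(self, f_dim: int, c_dim: int, hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(f_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * c_dim),   # -> [Re(z), Im(z)]
        )
        self.decoder = nn.Sequential(
            nn.Linear(2 * c_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, f_dim),
        )

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        re, im = self.encoder(x).chunk(2, dim=-1)
        return torch.complex(re, im)        # z in C^c

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.decoder(torch.cat([z.real, z.imag], dim=-1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(x))
```

A call such as `model = ComplexLatentAutoencoder(f_dim=64, c_dim=5)` (the dimensions here are example values) followed by `model(x)` on a batch of snapshots then yields the reconstruction term used in the combined loss.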
For each dimension i of the latent variable z we are using the following continuous ODE in the complex plane: $\frac{dz_i}{dt} = \lambda_i z_i$ with $\lambda_i \in \mathbb{C}$. By solving this ODE, we can define the propagation operator $z_{n+1} = e^{\lambda \odot \Delta t_n} \odot z_n$. Here, λ is a vector containing all the individual λ's and $\Delta t_n$ indicates the time-step between the latent states.\nThe symbol ⊙ is used to indicate component-wise multiplication. We remark that the latent variables and the parameters governing the temporal evolution are complex numbers and their role in describing the system dynamics is similar to that of an eigenvalue. The real part is associated with growth and decay, whereas the imaginary part represents the periodic component.\nThis approach has similarities with Koopman-operator based methods and the extended dynamic mode decomposition. In contrast to the methods mentioned before, we are using a continuous formulation in the latent space that allows us to incorporate scarce and irregularly sampled training data, and we rely directly on complex numbers in the latent space.\n\nTraining and Predictions\n\nWe optimize a loss function (Eq. 5) that combines a reconstruction loss with a loss associated with the error of our learned propagator in the latent space. We note that we could directly incorporate mini-batch training by only taking the summation over a subset of the N available training data.\nFor new predictions of unseen states, we use the encoder to generate a latent representation which is then advanced in time by the learned propagator. At a designated time step we use the decoder to reconstruct the high-dimensional solution.\n\nNUMERICAL ILLUSTRATIONS\n\nWe applied our algorithm to three systems. First, we show that the algorithm is capable of exactly reproducing the solution of a linear ODE and of identifying its eigenvalues.\nAfterwards we apply the framework to a high-dimensional process generated by a complex latent dynamics, which is correctly identified. As a final test case, we apply the algorithm to the Kuramoto-Sivashinsky (KS) equation.\n\nLinear ODE\n\nWe consider a two-dimensional linear ODE system for the state $x = (y_1, y_2)^T$. Based on the obtained training data we run our algorithm using a linear encoder and decoder structure as well as two latent variables z. The loss function was optimized using the Adam algorithm. As we consider a linear ODE we can analytically compute the eigenvalues involved and compare them with the parameters λ identified by our algorithm.\nWe observe in Figure that the algorithm was able to recover the correct values, i.e. the eigenvalues 7 and 3 of the given linear ODE. The system does not have a periodic component and the two imaginary parts correctly go to zero, whereas the real parts converge to the reference values. Moreover, for the linear mapping between our latent variables z and the training data, we are able to identify a matrix consisting of multiples of the eigenvectors (1,1) and (1,-1), and thus the correct solution.\nThis example was chosen to show that the algorithm is able to quickly identify the exact solution of a linear ODE in terms of its linearly independent components.\n\nHidden multiscale dynamics\n\nWe consider eight-dimensional synthetic time series data produced by an underlying two-dimensional complex-valued process, described in detail below. 
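The precise form of the objective in Eq. (5) is not shown above; the sketch below gives one way a combined objective of the kind described (reconstruction error plus latent propagation error) could be implemented in PyTorch. The equal weighting of the two terms, the use of consecutive snapshot pairs, and all layer sizes are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

# Example dimensions (mirroring the later KS setup, but illustrative only).
f_dim, c_dim = 64, 5
encoder = nn.Sequential(nn.Linear(f_dim, 128), nn.ReLU(), nn.Linear(128, 2 * c_dim))
decoder = nn.Sequential(nn.Linear(2 * c_dim, 128), nn.ReLU(), nn.Linear(128, f_dim))
lam_re = nn.Parameter(torch.zeros(c_dim))     # Re(lambda): growth/decay rates
lam_im = nn.Parameter(torch.zeros(c_dim))     # Im(lambda): oscillation frequencies

def encode(x):                                # (batch, f_dim) -> complex (batch, c_dim)
    re, im = encoder(x).chunk(2, dim=-1)
    return torch.complex(re, im)

def decode(z):
    return decoder(torch.cat([z.real, z.imag], dim=-1))

def loss(x_n, x_np1, dt):
    """x_n, x_np1: snapshots at consecutive (possibly non-uniform) times; dt: time gaps (batch,)."""
    lam = torch.complex(lam_re, lam_im)
    z_n, z_np1 = encode(x_n), encode(x_np1)
    recon = ((decode(z_n) - x_n) ** 2).mean() + ((decode(z_np1) - x_np1) ** 2).mean()
    z_pred = torch.exp(lam * dt.view(-1, 1)) * z_n        # component-wise latent propagation
    prop = (torch.abs(z_np1 - z_pred) ** 2).mean()
    return recon + prop                                   # assumed equal weighting

params = list(encoder.parameters()) + list(decoder.parameters()) + [lam_re, lam_im]
opt = torch.optim.Adam(params, lr=1e-3)
# one optimisation step: opt.zero_grad(); l = loss(x_n, x_np1, dt); l.backward(); opt.step()
```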
In particular, the data points x are generated by first solving for the temporal evolution of the two complex-valued processes $p_1$ and $p_2$ and then mapping to the eight-dimensional space by using a randomly sampled linear mapping W.\nOne of the two processes used to generate the data is chosen to be much slower than the other one and both processes have a periodic component; the second process evolves according to $\frac{dp_2}{dt} = (-0.9 + 1.5i)\,p_2$ (8). As training data we consider 40 time series with 150 data points each, obtained by simulating the described processes for a maximum of t = 15 s and then sampling from the obtained data points.\nHence the training data consists of 40 time series, each consisting of 150 observations of x at a uniform time-step $\Delta t = 0.0025$. The autoencoder employed consists of one linear layer for both the decoder and the encoder. The model is trained for 5000 iterations using the Adam optimizer and a learning rate of $10^{-3}$.\nThe results for the convergence of the parameters $\lambda_1$ and $\lambda_2$ can be found in Figure . We note that the process which decays more slowly, and is thus more responsible for the long-term evolution of the system, has a higher convergence rate than the faster process. With the obtained parameters λ as well as the trained autoencoder, we compute predictions based on the last time step used for training, i.e. we apply the encoder to obtain our latent representation and then use the latent dynamics to advance the latent representation in time.\nAfterwards, we employ the decoder to reconstruct the full high-dimensional system. The results can be found in Figure and show very good agreement between predictions and reference data. This example shows that our model is successfully able to carry out dimensionality reduction and moreover indicates that the convergence rates of the latent processes can differ.\nThe latter is relevant when training models, as for accurate predictions all latent processes and their dynamics should be converged.\n\nKuramoto-Sivashinsky\n\nFinally, we applied our algorithm to the KS equation, $\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial y} + \frac{\partial^2 u}{\partial y^2} + \mu \frac{\partial^4 u}{\partial y^4} = 0$, and aim to identify a reduced-order model for the solution u(y, t). We employed periodic boundary conditions, µ = 1 and a domain size y ∈ [0, 22]. For this domain size, the KS equation exhibits a structurally stable chaotic attractor, as discussed in the literature (the black lines in the corresponding figure divide the area for which training data was given from the area without training data). The equation is discretized in space using a discretization step of 22/64, resulting in a state vector x of dimension 64 and a nonlinear system of coupled ODEs. This is solved using a stiff fourth-order solver. We employed a non-linear encoder and decoder with four fully-connected layers each and ReLU activation functions, as well as dropout layers between the fully-connected layers.\nWe trained the model for 200000 iterations using Adam and a learning rate of $5 \cdot 10^{-4}$, assuming a five-dimensional latent space. We obtained the λ's in Figure . Four latent variables have λ's close to zero and thus slow temporal dynamics that are responsible for the long-term evolution, whereas one latent variable is quickly decaying.\nBased on the obtained parameters, we make predictions for an unseen initial condition not contained in the training data. We are able to reconstruct the correct phase space based on our predictions despite only using a very limited amount of training data. 
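For concreteness, one possible realisation of the spatial discretization described above (domain [0, 22], 64 grid points, periodic boundary conditions) is sketched below. The use of central finite differences and of scipy's BDF integrator is an illustrative assumption, since the text only states that a stiff fourth-order solver was used; the initial condition is likewise arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

L, N, mu = 22.0, 64, 1.0
dy = L / N

def ks_rhs(t, u):
    # Periodic finite differences for u_t = -u*u_y - u_yy - mu*u_yyyy
    u_y = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dy)
    u_yy = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dy**2
    u_yyyy = (np.roll(u, -2) - 4 * np.roll(u, -1) + 6 * u
              - 4 * np.roll(u, 1) + np.roll(u, 2)) / dy**4
    return -u * u_y - u_yy - mu * u_yyyy

y = np.linspace(0.0, L, N, endpoint=False)
u0 = 0.1 * np.cos(2 * np.pi * y / L) * (1 + np.sin(2 * np.pi * y / L))  # arbitrary smooth start
sol = solve_ivp(ks_rhs, (0.0, 50.0), u0, method="BDF", t_eval=np.linspace(0.0, 50.0, 500))
snapshots = sol.y.T   # (time, 64) state vectors, i.e. candidate x_n training data
```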
The results for the phase space can be seen in Figure .\nAlthough the small-scale fluctuations in the temporal dynamics are not well captured, the model identifies the correct manifold with good accuracy compared to the reference solution. All phase spaces were obtained by applying a finite-difference operator to the data or predictions. These results are in accordance with earlier work whose LSTM-based temporal dynamic model was also able to find the correct phase space but not to track the actual dynamics for long-term predictions.\nOur model is not able to account for noise in the temporal evolution and thus dealing with chaotic, small-scale fluctuations is challenging. We believe that a probabilistic version of our algorithm could be advantageous here.\n\nPROBABILISTIC EXTENSION\n\nThis section contains a fully probabilistic formulation of the deterministic model discussed before.\nWe replace the autoencoder with a variational autoencoder and the ODE in the latent space with an SDE. The loss function which we optimize is the Evidence Lower Bound (ELBO).\n\nModel Structure\n\nWe postulate the following relations for our probabilistic model, using an Ornstein-Uhlenbeck (OU) process for each dimension i of the latent space and a Wiener process $W_t$ in the latent space: $dz_{t,i} = \lambda_i z_{t,i}\,dt + \sigma_i\,dW_{t,i}$. We again assume that the latent variables $z_t$ are complex-valued and a priori independent. Complex variables were chosen as their evolution includes harmonic components which are observed in many physical systems.\nWe assume initial conditions $z_{0,i} \sim \mathcal{CN}(0, \sigma^2_{0,i})$. The total parameters associated with the latent space dynamics of our model are thus $\{\sigma^2_{0,i}, \sigma^2_i, \lambda_i\}_{i=1}^{c}$ and, together with all parameters responsible for the decoder mapping G (see next section), will be denoted by θ. These parameters, along with the state variables $z_t$, have to be inferred from the data $x_t$.\nBased on probabilistic Slow Feature Analysis (SFA), we set $\sigma^2_i = -2\,\Re(\lambda_i)$ and $\sigma^2_{0,i} = 1$. As a consequence, a priori, the latent dynamics are stationary. A derivation and reasoning for this choice can be found in Appendix A. Hence the only independent parameters are the $\lambda_i$, the imaginary part of which can account for periodic effects in the latent dynamics.\n\nVariational Autoencoder\n\nWe employ a variational autoencoder to account for a probabilistic mapping from the lower-dimensional representation $z_n$ to the high-dimensional system $x_n$. In particular, we are employing a probabilistic decoder. The encoder is used to infer the state variables z based on the given data and is thus defined in the inference and learning section.\n\nInference and Learning\n\nGiven the probabilistic relations above, our goal is to infer the latent variables $z_{0:T}$ as well as all model parameters θ. We follow a hybrid Bayesian approach in which the posterior of the state variables is approximated using amortized Variational Inference and Maximum-A-Posteriori (MAP) point estimates for θ are computed.\nThe application of Bayes' rule for each data sequence $x_{0:T}$ leads to the posterior $p(z_{0:T}, \theta \mid x_{0:T}) \propto p(x_{0:T} \mid z_{0:T}, \theta)\, p(z_{0:T} \mid \theta)\, p(\theta)$, where p(θ) denotes the prior on the model parameters. In the context of variational inference, we use a factorized approximate posterior $q_\phi(z_{0:T} \mid x_{0:T})$, i.e. we infer only the mean µ and variance σ for each state variable based on the given data points.\nThis conditional density used for inference is the encoder-counterpart to the probabilistic decoder defined in the section before. 
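A sketch of the resulting bound, assuming the standard variational-inference decomposition for this hybrid setup (amortized posterior over the states, MAP point estimate for θ); the paper's own derivation is in its Appendix A/B and may differ in its exact form:

$$\mathcal{F}(q_\phi, \theta) \;=\; \mathbb{E}_{q_\phi(z_{0:T} \mid x_{0:T})}\!\left[\log p(x_{0:T} \mid z_{0:T}, \theta)\right] \;-\; \mathrm{KL}\!\left(q_\phi(z_{0:T} \mid x_{0:T}) \,\|\, p(z_{0:T} \mid \theta)\right) \;+\; \log p(\theta),$$

which lower-bounds $\log p(x_{0:T}, \theta)$.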
It can be readily shown that the optimal parameter values are found by maximizing the Evidence Lower Bound (ELBO) F(q φ (z 0:T ), θ) which is derived in Appendix B. We compute Monte Carlo estimates of the gradient of the ELBO with respect to φ and θ with the help of the reparametrization trick and carry out stochastic optimization with the ADAM algorithm .\n\nResults for the probabilistic extension\n\nWe applied our probabilistic version to the KS-equation. We used the same settings as for the deterministic approach but considered up to 10 complex latent variables. The obtained λ's are in Figure . The probabilistic model allows us to quantify the uncertainty in predictions. In Figure predictions for various time-steps and the respective uncertainty bounds are shown for an unseen initial condition.\nDue to the chaotic nature of the KS-equation and the small amount of training data, the underlying linear dynamic of our model is only able to capture the full dynamics for a limited time horizon. Fortunately, due to the probabilistic approach the model is capable of capturing chaotic fluctuations with increasingly wide uncertainty bounds.\nWe also computed the phase space representation for the KS-equation based on the predictions obtained by our model and compare it with the reference solution. The probabilistic model identifies the correct manifold with a better accuracy than the deterministic model. As some of the small-scale fluctuations are accounted as noise, the resulting manifold is more concentrated at the origin and the obtained values are slightly smaller than the reference manifold although their shape is very similar.", "answers": ["The paper was published on 7 March 2023."], "length": 3080, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "46b15f1200c46251053ec3dfa806dbdf515eb34053a5e0d1"} {"input": "When was Weep Not, Child first published?", "context": "Weep Not, Child is a 1964 novel by Kenyan author Ngũgĩ wa Thiong'o. It was his first novel, published in 1964 under the name James Ngugi. It was among the African Writers Series. It was the first English language|English novel to be published by an East African. Thiong'o's works deal with the relationship between Africans and white settlers in colonial Kenya, and are heavily critical of colonial rule. Specifically, Weep Not, Child deals with the Mau Mau Uprising, and \"the bewildering dispossession of an entire people from their ancestral land.\" Ngũgĩ wrote the novel while he was a student at Makerere University.\n\nThe book is divided into two parts and eighteen chapters. Part one deals mostly with the education of Njoroge, while part two deals with the rising Mau Mau movement.\n\nPlot summary\n\nNjoroge, a little boy, is urged to attend school by his mother. He is the first one of his family able to go to school. His family lives on the land of Jacobo, an African made rich by his dealings with white settlers, namely Mr. Howlands, the most powerful land owner in the area. Njoroge's brother Kamau works as an apprentice to a carpenter, while Boro, the eldest living son, is troubled by his experiences while in forced service during World War II, including witnessing the death of his elder brother. Ngotho, Njoroge's father and a respected man in the surrounding area, tends Mr. Howlands' crops, but is motivated by his passion to preserve his ancestral land, rather than for any compensation or loyalty.\n\nOne day, black workers call for a strike to obtain higher wages. 
Ngotho is ambivalent about participating in the strike because he fears he will lose his job. However, he decides to go to the gathering, even though his two wives do not agree. At the demonstration, there are calls for higher wages. Suddenly, the white police inspector brings Jacobo to the gathering to pacify the native people. Jacobo tries to put an end to the strike. Ngotho attacks Jacobo, and the result is a riot where two people are killed. Jacobo survives and swears revenge. Ngotho loses his job and Njoroge’s family is forced to move. Njoroge’s brothers fund his education and seem to lose respect for their father.\n\nMwihaki, Jacobo's daughter and Njoroge's best friend, enters a girls' only boarding school, leaving Njoroge relatively alone. He reflects upon her leaving, and realizes that he was embarrassed by his father's actions towards Jacobo. For this reason, Njoroge is not upset by her exit and their separation. Njoroge switches to another school.\n\nFor a time, everyone's attention is focused on the upcoming trial of Jomo Kenyatta – a revered leader of the movement. Many blacks think that he is going to bring forth Kenya’s independence. But Jomo loses the trial and is imprisoned. This results in further protests and greater suppression of the black population.\n\nJacobo and a white landowner, Mr. Howlands, fight against the rising activities of the Mau Mau, an organization striving for Kenyan economic, political, and cultural independence. Jacobo accuses Ngotho of being the leader of the Mau Mau and tries to imprison the whole family. Meanwhile, the situation in the country is deteriorating. Six black men are taken out of their houses and executed in the woods.\n\nOne day Njoroge meets Mwihaki again, who has returned from boarding school. Although Njoroge had planned to avoid her due to the conflict between their fathers, their friendship is unaffected. Njoroge passes an important exam that allows him to advance to High School. His village is proud of him, and collects money to pay Njoroge's High School tuition.\n\nSeveral months later, Jacobo is murdered in his office by a member of the Mau Mau. Mr. Howlands has Njoroge removed from school for questioning. Both father and son are brutally beaten before release and Ngotho is left barely alive. Although there doesn't seem to be a connection between Njoroge's family and the murder, it is eventually revealed that Njoroge's brothers are behind the assassination, and that Boro is the real leader of the Mau Mau. Ngotho soon dies from his injuries and Njoroge finds out that his father was protecting his brothers. Kamau has been imprisoned for life. Only Njoroge and his two mothers remain free, and Njoroge is left as the sole provider of his two mothers. Njoroge fears that he cannot make ends meet; he gives up hope of continuing in school and loses faith in God.\n\nNjoroge asks Mwihaki's for support, but she is angry because of her father’s death. When he finally pledges his love to her, she refuses to leave with him, realizing her obligation to Kenya and her mother. Njoroge decides to leave town and makes an attempt at suicide; however, he fails when his mothers find him before he is able to hang himself. The novel closes with Njoroge feeling hopeless, and ashamed of cowardice.\n\nCharacters in Weep Not, Child\n Njoroge: the main character of the book whose main goal throughout the book is to become as educated as possible.\n Ngotho: Njoroge's father. 
He works for Mr.Howlands and is respected by him until he attacks Jacobo at a workers strike. He is fired and the family is forced to move to another section of the country. Over the course of the book his position as the central power of the family weakened, to the point where his self-realization that he has spent his whole life waiting for the prophecy (that proclaims the blacks will be returned their land) to come true rather than fighting for Kenyan independence, leads to his depression.\n Nyokabi and Njeri: the two wives of Ngotho. Njeri is Ngotho's first wife, and mother of Boro, Kamau, and Kori. Nyokabi is his second wife, and the mother of Njoroge and Mwangi.\n Njoroge has four brothers: Boro, Kamau, Kori and Mwangi (who is Njoroge's only full brother, who died in World War II).\n Boro: Son of Njeri who fights for the Allies in World War II. Upon returning his anger against the colonial government is compounded by their confiscation of the his land. Boro's anger and position as eldest son leads him to question and ridicule Ngotho, which eventually defeats their father's will (upon realizing his life was wasted waiting and not acting). It is eventually revealed that Boro is the leader of the Mau Mau (earlier alluded to as \"entering politics\") and murders Mr.Howlands. He is caught by police immediately after and is scheduled to be executed by the book's end. It is highly likely that it is also Boro who kills Jacobo.\n Mwihaki: Njoroge's best friend (and later develops into his love interest). Daughter of Jacobo. When it is revealed that his family killed Jacobo (most likely Boro), Mwihaki distances herself from Njoroge, asking for time to mourn her father and care for her mother.\n Jacobo: Mwihaki's father and an important landowner. Chief of the village.\n Mr. Howlands: A white settler who emigrated to colonial Kenya and now owns a farm made up of land that originally belonged to Ngotho's ancestors. Has three children: Peter who died in World War II before the book's beginning, a daughter who becomes a missionary, and Stephen who met Njoroge while the two were in high school.\n\nThemes and motifs\nWeep Not, Child integrates Gikuyu mythology and the ideology of nationalism that serves as catalyst for much of the novel's action. The novel explores the negative aspects of colonial rule over Kenya. Njoroge's aspiration to attend university is frustrated by both the violence of the Mau Mau rebels and the violent response of the colonial government. This disappointment leads to his alienation from his family and ultimately his suicide attempt.\n\nThe novel also ponders the role of saviours and salvation. The author notes in his The River Between: \"Salvation shall come from the hills. From the blood that flows in me, I say from the same tree, a son shall rise. And his duty shall be to lead and save the people.\" Jomo Kenyatta, the first prime minister of Kenya, is immortalised in Weep Not, Child. The author says, \"Jomo had been his (Ngotho's) hope. Ngotho had come to think that it was Jomo who would drive away the white man. 
To him, Jomo stood for custom and traditions purified by grace of learning and much travel.\" Njoroge comes to view Jomo as a messiah who will win the struggle against the colonial government.\n\nSee also\n\nThings Fall Apart\nDeath and the King's Horseman\n\nReferences\n\nExternal links\nOfficial homepage of Ngũgĩ wa Thiong'o\nBBC profile of Ngũgĩ wa Thiong'o\nWeep Not, Child at Google Books\n\nBritish Empire in fiction\nNovels set in colonial Africa\nHistorical novels\nKenyan English-language novels\nNovels by Ngũgĩ wa Thiong'o\nNovels set in Kenya\n1964 novels\nHeinemann (publisher) books\nPostcolonial novels\nAfrican Writers Series\n1964 debut novels", "answers": ["Weep Not, Child was first published in 1964."], "length": 1489, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "25ebcea4435f88495b4175446d1d7d6dacb1034a8f861ca5"} {"input": "How is the vacuum processing system configured in terms of the arrangement of the vacuum processing apparatus?", "context": "PROBLEM TO BE SOLVED: To provide a vacuum processor which can suppress the rise of manufacture cost while coping with the enlargement of diameter of a sample, and besides is excellent in maintainability. SOLUTION: This vacuum processor comprises an air loader 1 which is equipped with a plurality of juxtaposed cassette stands 2a and 2b and a carrier 13 for carrying a wafer 20 from or to the cassette stands, a vacuum loader 7 which is equipped with etching treatment chambers 11 and 11b for processing the wafer and a vacuum carriage chamber 16 connected to them through gate valves 15a and 15b, and a lock device 6 consisting of a load lock chamber 6b and an unload lock chamber 6b equipped with gate valves 12a, 12b, 12c, and 12d for connecting the said carrier 13 with the vacuum carriage chamber 16. For the etching treatment chambers, two are provided only on the opposite side of the vacuum carriage chamber symmetrically about the axis line A passing the center, and besides the arrangement positions of the two etching treatment chambers are at an acute angle on the opposite side of the vacuum carriage chamber.\nエッチング,CVD(化学的気相成長),スパッタリング,アッシング,リンサ(水洗)等の枚葉処理をするのに好適な真空処理装置とそれを用いて半導体デバイスを製造する半導体製造ラインに関するものである。 Those etching, CVD (chemical vapor deposition), sputtering, ashing, to rinser (water washing) suitable vacuum processing apparatus for single wafer processing, such as a semiconductor manufacturing line for manufacturing a semiconductor device using the same .\n【従来の技術】試料を処理する真空処理装置は、大別すると、カセットブロックと真空処理ブロックから構成されており、カセットブロックは、半導体製造ラインのベイ通路に面して長手方向に伸びるフロントを有し、試料用のカセットや試料のオリエンテーションを合わせるアライメントユニットと、大気ロボットがある。 BACKGROUND OF THE INVENTION Vacuum processing apparatus for processing a sample, the roughly is constituted by a cassette block and a vacuum processing block, the cassette block, a front extending in the longitudinal direction facing the bay aisle of a semiconductor manufacturing line a, an alignment unit for aligning the cassette and sample orientation for sample, there is an atmospheric robot. 
真空処理ブロックには、ロード側ロードロック室,アンロード側ロードロック室,真空処理室,後真空処理室,真空ポンプ及び真空ロボット等が設けられている。 The vacuum processing block, the load-side load lock chamber, the unload side load lock chamber, the vacuum processing chamber, a rear vacuum processing chamber, a vacuum pump and a vacuum robot and the like.\n【0003】これらの真空処理装置では、カセットブロックのカセットから取り出された試料が、大気ロボットにより真空処理ブロックのロードロック室まで搬送される。 In these vacuum processing apparatus, a sample taken from the cassette in the cassette block is transported to the load lock chamber in the vacuum processing block by the atmospheric robot. ロードロック室から真空ロボットによりさらに処理室に搬送され、電極構造体上にセットされた試料は、プラズマエッチング等の処理がなされる。 Is conveyed from the load lock chamber to the further processing chamber by the vacuum robot, the sample is set on an electrode structure, processing such as plasma etching is performed. その後、必要に応じて後真空処理室に搬送,処理される。 Thereafter, the conveyance to the rear vacuum processing chamber as necessary and processed. 処理済みの試料は、真空ロボット及び大気ロボットによりカセットブロックのカセットに搬送される。 Processed sample is conveyed to the cassette of the cassette block by the vacuum robot and the atmospheric robot.\n【0004】試料をプラズマエッチング処理する真空処理装置の例としては、例えば特公昭61−8153号公報,特開昭63−133532号公報,特公平6−30369号公報,特開平 Examples of the sample vacuum processing apparatus for plasma etching treatment, for example Japanese Patent Publication 61-8153, JP-Sho 63-133532 and JP Kokoku 6-30369, JP-A No.\n6−314729号公報,特開平6−314730号公報,米国特許第 6-314729, JP-A No. 6-314730, JP-U.S. Patent No.\n5,314,509号明細書および5,784,799号明細書に記載されたようなものがある。 There are such as described in Pat and 5,784,799 Pat 5,314,509.\n509号明細書に記載された装置は、真空処理ブロックの中央付近に真空ロボット、その周囲に3個の処理室が同心状に配置され、真空ロボットとカセットブロックの間に、ロード側ロードロック室,アンロード側ロードロック室が設けられている。 Device described in 509 Pat are vacuum robot in the vicinity of the center of the vacuum processing block, three process chambers around it are arranged concentrically, between the vacuum robot and the cassette block, the load-side load-lock chamber , unload side load lock chamber is provided. これらの装置では、大気ロボットや真空ロボットの搬送アームの回転角度が大きく従って装置全体の必要床面積が大きいという問題がある。 In these devices, there is a problem that needs floor space for the entire rotation angle is large therefore device of the transfer arm of the atmospheric robot and the vacuum robot is large.\n【0006】一方、真空処理装置の真空処理ブロック内の処理室や真空ポンプその他各種の配管機器については、定期,不定期に点検修理等のメンテナンスを行うことが必要である。 On the other hand, the processing chamber and the vacuum pump and other various piping components of the vacuum processing block of the vacuum processing apparatus, periodically, it is necessary to perform maintenance such as inspection and repair irregularly. そのため、一般に、真空処理ブロックの周囲には、扉が設けられており、この扉を開けることにより、ロードロック室,アンロードロック室,処理室,真空ロボット及び各種の配管機器の点検修理ができるようになっている。 Therefore, in general, around the vacuum processing block, the door is provided by opening the door, load lock chambers, unload lock chambers, processing chambers, the servicing of the vacuum robot and various piping devices It has become way.\nキャリアポッドが必要となるために、約350mm程度と大きくなり、複数のキャリアポッドを収納するカセットブロックの幅も大きくなる。 For carrier pod required, large as about 350 mm, the greater width of the cassette block for housing a plurality of carrier pods. この幅に合わせて真空処理ブロックの幅を決定すると、真空処理装置全体が大きなスペースを必要とすることになる。 When determining the width of the vacuum processing block in accordance with the this width, the entire vacuum processing apparatus requires a large space. 
一例として、4個のキャリアポッドを収納するカセットブロックについて考えると、試料の直径dが8インチから12インチになった場合、カセットの幅は少なくとも約40cm以上大きくならざるを得ない。 As an example, considering the cassette block for accommodating four carriers pods, if the diameter d of the sample was a 12-inch 8 inch wide cassette inevitably increases at least about 40cm or more.\n【0008】一方、試料に各種の処理を行いながら大量の処理を行うために、一般の半導体製造ラインでは、同じ処理を行う複数の真空処理装置を同じベイに集め、各ベイ間の搬送を自動またはマニュアルで行っている。 On the other hand, in order to perform a lot of processing while performing various processes in the sample, in a general semiconductor manufacturing lines, gathering a plurality of vacuum processing apparatus for performing the same processing in the same bay, automatic conveyance between the bays or it is carried out manually. このような半導体製造ラインは、高いクリーン度を必要とするため、半導体製造ライン全体が大きなクリーンルーム内に設置される。 Such a semiconductor manufacturing line, requires a high degree of cleanliness, the whole semiconductor manufacturing line is placed in a large clean room. 試料の大口径化に伴う真空処理装置の大型化は、クリーンルーム占有面積の大型化を伴うが、これはもともと建設コストの高いクリーンルームの建設コストを一層増加させることになる。 Size of the vacuum processing apparatus due to the large diameter of the sample is accompanied by a large clean room area occupied, which will be further increased construction costs of the high construction cost clean room originally. もし、同じ面積のクリーンルームに占有面積の大きな真空処理装置を設置するとすれば、真空処理装置の全体の台数を減らすか、あるいは各真空処理装置間の間隔を狭くせざるを得ない。 If, if the clean room of the same area to install a large vacuum processing apparatus of the occupied area, reduce the overall number of the vacuum processing apparatus, or interval narrower forced between the vacuum processing apparatus. 同じ面積のクリーンルームにおける真空処理装置の設置台数減少は、必然的に半導体の製造ラインの生産性の低下ひいては半導体の製造コストの上昇を伴う。 Installed base reduction in the vacuum processing apparatus in a clean room having the same area is accompanied inevitably rise of the semiconductor decrease and thus the semiconductor manufacturing cost of productivity of the production line. 
他方、各真空処理装置間の間隔を狭くすることは、点検修理のためのメンテナンススペースが不足し、真空処理装置のメンテナンス性を著しく阻害する。 On the other hand, to reduce the distance between each of the vacuum processing apparatus, the maintenance space for inspection and repair is insufficient to significantly inhibit the maintenance of the vacuum processing apparatus.\n【0009】本発明の目的は、試料の大口径化に対応しつつ、製造コストの上昇を抑えることのできる真空処理装置を提供することにある。 An object of the present invention, while corresponding to the large diameter of the sample, is to provide a vacuum processing apparatus capable of suppressing an increase in manufacturing cost.\n【0010】本発明の他の目的は、試料の大口径化に対応しつつ、メンテナンス性に優れた真空処理装置を提供することにある。 Another object of the present invention, while corresponding to the large diameter of the sample is to provide a vacuum processing apparatus having excellent maintainability.\n【0011】本発明の他の目的は、試料の大口径化に対応しつつ、真空処理装置の必要設置台数を確保して製造コストの上昇を抑え、かつ、メンテナンス性も損なわない半導体製造ラインを提供することにある。 Another object of the present invention, while corresponding to the large diameter of the sample, to ensure the necessary number of installed vacuum processing apparatus suppressing an increase in manufacturing cost, and a semiconductor manufacturing line is not impaired maintainability It is to provide.\n【課題を解決するための手段】本発明は、並設した複数のカセット台およびカセット台から、あるいはカセット台へウエハを搬送するための搬送装置を備えた大気ローダと、ウエハを処理するための真空処理室およびこれにゲート弁を介して連接された真空搬送室を備えた真空ローダと、前記搬送装置と前記真空搬送室とを連接するためのゲート弁を備えたロードロック室およびアンロードロック室からなるロック装置とを含んで構成される真空処理装置において、ウエハを処理するための真空処理室は、有磁場UHF帯電磁波放射放電方式リアクタ(以下、UHF−ECRリアクタという。)によって形成される真空処理室であり、該真空処理室には、分解可能な側壁インナーユニットおよびアンテナが設けられ、該真空処理室は、真空搬 The present invention SUMMARY OF THE INVENTION from a plurality of cassette tables and cassette stand juxtaposed, or the atmosphere loader having a conveying device for conveying the wafer to the cassette table, for processing a wafer the load lock chamber and the unload lock having a gate valve for connecting the vacuum loader vacuum processing chamber and having a vacuum transfer chamber which is connected via a gate valve to the said vacuum transfer chamber and the conveying device in the vacuum processing apparatus configured to include a lock device comprising a chamber, a vacuum processing chamber for processing a wafer it is formed by a magnetic field UHF band electromagnetic wave radiation discharge type reactor (hereinafter. 
referred UHF-ECR reactor) that a vacuum processing chamber, the vacuum processing chamber, degradable sidewall inner unit and an antenna are provided, the vacuum processing chamber, a vacuum transportable 室およびロック装置の中央を通る軸線に対して対称にして、かつ真空搬送室を中心にしてロック装置の反対側のみに2つ設けられ、かつ真空搬送室に対して前記2つの真空処理室の配置位置は鋭角をなしている真空処理装置を提供する。 And symmetrically with respect to the axis passing through the center of the chamber and the locking device, and the opposite side only two provided for to lock device around the vacuum transfer chamber, and the two vacuum processing chamber to the vacuum transfer chamber position is to provide a vacuum processing apparatus which forms an acute angle.\nCRリアクタによって形成される真空処理室であり、該真空処理室は、真空搬送室およびロック装置の中央を通る軸線に対して対称にして、かつ真空搬送室を中心にしてロック装置の反対側のみに2つ設けられ、かつ真空搬送室に対して前記2つの真空処理室の配置位置は鋭角をなしており、UHF−ECRのアンテナは、前記軸線に対して平行で、かつ前記真空搬送室とは反対側に開放される真空処理装置を提供する。 A vacuum processing chamber formed by a CR reactor, vacuum processing chamber, and symmetrically with respect to the axis passing through the center of the vacuum transfer chamber and the locking device, and the opposite side of the locking device around the vacuum transfer chamber only two provided, and positions of the two vacuum processing chamber to the vacuum transfer chamber is an acute angle, the UHF-ECR antennas, and parallel, and the vacuum transfer chamber with respect to said axis to provides a vacuum processing apparatus is opened to the opposite side.\nかつ真空搬送室を中心にしてロック装置の反対側のみに2つ設けられ、かつ真空搬送室に対して前記2つの真空処理室の配置位置は鋭角をなしており、大気ローダ,真空ローダおよびロック装置はT字配置とされた真空処理方法を提供する。 And around the vacuum transfer chamber opposite only two provided a locking device, and positions of the two vacuum processing chamber to the vacuum transfer chamber is an acute angle, atmospheric loader, vacuum loader and locking apparatus to provide a vacuum processing method which is a T-arrangement.\n【0015】本発明は、並設した複数のカセット台およびカセット台から、あるいはカセット台へウエハを搬送するための搬送装置を備えた大気ローダと、ウエハを処理するための真空処理室およびこれにゲート弁を介して連接された真空搬送室を備えた真空ローダと、前記搬送装置と前記真空搬送室とを連接するためのゲート弁を備えたロードロック室およびアンロードロック室からなるロック装置とを含んで構成される真空処理装置が平行に複数台並設された真空処理システムにおいて、ウエハを処理するための真空処理室は、UHF−ECRリアクタによって形成される真空処理室であり、該真空処理室は、真空搬送室およびロック装置の中央を通る軸線に対して対称にして、かつ真空搬送室を中心にしてロック装置の反対側のみに2 The present invention, a plurality of cassette tables and cassette stand juxtaposed, or the atmosphere loader having a conveying device for conveying the wafer to the cassette base, the vacuum processing chamber for processing the wafer and to a vacuum loader having a vacuum transfer chamber which is connected via a gate valve, the locking consisting of the conveying device and the load lock chamber and the unload lock chamber with gate valves for connecting the said vacuum transfer chamber apparatus and in the vacuum processing system vacuum processing apparatus is constituted with a plurality Tainami set in parallel include vacuum processing chamber for processing a wafer is vacuum processing chamber formed by the UHF-ECR reactor, vacuum processing chamber, and symmetrically with respect to the axis passing through the center of the vacuum transfer chamber and the locking device, and 2 only on the opposite side of the locking device around the vacuum transfer chamber 設けられ、かつ真空搬送室に対して前記2つの真空処理室の配置位置は鋭角をなしており、並設された複数の真空処理装置のすべての真空処理室に一直線上に配列される真空処理システムを提供する。 Provided, and positions of the two vacuum processing chamber to the vacuum transfer chamber is an acute angle, a vacuum process that is arranged in alignment with all of the vacuum processing chamber of a plurality of vacuum processing apparatus are arranged in parallel to provide a 
system.\n【発明の実施の形態】以下、本発明にかかる一実施例を図面に基づいて説明する。 BEST MODE FOR CARRYING OUT THE INVENTION Hereinafter, will be explained based on an embodiment according to the present invention with reference to the accompanying drawings.\n10B,10Cで示す。 10B, shown by 10C.\n【0018】図1に示す真空処理システムを説明する前に、図2から図4に基づいて真空処理装置を説明する。 Before describing the vacuum processing system shown in FIG. 1, illustrating a vacuum processing apparatus on the basis of FIGS. 2-4.\n5bを介して連接された真空搬送室16を備えた真空ローダ7と、前記搬送装置と前記真空搬送室とを連接するためのゲート弁を備えたロードロック室6aおよびアンロードロック室6bからなるロック装置6とを含んで構成される。 5b a vacuum loader 7 having a vacuum transfer chamber 16 which is connected via consist load lock chamber 6a and the unload lock chamber 6b has a gate valve for connecting and said vacuum transfer chamber and the conveying device configured to include a locking device 6.\n平行形に配置され、その位置および姿勢を変えることなく、装置への導入/払出しが可能な位置、すなわち、カセット1aないし1cを略水平の平面上で常に一定の位置に固定される。 Arranged parallel type, without changing its position and orientation, the introduction / dispensing of the locations of the device, i.e., always fixed to a constant position to free the cassette 1a 1c on a substantially horizontal plane. カセット台2aおよび2bは、平行に隣合わせて配置してある。 Cassette tables 2a and 2b, are arranged side by side in parallel. カセット台2cは、最右端に配置してある。 Cassette table 2c is, are arranged on the top right edge. カセット1aおよび1bは、処理を行うための末処理ウエハを収納したり、処理済みのウエハを回収するためのもので、複数枚(通常25枚)の被処理基板であるウエハ20が収納可能となっている。 Cassettes 1a and 1b, or storing the processed wafers end for processing, for recovering the processed wafer, the wafer 20 is a substrate to be processed in a plurality (25 pieces Normal) can be stored going on. カセット1cは、この場合、プラズマを用いたドライクリーニング(以下、「プラズマクリーニング」という。)を行うためのダミーウエハを収納したり、プラズマクリーニング後のダミーウエハを回収するためのもので、複数枚(通常25枚)のダミーウエハ30が収納可能となっている。 Cassette 1c in this case, dry cleaning using plasma (hereinafter, \"plasma cleaning\" hereinafter.) Or housing a dummy wafer for performing, for recovering the dummy wafers after plasma cleaning, a plurality (usually dummy wafer 30 of 25 sheets) has become can be stored.\nとカセット1a,1bとの間でウエハ20を、そしてロードロック室6aおよびアンロードロック室6bとカセット1cとの間でダミーウエハ30を授受可能に動作する。 A cassette 1a, the wafer 20 with the 1b, and operates to allow exchanging dummy wafer 30 between the load lock chamber 6a and the unload lock chamber 6b and the cassette 1c.\na,15bを介して真空処理室であるエッチング処理室11a,11bが設けてある。 a, etching chamber is a vacuum processing chamber through a 15b 11a, 11b are provided. 以下、エッチング処理室を例に取って説明する。 Hereinafter will be described taking the etching chamber as an example. 真空搬送室16内には、ロードロック室6a,アンロード室6bおよびエッチング処理室11a,11bとの間でウエハ20またはダミーウエハ30を授受可能に動作する搬送装置14が設けてある。 The vacuum transfer chamber 16, load lock chambers 6a, unload chamber 6b and etching chambers 11a, conveying device 14 is provided that operates to allow transferring the wafer 20 or dummy wafer 30 between 11b. 真空搬送室16は、独立に真空排気可能な真空排気装置17を装備している。 Vacuum transfer chamber 16 is equipped with a vacuum evacuable vacuum evacuation device 17 independently.\n【0022】UHF−ECRリアクタのエッチング処理室11a,11bは、この場合、同一の構成で対称配置とされてエッチング処理が行われるようになっている。 The UHF-ECR reactor etching chambers 11a, 11b is in this case are symmetrically arranged in the same configuration so that the etching process is performed.\nエッチング処理室11aを例に説明する。 The etching chamber 11a will be described as an example. エッチング処理室11aは、ウエハ20を配置するための試料台を有し、試料台8aの上部に放電部7aを形成するように放電室が設けてある。 Etching chamber 11a has a sample stage for placing the wafer 20, the discharge chamber so as to form a discharge portion 7a at the top of the sample table 8a is provided. 
エッチング処理室11aは、放電部7aへの処理ガス供給のためのガス導入装置10aを有するとともに、エッチング処理室11a内を所定圧力に減圧排気する真空排気装置9aを有し、放電部7aの処理ガスをプラズマ化するための、この場合、UHF波と磁場の発生手段を有している。 The etching chambers 11a, which has a gas introduction device 10a for processing the gas supply to the discharge portion 7a, having a vacuum exhaust device 9a for evacuating the etching chamber 11a to a predetermined pressure, the process of the discharge portion 7a for plasma gas, in this case, it has a generating means of the UHF wave and magnetic field.\n9は、センサ18からの計測値を所定値と比較して、エッチング処理室内のクリーニング時期を判断する。 9, the measured value from the sensor 18 is compared with a predetermined value to determine the cleaning time of the etching chamber. また、制御装置19は、真空搬送装置13および14を制御して、ダミーウエハ30をカセット1cおよびエッチング処理室11aないし11bの間で搬送制御する。 The control device 19 controls the vacuum transfer apparatus 13 and 14, to the dummy wafer 30 to the cassette 1c and the etching process chamber 11a carrying controlled between 11b.\nいずれかの方法によりウエハ処理またはプラズマクリーニングを実行する。 It executes the wafer processing or plasma cleaning by any method.\nによって、カセット1a内のウエハ20を下から順にエッチング処理室11a,11bに搬入し、それぞれのウエハ20をエッチング処理する。 Accordingly, the wafer 20 to an etching treatment chamber in order from the bottom 11a of the cassette 1a, and carried into 11b, and each of the wafer 20 is etched. 処理されたそれぞれのウエハ20は、真空搬送装置14および搬送装置13によって、カセット1a内の元の位置に収納する。 Each wafer 20 processed by vacuum transfer apparatus 14 and the carrier 13 is housed in its original position in the cassette 1a. この場合、運転開始から終了に至る間、カセットの位置および姿勢を変えることなく未処理のウエハを取り出し、そして処理済みのウエハを未処理のウエハが収納されていた元の位置に戻して収納する。 In this case, while leading to completion of the start of the operation, it takes out an unprocessed wafer without changing the position and posture of the cassettes, and the processed wafer unprocessed wafer is accommodated back into position which is housed . このようにすることで、生産ラインの自動化への対応が容易で、且つ、ゴミの発生によるウエハの汚染を低減でき、高い生産効率と高い製品歩留まりを実現できる。 In this way, it is easy to respond to automation of production lines, and can reduce the contamination of the wafer due to the generation of dust can achieve high production efficiency and high product yield.\nの処理が全て終り次のカセットの内のウエハ処理に移る前でも良い。 Of the process is good, even before moving on to the wafer processing of all end next cassette.\n【0027】プラズマクリーニングの実施にあたっては、次の順序で行われる。 The practice of plasma cleaning is performed in the following order. 
この場合、カセット1cに収納されたダミーウエハ30(この", "answers": ["Multiple vacuum processing apparatuses are arranged in parallel."], "length": 2355, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "5a18017502d827b72fc74d91f13f6f14193b4964c3420e47"} {"input": "How do the runtimes and iteration counts of NFPA and FPSA compare to GMRES and DSA in the numerical experiments?", "context": "\\section{Introduction}\\label{sec1}\n\\setcounter{equation}{0} \n\nTransport problems with highly forward-peaked scattering are prevalent in a variety of areas, including astrophysics, medical physics, and plasma physics \\cite{HGK,aristova,multiphysics}.\nFor these problems, solutions of the transport equation converge slowly when using conventional methods such as source iteration (SI) \\cite{adamslarsen} and the generalized minimal residual method (GMRES) \\cite{gmres}.\nMoreover, diffusion-based acceleration techniques like diffusion synthetic acceleration (DSA) \\cite{alcouffe} and nonlinear diffusion acceleration (NDA) \\cite{smithetall} are generally inefficient when tackling these problems, as they only accelerate up to the first moment of the angular flux \\cite{JapanFPSA}.\nIn fact, higher-order moments carry important information in problems with highly forward-peaked scattering and can be used to further accelerate convergence \\cite{japanDiss}.\n\nThis paper focuses on solution methods for the monoenergetic, steady-state transport equation in homogeneous slab geometry.\nUnder these conditions, the transport equation is given by\n\\begin{subequations}\\label[pluraleq]{eq1}\n\\begin{equation}\n\\label{t1}\n\\mu\\frac{\\partial}{\\partial x} \\psi(x,\\mu) + \\sigma_t \\psi(x,\\mu) = \\int_{-1}^{1} d\\mu' \\sigma_s(\\mu,\\mu') \\psi(x,\\mu') + Q(x, \\mu), \\,\\,\\, x\\in [0, X],-1\\leq\\mu\\leq 1 ,\\\\\n\\end{equation}\nwith boundary conditions\n\\begin{align}\n\\label{t2}\n\\psi(0,\\mu) &= \\psi_L(\\mu), \\quad \\mu > 0,\\\\\n\\label{t3}\n\\psi(X,\\mu) &= \\psi_R(\\mu), \\quad \\mu < 0.\n\\end{align}\n\\end{subequations}\nHere, $\\psi(x,\\mu)$ represents the angular flux at position $x$ and direction $\\mu$, $\\sigma_t$ is the macroscopic total cross section, $\\sigma_s(\\mu,\\mu')$ is the differential scattering cross section, and $Q$ is an internal source.\n\nNew innovations have paved the way to better solve this equation in systems with highly forward-peaked scattering.\nFor instance, work has been done on modified $P_L$ equations and modified scattering cross section moments to accelerate convergence of anisotropic neutron transport problems \\cite{khattab}.\nIn order to speed up the convergence of radiative transfer in clouds, a quasi-diffusion method has been developed \\cite{aristova}.\nIn addition, the DSA-multigrid method was developed to solve problems in electron transport more efficiently \\cite{trucksin}.\n\nOne of the most recent convergence methods developed is Fokker-Planck Synthetic Acceleration (FPSA) \\cite{JapanFPSA,japanDiss}.\nFPSA accelerates up to $N$ moments of the angular flux and has shown significant improvement in the convergence rate for the types of problems described above.\nThe method returns a speed-up of several orders of magnitude with respect to wall-clock time when compared to DSA \\cite{JapanFPSA}.\n\nIn this paper, we introduce a new acceleration technique, called \\textit{Nonlinear Fokker-Planck Acceleration} (NFPA).\nThis method returns a modified Fokker-Planck (FP) equation that preserves the angular moments of the flux given by the transport 
equation.\nThis preservation of moments is particularly appealing for applications to multiphysics problems \\cite{multiphysics}, in which the coupling between the transport physics and the other physics can be done through the (lower-order) FP equation.\nTo our knowledge, this is the first implementation of a numerical method that returns a Fokker-Planck-like equation that is discretely consistent with the linear Boltzmann equation.\n\nThis paper is organized as follows.\n\\Cref{sec2} starts with a brief description of FPSA.\nThen, we derive the NFPA scheme.\nIn \\cref{sec3}, we discuss the discretization schemes used in this work and present numerical results.\nThese are compared against standard acceleration techniques.\nWe conclude with a discussion in \\cref{sec4}.\n\n\\section{Fokker-Planck Acceleration}\\label{sec2}\n\\setcounter{equation}{0} \nIn this section we briefly outline the theory behind FPSA, describe NFPA for monoenergetic, steady-state transport problems in slab geometry, and present the numerical methodology behind NFPA.\nThe theory given here can be easily extended to higher-dimensional problems.\nMoreover, extending the method to energy-dependence shall not lead to significant additional theoretical difficulties.\n\nTo solve the transport problem given by \\cref{eq1} we approximate the in-scattering term in \\cref{t1} with a Legendre moment expansion:\n\\begin{equation}\n\\label{transport1}\n\\mu\\frac{\\partial}{\\partial x} \\psi(x,\\mu) + \\sigma_t \\psi(x,\\mu) = \\sum_{l=0}^L \\frac{(2l+1)}{2} P_l(\\mu) \\sigma_{s,l} \\phi_l(x) + Q(x, \\mu),\n\\end{equation}\nwith \n\\begin{equation}\n\\label{transport2}\n\\phi_l(x) = \\int_{-1}^{1} d\\mu P_l(\\mu) \\psi(x,\\mu).\n\\end{equation}\nHere, $\\phi_l$ is the $l^{th}$ Legendre moment of the angular flux, $ \\sigma_{s,l}$ is the $l^{th}$ Legendre coefficient of the differential scattering cross section, and $P_l$ is the $l^{th}$-order Legendre polynomial.\nFor simplicity, we will drop the notation $(x,\\mu)$ in the remainder of this section.\n\nThe solution to \\cref{transport1} converges asymptotically to the solution of the following Fokker-Planck equation in the forward-peaked limit \\cite{pomraning1}:\n\\begin{equation}\n\\label{fp1}\n\\mu\\frac{\\partial \\psi}{\\partial x} + \\sigma_a \\psi = \\frac{\\sigma_{tr}}{2}\\frac{\\partial }{\\partial \\mu} (1-\\mu^2) \\frac{\\partial \\psi}{\\partial \\mu} + Q\\,,\n\\end{equation}\nwhere $\\sigma_{tr}= \\sigma_{s,0} -\\sigma_{s,1}$ is the momentum transfer cross section and $\\sigma_a = \\sigma_t-\\sigma_{s,0}$ is the macroscopic absorption cross section.\n\nSource Iteration \\cite{adamslarsen} is generally used to solve \\cref{transport1}, which can be rewritten in operator notation:\n\\begin{equation}\n\\label{si1}\n\\mathcal{L} \\psi^{m+1} = \\mathcal{S} \\psi^{m} + Q\\,,\n\\end{equation}\nwhere \n\\begin{equation}\n\\mathcal{L} = \\mu \\frac{\\partial}{\\partial x} + \\sigma_t,\n \\quad\n\\mathcal{S} = \\sum_{l=0}^L \\frac{(2l+1)}{2} P_l(\\mu) \\sigma_{s,l} \\int_{-1}^{1}d\\mu P_l(\\mu) ,\n\\label{trans1}\n\\end{equation}\nand $m$ is the iteration index.\nThis equation is solved iteratively until a tolerance criterion is met. 
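For illustration, this source-iteration loop can be sketched in a few lines of code. The snippet below is only a schematic, single-direction stand-in: an upwind streaming-plus-removal matrix plays the role of the operator L, an isotropic scattering matrix plays the role of S, and the grid and cross sections are made up. It is not the MATLAB implementation used for the experiments later in the paper.
\begin{verbatim}
import numpy as np

# Toy discrete stand-ins for the operators in the source-iteration update.
n, dx, mu = 200, 2.0, 0.5
sigma_t, sigma_s = 1.0, 0.9                      # assumed cross sections
L = (mu / dx + sigma_t) * np.eye(n) - (mu / dx) * np.eye(n, k=-1)
S = sigma_s * np.eye(n)                          # isotropic scattering
Q = 0.5 * np.ones(n)                             # fixed internal source

psi = np.zeros(n)
for m in range(10_000):
    psi_new = np.linalg.solve(L, S @ psi + Q)    # one transport sweep
    if np.linalg.norm(psi_new - psi) < 1e-8 * np.linalg.norm(psi_new):
        break
    psi = psi_new
print(f'source iteration converged after {m + 1} sweeps')
\end{verbatim}
With these made-up numbers the iteration matrix has spectral radius sigma_s / (mu/dx + sigma_t), about 0.72, so the loop converges quickly; for the nearly pure-scattering, forward-peaked problems considered in the numerical experiments the analogous ratio approaches one, which is exactly why acceleration is needed.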
The FP approximation shown in \\cref{fp1} can be used to accelerate the convergence of \\cref{transport1}.\n\n\\subsection{FPSA: Fokker-Planck Synthetic Acceleration}\\label{FPSA}\n\nIn the FPSA scheme \\cite{JapanFPSA,japanDiss}, the FP approximation is used as a preconditioner to synthetically accelerate convergence when solving \\cref{transport1} (cf. \\cite{adamslarsen} for a detailed description of synthetic acceleration).\nWhen solving \\cref{si1}, the angular flux at each iteration $m$ has an error associated with it.\nFPSA systematically follows a predict, correct, iterate scheme.\nA transport sweep, one iteration in \\cref{si1}, is made for a prediction.\nThe FP approximation is used to correct the error in the prediction, and this iteration is performed until a convergence criterion is met.\nThe equations used are:\n\\begin{subequations}\n\\label{fpsaeq}\n\\begin{align}\n\\label{predict}\n\\mathrm{Predict}&: \\mathcal{L} \\psi^{m+\\frac{1}{2}} = \\mathcal{S} \\psi^{m} + Q\\,,\\\\\n\\label{correct}\n\\mathrm{Correct}&: \\psi^{m+1} = \\psi^{m+\\frac{1}{2}} + \\mathcal{P}^{-1} \\mathcal{S} \\left( \\psi^{m+\\frac{1}{2}} - \\psi^{m}\\right),\n\\end{align}\n\\end{subequations}\nwhere we define $\\mathcal{P}$ as\n\\begin{equation}\n\\label{FPSAsi1}\n\\mathcal{P} = \\mathcal{A}-\\mathcal{F} =\\underbrace{\\left(\\mu\\frac{\\partial}{\\partial x} + \\sigma_a\\right)}_\\mathcal{A} - \\underbrace{\\left(\\frac{\\sigma_{tr}}{2}\\frac{\\partial }{\\partial \\mu} (1-\\mu^2) \\frac{\\partial }{\\partial \\mu}\\right)}_\\mathcal{F},\n\\end{equation}\nIn this synthetic acceleration method, the FP approximation is used to correct the error in each iteration of the high-order (HO) equation (\\ref{predict}). \nTherefore, there is no consistency between the angular moments of the flux in the HO and low-order (LO) equations.\n\n\\subsection{NFPA: Nonlinear Fokker-Planck Acceleration}\\label{NFPA}\n\nSimilar to FPSA, NFPA uses the FP approximation to accelerate the convergence of the solution.\nWe introduce the additive term $\\hat{D}_F$ to \\cref{fp1}, obtaining the modified FP equation\n\\begin{equation}\n\\label{mfp1}\n\\mu\\frac{\\partial \\psi}{\\partial x} + \\sigma_a \\psi = \\frac{\\sigma_{tr}}{2}\\frac{\\partial }{\\partial \\mu} (1-\\mu^2) \\frac{\\partial \\psi}{\\partial \\mu} + \\hat{D}_F + Q\\,.\n\\end{equation}\nThe role of $\\hat{D}_F$ is to force the transport and modified FP equations to be consistent.\nSubtracting \\cref{mfp1} from \\cref{transport1} and rearranging, we obtain the consistency term\n\\begin{equation}\n\\label{dfp}\n\\hat{D}_F = \\sum_{l=0}^L \\frac{(2l+1)}{2} P_l \\sigma_l \\phi_l - \\frac{\\sigma_{tr}}{2}\\frac{\\partial}{\\partial \\mu} (1-\\mu^2) \\frac{\\partial \\psi}{\\partial \\mu} - \\sigma_{s,0} \\psi\\,.\n\\end{equation}\n\nThe NFPA method is given by the following equations:\n\\begin{subequations}\\label[pluraleq]{holocons}\n\\begin{align}\n\\label{HO1}\n\\text{HO}&: \\mu\\frac{\\partial \\psi_{HO}}{\\partial x} + \\sigma_t \\psi_{HO} = \\sum_{l=0}^L \\frac{(2l+1)}{2} P_l \\sigma_l \\phi_{l, LO} + Q\\,,\\\\\n\\label{LO11}\n\\text{LO}&: \\mu\\frac{\\partial \\psi_{LO}}{\\partial x} + \\sigma_a \\psi_{LO} = \\frac{\\sigma_{tr}}{2}\\frac{\\partial }{\\partial \\mu} (1-\\mu^2) \\frac{\\partial \\psi_{LO}}{\\partial \\mu} + \\hat{D}_F + Q\\,,\\\\\n\\label{con1}\n\\text{Consistency term}&: \\hat{D}_F = \\sum_{l=0}^L \\frac{(2l+1)}{2} P_l \\sigma_l \\phi_{l, HO}^m - \\frac{\\sigma_{tr}}{2}\\frac{\\partial }{\\partial \\mu} (1-\\mu^2) \\frac{\\partial 
\\psi_{HO}}{\\partial \\mu} - \\sigma_{s,0} \\psi_{HO}\\,,\n\\end{align}\n\\end{subequations}\nwhere $\\psi_{HO}$ is the angular flux obtained from the HO equation and $\\psi_{LO}$ is the angular flux obtained from the LO equation.\nThe nonlinear HOLO-plus-consistency system given by \\cref{holocons} can be solved using any nonlinear solution technique \\cite{kelley}. Note that the NFPA scheme returns a FP equation that is consistent with HO transport. \nMoreover, this modified FP equation accounts for large-angle scattering which the standard FP equation does not. \nThe LO equation (\\ref{fp1}) can then be integrated into multiphysics models in a similar fashion to standard HOLO schemes \\cite{patelFBR}. To solve the HOLO-plus-consistency system above, we use Picard iteration \\cite{kelley}:\n\\begin{subequations}\n\\begin{align}\n\\label{H1}\n\\text{Transport Sweep for HO}&:\n\\mathcal{L} \\psi_{HO}^{k+1} = \\mathcal{S} \\psi_{LO}^{k} + Q, \\\\\n\\label{L1}\n\\text{Evaluate Consistency Term}&: \\hat{D}_F^{k+1} = \\left(\\mathcal{S} - \\mathcal{F} - \\sigma_{s,0}\\mathcal{I}\\right) \\psi_{HO}^{k+1}, \\\\\n\\label{c1}\n\\text{Solve LO Equation}&: \\psi_{LO}^{k+1} = \\mathcal{P}^{-1} \\left(\\hat{D}_F^{k+1} + Q\\right), \n\\end{align}\n\\end{subequations}\nwhere $\\mathcal{L}$ and $\\mathcal{S}$ are given in \\cref{trans1}, $\\mathcal{P}$ and $\\mathcal{F}$ are given in \\cref{FPSAsi1}, $\\mathcal{I}$ is the identity operator, and $k$ is the iteration index.\nIteration is done until a convergence criterion is met.\n\nThe main advantage of setting up the LO equation in this fashion is that the stiffness matrix for LO needs to be setup and inverted \\textit{only once}, just as with FPSA \\cite{JapanFPSA, japanDiss}. This has a large impact on the method's performance.\nA flowchart of this algorithm is shown in \\cref{Nalgorithm}.\n\n\\begin{figure}[H]\n\\centering\n\\begin{tikzpicture}[node distance = 3cm, auto]\n \n \\node [block] (init) {Initial guess of flux moments};\n \\node [cloud_HO, right of=init, node distance=4cm] (HOm) {HO};\n \\node [cloud_LO, below of=HOm, node distance=2cm] (LOm) {LO};\n \\node [HO, below of=init] (transport) {One sweep in transport equation};\n \\node [decision, below of=transport,node distance=4cm] (decide) {Flux moments converged?};\n \\node [LO, left of=decide, node distance=4cm] (dterm) {Solve for consistency term};\n \\node [LO, left of=dterm, node distance=3cm] (MFP) {Solve for FP angular flux};\n \\node [LO, above of=MFP, node distance=4cm] (moments) {Convert angular flux to moments};\n \\node [block, right of=decide, node distance=4cm] (stop) {Stop};\n \n \\path [line] (init) -- (transport);\n \\path [line] (transport) -- (decide);\n \\path [line] (decide) -- node {no} (dterm);\n \\path [line] (dterm) -- (MFP);\n \\path [line] (MFP) -- (moments);\n \\path [line] (moments) -- (transport);\n \\path [line] (decide) -- node {yes}(stop);\n\\end{tikzpicture}\n\\caption{NFPA algorithm}\n\\label{Nalgorithm}\n\\end{figure}\n\n\\section{Numerical Experiments}\\label{sec3}\n\nIn \\cref{sec31} we describe the discretization methods used to implement the algorithms.\nIn \\cref{sec32} we provide numerical results for 2 different choices of source $Q$ and boundary conditions.\nFor each choice we solve the problem using 3 different scattering kernels, applying 3 different choices of parameters for each kernel.\nWe provide NFPA numerical results for these 18 cases and compare them against those obtained from FPSA and other standard methods.\n\nAll numerical experiments 
were performed using MATLAB.\nRuntime was tracked using the tic-toc functionality \\cite{matlab17}, with\nonly the solver runtime being taken into consideration in the comparisons.\nA 2017 MacBook Pro with a 2.8 GHz Quad-Core Intel Core i7 and 16 GB of RAM was used for all simulations.\n\n\n\\subsection{Discretization}\\label{sec31}\n\nThe Transport and FP equations were discretized using linear discontinuous finite element discretization in space \\cite{mpd1}, and discrete ordinates (S$_N$) in angle \\cite{landm}.\nThe Fokker-Planck operator $\\mathcal{F}$ was discretized using moment preserving discretization (MPD) \\cite{mpd1}.\nDetails of the derivation of the linear discontinuous finite element discretization can be seen in \\cite{japanDiss,martin}.\nThe finite element discretization for the Fokker-Planck equation follows the same derivation.\n\nA brief review for the angular discretization used for the FP equation is given below.\nFirst, we use Gauss-Legendre quadrature to discretize the FP equation in angle:\n\\begin{equation}\n\\mu_n\\frac{\\partial \\psi_n(x)}{\\partial x} + \\sigma_a \\psi_n(x) - \\frac{\\sigma_{tr}}{2}\\nabla^2_n \\psi_n(x) = Q_n(x),\n\\end{equation}\nfor $n=1,..,N$.\nHere, $\\nabla^2_n$ term is the discrete form of the angular Laplacian operator evaluated at angle $n$.\n\nThe MPD scheme is then shown as\n\\begin{equation}\n\\nabla^2_n \\psi_n = M \\psi_n = V^{-1} L V \\psi_n,\n\\end{equation}\nwhere $M$ is the MPD discretized operator defined by\n\\begin{subequations}\n\\begin{equation}\nV_{i,j} = P_{i-1}(\\mu_j)w_j,\n\\end{equation}\nand \n\\begin{equation}\nL_{i,j} = -i(i-1),\n\\end{equation}\n\\end{subequations}\nfor $i,j=1,...,N$.\nHere, $P_l(\\mu_j)$ are the Legendre polynomials evaluated at each angle $\\mu_j$ and $w_j$ are the respective weights.\n$M$ is defined as a (N x N) operator for a vector of $N$ angular fluxes $ \\psi(x)$, at spatial location $x$. \n\nIn summary, if we write the FP equation as\n\\begin{equation}\n\\mathcal{H} \\frac{\\partial \\psi}{\\partial x}(x) + \\sigma_a \\psi(x) - \\mathcal{F} \\psi(x) = Q(x),\n\\end{equation}\nthen $\\mathcal{H}$ is Diag$(\\mu_n)$ for $n=1,...,N$, $Q(x)$ is a vector of source terms $Q_n(x)$, and $\\mathcal{F}$ is represented by $\\frac{\\sigma_{tr}}{2}M$.\n\n\n\\subsection{Numerical Results}\\label{sec32}\n\nIt is shown that for slowly converging problems, typical convergence methods like $L_\\infty$ suffer from false convergence \\cite{adamslarsen}.\nTo work around this issue, the criterion is modified to use information about the current and previous iteration:\n\\begin{equation}\n\\label{falseconverge}\n\\frac{|| \\phi^{m}_0(x) - \\phi^{m-1}_0(x) ||_2}{1-\\frac{|| \\phi^{m+1}_0(x) - \\phi^{m}_0(x) ||_2}{|| \\phi^{m}_0(x) - \\phi^{m-1}_0(x) ||_2}} < 10^{-8}.\n\\end{equation}\n\nTwo problems were tested using 200 spatial cells, $X$ = 400, $\\sigma_a = 0$, $L$ = 15, and $N$ = 16.\nProblem 1 has vacuum boundaries and a homogeneous isotropic source $Q$ for $0 < x < X$.\nProblem 2 has no internal source and an incoming beam at the left boundary. The source and boundary conditions used are shown in \\cref{parameters}. 
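To make the stopping test above concrete, the modified criterion can be evaluated from three successive iterates of the zeroth flux moment. The helper below is only a sketch with hypothetical names and array-valued iterates, not the exact MATLAB routine used for the timings reported here.
\begin{verbatim}
import numpy as np

def is_converged(phi_prev2, phi_prev, phi_curr, tol=1e-8):
    # d_old = ||phi^m - phi^(m-1)||,  d_new = ||phi^(m+1) - phi^m||
    d_old = np.linalg.norm(phi_prev - phi_prev2)
    d_new = np.linalg.norm(phi_curr - phi_prev)
    if d_old == 0.0:
        return True
    rho = d_new / d_old          # estimated error-reduction ratio
    if rho >= 1.0:               # not contracting yet; keep iterating
        return False
    return d_old / (1.0 - rho) < tol
\end{verbatim}
Dividing the raw iterate-to-iterate difference by (1 - rho) inflates it when the error-reduction ratio rho is close to one, which is what protects slowly converging problems from the false convergence described above.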
\n\\begin{table}[H]\n\\begin{center}\n\\scalebox{0.9}{\n\\begin{tabular}{c | c | c} \\hline \n& Problem 1 & Problem 2 \\\\ \\hline \\hline\nQ(x) & 0.5 & 0 \\\\\n$\\psi_L$ & 0 & $\\delta(\\mu - \\mu_N)$ \\\\\n$\\psi_R$ & 0 & 0 \\\\\n\\end{tabular}}\n\\end{center}\n\\caption{Problem Parameters}\n\\label{parameters} \n\\end{table} \nWe consider three scattering kernels in this paper: Screened Rutherford \\cite{pomraning1}, Exponential \\cite{pomraning2}, and Henyey-Greenstein \\cite{HGK}.\nThree cases for each kernel were tested.\nThe results obtained with NFPA are compared with those obtained using GMRES, DSA, and FPSA with the MPD scheme.\n\n\\subsubsection{SRK: Screened Rutherford Kernel}\n\nThe Screened Rutherford Kernel \\cite{pomraning1, JapanFPSA} is a widely used scattering kernel for modeling scattering behavior of electrons \\cite{SRK}.\nThe kernel depends on the parameter $\\eta$, such that\n\\begin{equation}\n\\sigma^{SRK}_{s,l} = \\sigma_s \\int_{-1}^{1} d\\mu P_l(\\mu) \\frac{\\eta (\\eta+1)}{(1+2\\eta-\\mu)^2}.\n\\end{equation}\nThe SRK has a valid FP limit as $\\eta$ approaches 0 \\cite{patelFBR}. Three different values of $\\eta$ were used to generate the scattering kernels shown in \\cref{SRK}.\nGMRES, DSA, FPSA, and NFPA all converged to the same solution for problems 1 and 2. \\Cref{SRK_plots} shows the solutions for SRK with $\\eta = 10^{-7}$.\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[scale=0.1,angle=0]{SRK.jpg}\n \\caption{Screened Rutherford Kernels}\n \\label{SRK}\n\\end{center}\n\\end{figure}\n\\begin{figure}[H]\n \\centering\n \\subfloat[Problem 1]{{\\includegraphics[width=7cm]{s7_iso.jpg} }}\n \\qquad\n \\subfloat[Problem 2]{{\\includegraphics[width=7cm]{s7_beam.jpg} }}\n \\caption{Results for SRK Problems with $\\eta = 10^{-7}$}\n \\label{SRK_plots}\n\\end{figure}\n\n\\begin{table}[H]\n\\begin{center}\n\\scalebox{0.8}{\n\\begin{tabular}{c || c || c || c} \\hline \nParameter & Solver & Runtime (s) & Iterations \\\\ \\hline \\hline\n\\multirow{4}{*}{$\\eta = 10^{-5}$} & GMRES & 98.8 & 12 \\\\\n& DSA & 2380 & 53585 \\\\\n& FPSA & 1.21 & 26 \\\\\n& NFPA & 1.39 & 26 \\\\ \\hline \n\\multirow{4}{*}{$\\eta = 10^{-6}$} & GMRES & 208 & 84 \\\\\n& DSA & 3040 & 69156 \\\\\n& FPSA & 0.747 & 16 \\\\\n& NFPA & 0.857 & 16 \\\\ \\hline \n\\multirow{4}{*}{$\\eta = 10^{-7}$} & GMRES & 174 & 124 \\\\\n& DSA & 3270 & 73940 \\\\\n& FPSA & 0.475 & 10 \\\\\n& NFPA & 0.542 & 10 \\\\ \\hline\n\\end{tabular}}\n\\end{center}\n\\caption{Runtime and Iteration Counts for Problem 1 with SRK}\n\\label{SRKresults1} \n\\end{table}\n\\begin{table}[H]\n\\begin{center}\n\\scalebox{0.8}{\n\\begin{tabular}{c || c || c || c} \\hline \nParameter & Solver & Runtime (s) & Iterations \\\\ \\hline \\hline\n\\multirow{4}{*}{$\\eta = 10^{-5}$} & GMRES & 52.4 & 187 \\\\\n& DSA & 1107 & 25072 \\\\\n& FPSA & 0.953 & 20 \\\\\n& NFPA & 1.14 & 20 \\\\ \\hline \n\\multirow{4}{*}{$\\eta = 10^{-6}$} & GMRES & 108 & 71 \\\\\n& DSA & 1434 & 32562 \\\\\n& FPSA & 0.730 & 14 \\\\\n& NFPA & 0.857 & 14 \\\\ \\hline \n\\multirow{4}{*}{$\\eta = 10^{-7}$} & GMRES & 94.1 & 185 \\\\\n& DSA & 1470 & 33246 \\\\\n& FPSA & 0.438 & 8 \\\\\n& NFPA & 0.484 & 8 \\\\ \\hline \n\\end{tabular}}\n\\end{center}\n\\caption{Runtime and Iteration Counts for Problem 2 with SRK}\n\\label{SRKresults2} \n\\end{table}\n\nThe results of all solvers are shown in \\cref{SRKresults1,SRKresults2}.\nWe see that NFPA and FPSA tremendously outperform GMRES and DSA in runtime for all cases.\nFPSA is a simpler method than NFPA, requiring less 
calculations per iteration; therefore, it is expected that it outperforms NFPA in runtime.\nWe see a reduction in runtime and iterations for FPSA and NFPA as the FP limit is approached, with DSA and GMRES requiring many more iterations by comparison as $\\eta$ approaches 0.\n\nAn advantage that NFPA offers is that the angular moments of the flux in the LO equation will remain consistent with those of the transport equation even as a problem becomes less forward-peaked.\nOn the other hand, the moments found using only the FP equation and source iteration lose accuracy.\nTo illustrate this, Problem 1 was tested using different Screened Rutherford Kernels with increasing $\\eta$ parameters.\nThe percent errors (relative to the transport solution) for the scalar flux obtained with the LO equation and with the standard FP equation at the center of the slab are shown in \\cref{momcomp}.\nIt can be seen that the percent relative errors in the scalar flux of the FP solution is orders of magnitude larger than the error produced using the LO equation.\nThe same trend can be seen when using the exponential and Henyey-Greenstein kernels. \n\n\\begin{figure}[H]\n\\begin{center}\n \\includegraphics[scale=0.15,angle=0]{relerrorlog.jpg}\n \\caption{Log Scale of $\\%$ Relative Error vs $\\eta$ for Problem 1 at the Center of the Slab with SRK}\n \\label{momcomp}\n\\end{center}\n\\end{figure}\n\n\\subsubsection{EK: Exponential Kernel}\n\nThe exponential kernel \\cite{pomraning2, JapanFPSA} is a fictitious kernel made for problems that have a valid Fokker-Planck limit \\cite{pomraning1}.\nThe zero$^{\\text{th}}$ moment, $\\sigma^{EK}_{s,0}$, is chosen arbitrarily; we define $\\sigma^{EK}_{s,0}$ as the same zero$^{\\text{th}}$ moment from the SRK.\nThe $\\Delta$ parameter determines the kernel: the first and second moments are given by \n\\begin{subequations}\n\\begin{align}\n\\sigma^{EK}_{s,1} &= \\sigma^{EK}_{s,0} (1-\\Delta),\\\\\n\\sigma^{EK}_{s,2} &= \\sigma^{EK}_{s,0} (1-3\\Delta+3\\Delta^2),\n\\end{align}\nand the relationship for $l\\geq 3$ is\n\\begin{equation}\n\\sigma^{EK}_{s,l} = \\sigma^{EK}_{s,l-2} - \\Delta(2l+1) \\sigma^{EK}_{s,l-1}.\n\\end{equation}\n\\end{subequations}\nAs $\\Delta$ is reduced, the scattering kernel becomes more forward-peaked.\n\nThe EK has a valid FP limit as $\\Delta$ approaches 0 \\cite{patelFBR}.\nThree different values of $\\Delta$ were used to generate the scattering kernels shown in \\cref{EXP}.\nThe generated scattering kernels are shown in \\cref{EXP}.\nGMRES, DSA, FPSA, and NFPA all converged to the same solution for problems 1 and 2.\n\\Cref{EK_plots} shows the solutions for EK with $\\Delta = 10^{-7}$.\n\\begin{figure}[t]\n\\begin{center}\n \\includegraphics[scale=0.1,angle=0]{EXP.jpg}\n \\caption{Exponential Kernels}\n \\label{EXP}\n\\end{center}\n\\end{figure}\n\\begin{figure}[H]\n \\centering\n \\subfloat[Problem 1]{{\\includegraphics[width=7cm]{dta7_iso.jpg} }}\n \\qquad\n \\subfloat[Problem 2]{{\\includegraphics[width=7cm]{dta7_beam.jpg} }}\n \\caption{Results for EK Problems with $\\Delta = 10^{-7}$}\n \\label{EK_plots}\n\\end{figure}\n\nThe runtimes and iterations for GMRES, DSA, FPSA, and NFPA are shown in \\cref{Expresults1,Expresults2}.\nWe see a similar trend with the EK as seen with SRK.\nSmaller $\\Delta$ values lead to a reduction in runtime and iterations for NFPA and FPSA, which greatly outperform DSA and GMRES in both categories.\n\n\\begin{table}[h]\n\\begin{center}\n\\scalebox{0.8}{\n\\begin{tabular}{c || c || c || c} \\hline \nParameter & 
Solver & Runtime (s) & Iterations \\\\ \\hline \\hline\n\\multirow{4}{*}{$\\Delta = 10^{-5}$} & GMRES & 196 & 142 \\\\\n& DSA & 3110 & 70140 \\\\\n& FPSA & 0.514 & 11 \\\\ \n& NFPA & 0.630 & 11 \\\\\\hline \n\\multirow{4}{*}{$\\Delta = 10^{-6}$} & GMRES & 156 & 132 \\\\\n& DSA & 3120 & 70758 \\\\\n& FPSA & 0.388 & 7 \\\\ \n& NFPA & 0.393 & 7 \\\\ \\hline \n\\multirow{4}{*}{$\\Delta = 10^{-7}$} & GMRES & 81 & 127 \\\\\n& DSA & 3120 & 70851 \\\\\n& FPSA & 0.292 & 6 \\\\ \n& NFPA & 0.318 & 6 \\\\ \\hline\n\\end{tabular}}\n\\end{center}\n\\caption{Runtime and Iteration Counts for Problem 1 with EK}\n\\label{Expresults1} \n\\end{table}\n\\begin{table}[h]\n\\begin{center}\n\\scalebox{0.8}{\n\\begin{tabular}{c || c || c || c} \\hline \nParameter & Solver & Runtime (s) & Iterations \\\\ \\hline \\hline\n\\multirow{4}{*}{$\\Delta = 10^{-5}$} & GMRES & 110 & 73 \\\\\n& DSA & 1455 & 33033 \\\\\n& FPSA & 0.492 & 10 \\\\ \n& NFPA & 0.613 & 10 \\\\ \\hline \n\\multirow{4}{*}{$\\Delta = 10^{-6}$} & GMRES & 82.7 & 79 \\\\\n& DSA & 1470 & 33309 \\\\\n& FPSA & 0.358 & 7 \\\\ \n& NFPA & 0.431 & 7 \\\\ \\hline \n\\multirow{4}{*}{$\\Delta = 10^{-7}$} & GMRES & 56.8 & 90 \\\\\n& DSA & 1470 & 33339 \\\\\n& FPSA & 0.273 & 5 \\\\ \n& NFPA & 0.319 & 5 \\\\ \\hline \n\\end{tabular}}\n\\end{center}\n\\caption{Runtime and Iteration Counts for Problem 2 with EK}\n\\label{Expresults2} \n\\end{table}\n\n\\subsubsection{HGK: Henyey-Greenstein Kernel}\n\nThe Henyey-Greenstein Kernel \\cite{HGK,JapanFPSA} is most commonly used in light transport in clouds.\nIt relies on the anisotropy factor $g$, such that\n\\begin{equation}\n\\sigma^{HGK}_{s,l} = \\sigma_s g^l.\n\\end{equation}\nAs $g$ goes from zero to unity, the scattering shifts from isotropic to highly anisotropic.\n\\begin{figure}[H]\n\\begin{center}\n \\includegraphics[scale=0.1,angle=0]{HGK.jpg}\n \\caption{Henyey-Greenstein Kernels}\n \\label{HGK}\n\\end{center}\n\\end{figure}\n\\begin{figure}[H]\n \\centering\n \\subfloat[Problem 1]{{\\includegraphics[width=7cm]{g099_iso.jpg} }}\n \\qquad\n \\subfloat[Problem 2]{{\\includegraphics[width=7cm]{g099_beam.jpg} }}\n \\caption{Results for HGK Problems with $g = 0.99$}\n \\label{HGK_plots}\n\\end{figure}\n\n\nThe HGK does not have a valid FP limit \\cite{patelFBR}.\nThe three kernels tested are shown in \\cref{HGK}.\nGMRES, DSA, FPSA, and NFPA all converged to the same solution for problems 1 and 2.\n\\Cref{HGK_plots} shows the solutions for HGK with $g = 0.99$.\nThe results of each solver are shown in \\cref{HGKresults1,HGKresults2}. 
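For reference, the scattering-kernel moments used throughout these experiments are straightforward to generate. Henyey-Greenstein moments follow the closed form $\sigma_s g^l$ given below, while kernels defined by an angular density, such as the SRK, can be integrated numerically. The sketch below uses Gauss-Legendre quadrature with an illustrative, mildly peaked value of eta; the very small eta values used in the experiments concentrate the integrand near mu = 1 and would require adaptive or much higher-order quadrature.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def kernel_moments(density, sigma_s, L, n_quad=128):
    # sigma_{s,l} = sigma_s * integral_{-1}^{1} P_l(mu) f(mu) dmu
    mu, w = leggauss(n_quad)
    f = density(mu)
    return np.array([sigma_s * np.sum(w * Legendre.basis(l)(mu) * f)
                     for l in range(L + 1)])

eta, sigma_s = 0.05, 1.0            # illustrative only, not the paper's 1e-5..1e-7
srk = lambda mu: eta * (eta + 1.0) / (1.0 + 2.0 * eta - mu) ** 2
srk_moments = kernel_moments(srk, sigma_s, L=15)

g = 0.95
hgk_moments = sigma_s * g ** np.arange(16)   # Henyey-Greenstein: sigma_s * g**l
\end{verbatim}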
\n\\begin{table}[h]\n\\begin{center}\n\\scalebox{0.8}{\n\\begin{tabular}{c || c || c || c} \\hline \nParameter & Solver & Runtime (s) & Iterations \\\\ \\hline \\hline\n\\multirow{4}{*}{$g=0.9$} & GMRES & 9.88 & 76 \\\\\n& DSA & 24.5 & 554 \\\\\n& FPSA & 1.50 & 32 \\\\ \n& NFPA & 1.39 & 27 \\\\ \\hline \n\\multirow{4}{*}{$g=0.95$} & GMRES & 12.2 & 131 \\\\\n& DSA & 47.7 & 1083 \\\\\n& FPSA & 1.75 & 38 \\\\ \n& NFPA & 1.83 & 35 \\\\ \\hline \n\\multirow{4}{*}{$g=0.99$} & GMRES & 40.0 & 27 \\\\\n& DSA & 243 & 5530 \\\\\n& FPSA & 3.38 & 74 \\\\ \n& NFPA & 3.93 & 73 \\\\ \\hline\n\\end{tabular}}\n\\end{center}\n\\caption{Runtime and Iteration Counts for Problem 1 with HGK}\n\\label{HGKresults1} \n\\end{table}\n\\begin{table}[h]\n\\begin{center}\n\\scalebox{0.8}{\n\\begin{tabular}{c || c || c || c} \\hline \nParameter & Solver & Runtime (s) & Iterations \\\\ \\hline \\hline\n\\multirow{4}{*}{$g=0.9$} & GMRES & 24.3 & 135 \\\\\n& DSA & 14.8 & 336 \\\\\n& FPSA & 1.15 & 23 \\\\ \n& NFPA & 1.35 & 24 \\\\ \\hline \n\\multirow{4}{*}{$g=0.95$} & GMRES & 31.3 & 107 \\\\\n& DSA & 29.7 & 675 \\\\\n& FPSA & 1.56 & 32 \\\\ \n& NFPA & 1.90 & 33 \\\\ \\hline \n\\multirow{4}{*}{$g=0.99$} & GMRES & 41.4 & 126 \\\\\n& DSA & 146 & 3345 \\\\\n& FPSA & 3.31 & 67 \\\\ \n& NFPA & 3.99 & 67 \\\\ \\hline \n\\end{tabular}}\n\\end{center}\n\\caption{Runtime and Iteration Counts for Problem 2 with HGK}\n\\label{HGKresults2} \n\\end{table}\n\nHere we see that NFPA and FPSA do not perform as well compared to their results for the SRK and EK.\nContrary to what happened in those cases, both solvers require more time and iterations as the problem becomes more anisotropic.\nThis is somewhat expected, due to HGK not having a valid Fokker-Planck limit.\nHowever, both NFPA and FPSA continue to greatly outperform GMRES and DSA.\nMoreover, NFPA outperforms FPSA in iteration count for problem 1.\n\n\n\\section{Discussion}\\label{sec4}\n\nThis paper introduced the Nonlinear Fokker-Planck Acceleration technique for steady-state, monoenergetic transport in homogeneous slab geometry.\nTo our knowledge, this is the first nonlinear HOLO method that accelerates \\textit{all $L$ moments} of the angular flux.\nUpon convergence, the LO and HO models are consistent; in other words, the (lower-order) modified Fokker-Planck equation \\textit{preserves the same angular moments} of the flux obtained with the (higher-order) transport equation.\n\nNFPA was tested on a homogeneous medium with an isotropic internal source with vacuum boundaries, and in a homogeneous medium with no internal source and an incoming beam boundary.\nFor both problems, three different scattering kernels were used.\nThe runtime and iterations of NFPA and FPSA were shown to be similar.\nThey both vastly outperformed DSA and GMRES for all cases by orders of magnitude.\nHowever, NFPA has the feature of preserving the angular moments of the flux in both the HO and LO equations, which offers the advantage of integrating the LO model into multiphysics models. \n\nIn the future, we intend to test NFPA capabilities for a variety of multiphysics problems and analyze its performance.\nTo apply NFPA to more realistic problems, it needs to be extended to include time and energy dependence. 
\nAdditionally, the method needs to be adapted to address geometries with higher-order spatial dimensions.\nFinally, for the NFPA method to become mathematically ``complete\", a full convergence examination using Fourier analysis must be performed.\nHowever, this is beyond the scope of this paper and must be left for future work.\n\n\section*{Acknowledgements}\n\nThe authors acknowledge support under award number NRC-HQ-84-15-G-0024 from the Nuclear Regulatory Commission.\nThe statements, findings, conclusions, and recommendations are those of the authors and do not necessarily reflect the view of the U.S. Nuclear Regulatory Commission.\n\nJ.~K. Patel would like to thank Dr.~James Warsa for his wonderful transport class at UNM, as well as his synthetic acceleration codes.\nThe authors would also like to thank Dr.~Anil Prinja for discussions involving Fokker-Planck acceleration.\n\n\n\n", "answers": ["NFPA and FPSA greatly outperform GMRES and DSA."], "length": 3996, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "a40877e222497d3ff2efbeb1926e20600f8aac947820063c"} {"input": "What genre did Margaret Way write in?", "context": "Margaret Way (b. Brisbane; d. Cleveland, Queensland, Australia) was an Australian writer of romance novels and women's fiction. A prolific author, Way wrote more than 120 novels from 1970 onwards, many through Mills & Boon, a romance imprint of British publisher Harlequin UK Ltd., owned by Harlequin Enterprises.\n\nBiography\nBefore her marriage, she was a well-known pianist, teacher, vocal coach and accompanist. She began writing when her son, Laurence Way, was born: a friend brought her a pile of Mills & Boon books, she read them all, and she decided that she could write that type of novel herself. She began to write, promoting her country through stories set in Australia. She sold her first novels in 1970. Margaret Way lived with her family in her native Brisbane.

Beginning in 2013, Margaret began to self-publish, releasing her first \"e-book\" mid-July.\n\nMargaret died on the 10th of August 2022 in Cleveland, Queensland.\n\nBibliography\n\nSingle Novels\nKing Country (1970)\nBlaze of Silk (1970)\nThe Time of the Jacaranda (1970)\nBauhinia Junction (1971)\nMan from Bahl Bahla (1971)\nSummer Magic (1971)\nReturn to Belle Amber (1971)\nRing of Jade (1972)\nCopper Moon (1972)\nRainbow Bird (1972)\nMan Like Daintree (1972)\nNoonfire (1972)\nStorm Over Mandargi (1973)\nWind River (1973)\nLove Theme (1974)\nMcCabe's Kingdom (1974)\nSweet Sundown (1974)\nReeds of Honey (1975)\nStorm Flower (1975)\nLesson in Loving (1975)\nFlight into Yesterday (1976)\nRed Cliffs of Malpara (1976)\nMan on Half-moon (1976)\nSwan's Reach (1976)\nMutiny in Paradise (1977)\nOne Way Ticket (1977)\nPortrait of Jaime (1977)\nBlack Ingo (1977)\nAwakening Flame (1978)\nWild Swan (1978)\nRing of Fire (1978)\nWake the Sleeping Tiger (1978)\nValley of the Moon (1979)\nWhite Magnolia (1979)\nWinds of Heaven (1979)\nBlue Lotus (1979)\nButterfly and the Baron (1979)\nGolden Puma (1980)\nTemple of Fire (1980)\nLord of the High Valley (1980)\nFlamingo Park (1980)\nNorth of Capricorn (1981)\nSeason for Change (1981)\nShadow Dance (1981)\nMcIvor Affair (1981)\nHome to Morning Star (1981)\nBroken Rhapsody (1982)\nThe Silver Veil (1982)\nSpellbound (1982)\nHunter's Moon (1982)\nGirl at Cobalt Creek (1983)\nNo Alternative (1983)\nHouse of Memories (1983)\nAlmost a Stranger (1984)\nA place called Rambulara (1984)\nFallen Idol (1984)\nHunt the Sun (1985)\nEagle's Ridge (1985)\nThe Tiger's Cage (1986)\nInnocent in Eden (1986)\nDiamond Valley (1986)\nMorning Glory (1988)\nDevil Moon (1988)\nMowana Magic (1988)\nHungry Heart (1988)\nRise of an Eagle (1988)\nOne Fateful Summer (1993)\nThe Carradine Brand (1994)\nHolding on to Alex (1997)\nThe Australian Heiress (1997)\nClaiming His Child (1999)\nThe Cattleman's Bride (2000)\nThe Cattle Baron (2001)\nThe Husbands of the Outback (2001)\nSecrets of the Outback (2002)\nWith This Ring (2003)\nInnocent Mistress (2004)\nCattle Rancher, Convenient Wife (2007)\nOutback Marriages (2007)\nPromoted: Nanny to Wife (2007)\nCattle Rancher, Secret Son (2007)\nGenni's Dilemma (2008)\nBride At Briar Ridge (2009)\nOutback Heiress, Surprise Proposal (2009)\nCattle Baron, Nanny Needed (2009)\n\nLegends of the Outback Series\nMail Order Marriage (1999)\nThe Bridesmaid's Wedding (2000)\nThe English Bride (2000)\nA Wife at Kimbara (2000)\n\nKoomera Crossing Series\nSarah's Baby (2003)\nRunaway Wife (2003)\nOutback Bridegroom (2003)\nOutback Surrender (2003)\nHome to Eden (2004)\n\nMcIvor Sisters Series\nThe Outback Engagement (2005)\nMarriage at Murraree (2005)\n\nMen Of The Outback Series\nThe Cattleman (2006)\nThe Cattle Baron's Bride (2006)\nHer Outback Protector (2006)\nThe Horseman (2006)\n\nOutback Marriages Series\nOutback Man Seeks Wife (2007)\nCattle Rancher, Convenient Wife (2007)\n\nBarons of the Outback Series Multi-Author\nWedding At Wangaree Valley (2008)\nBride At Briar's Ridge (2008)\n\nFamily Ties Multi-Author\nOnce Burned (1995)\n\nHitched! Multi-Author\nA Faulkner Possession (1996)\n\nSimply the Best Multi-Author\nGeorgia and the Tycoon (1997)\n\nThe Big Event Multi-Author\nBeresford's Bride (1998)\n\nGuardian Angels Multi-Author\nGabriel's Mission (1998)\n\nAustralians Series Multi-Author\n7. Her Outback Man (1998)\n17. Master of Maramba (2001)\n19. Outback Fire (2001)\n22. Mistaken Mistress (2002)\n24. Outback Angel (2002)\n33. 
The Australian Tycoon's Proposal (2004)\n35. His Heiress Wife (2004)\n\nMarrying the Boss Series Multi-Author\nBoardroom Proposal (1999)\n\nContract Brides Series Multi-Author\nStrategy for Marriage (2002)\n\nEverlasting Love Series Multi-Author\nHidden Legacy (2008)\n\nDiamond Brides Series Multi-Author\nThe Australian's Society Bride (2008)\n\nCollections\nSummer Magic / Ring of Jade / Noonfire (1981)\nWife at Kimbara / Bridesmaid's Wedding (2005)\n\nOmnibus in Collaboration\nPretty Witch / Without Any Amazement / Storm Over Mandargi (1977) (with Lucy Gillen and Margaret Malcolm)\nDear Caliban / Heart of the Eagle / Swans' Reach (1978) (with Jane Donnelly and Elizabeth Graham)\nThe Bonds of Matrimony / Dragon Island / Reeds of Honey (1979) (with Elizabeth Hunter and Henrietta Reid)\nThe Man Outside / Castles in Spain / McCabe's Kingdom (1979) (with Jane Donnelly and Rebecca Stratton)\nWinds From The Sea / Island of Darkness / Wind River (1979) (with Margaret Pargeter and Rebecca Stratton)\nMoorland Magic / Tree of Idleness / Sweet Sundown (1980) (with Elizabeth Ashton and Elizabeth Hunter)\nThe Shifting Sands / Portrait of Jaime / Touched by Fire (1982) (with Jane Donnelly and Kay Thorpe)\nHead of Chancery / Wild Heart / One-Way Ticket (1986) (with Betty Beaty and Doris Smith)\nHeart of the Scorpion / The Winds of Heaven / Sweet Compulsion (1987) (with Janice Gray and Victoria Woolf)\nOne Brief Sweet Hour / Once More With Feeling / Blue Lotus (1990) (with Jane Arbor and Natalie Sparks)\nMarry Me Cowboy (1995) (with Janet Dailey, Susan Fox and Anne McAllister)\nHusbands on Horseback (1996) (with Diana Palmer)\nWedlocked (1999) (with Day Leclaire and Anne McAllister)\nMistletoe Magic (1999) (with Betty Neels and Rebecca Winters)\nThe Australians (2000) (with Helen Bianchin and Miranda Lee)\nWeddings Down Under (2001) (with Helen Bianchin and Jessica Hart)\nOutback Husbands (2002) (with Marion Lennox)\nThe Mother's Day Collection (2002) (with Helen Dickson and Kate Hoffmann)\nAustralian Nights (2003) (with Miranda Lee)\nOutback Weddings (2003) (with Barbara Hannay)\nAustralian Playboys (2003) (with Helen Bianchin and Marion Lennox)\nAustralian Tycoons (2004) (with Emma Darcy and Marion Lennox)\nA Mother's Day Gift (2004) (with Anne Ashley and Lucy Monroe)\nWhite Wedding (2004) (with Judy Christenberry and Jessica Steele)\nA Christmas Engagement (2004) (with Sara Craven and Jessica Matthews)\nA Very Special Mother's Day (2005) (with Anne Herries)\nAll I Want for Christmas... 
(2005) (with Betty Neels and Jessica Steele)\nThe Mills and Boon Collection (2006) (with Caroline Anderson and Penny Jordan)\nOutback Desire (2006) (with Emma Darcy and Carol Marinelli)\nTo Mum, with Love (2006) (with Rebecca Winters)\nAustralian Heroes (2007) (with Marion Lennox and Fiona McArthur)\nTall, Dark and Sexy (2008) (with Caroline Anderson and Helen Bianchin)\nThe Boss's Proposal (2008) (with Jessica Steele and Patricia Thayer)\nIsland Heat / Outback Man Seeks Wife / Prince's Forbidden Virgin / One Night Before Marriage / Their Lost-and-found Family / Single Dad's Marriage Wish (2008) (with Robyn Donald, Marion Lennox, Carol Marinelli, Sarah Mayberry and Anne Oliver)\nAustralian Billionaires (2009) (with Jennie Adams and Amy Andrews)\nCattle Baron : Nanny Needed / Bachelor Dad on Her Doorstep (2009) (with Michelle Douglas)\n\nExternal links\nMargaret Way at Harlequin Enterprises Ltd\n\nAustralian romantic fiction writers\nAustralian women novelists\nLiving people\nYear of birth missing (living people)\nWomen romantic fiction writers", "answers": ["Romance novels and women's fiction."], "length": 1193, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "fc1544761460f97af3a338bbf3e3c648c00ec27b8b297069"} {"input": "What are the datasets used in this community for research?", "context": "\\section{Introduction}\nUnderwater robot picking is to use the robot to automatically capture sea creatures like holothurian, echinus, scallop, or starfish in an open-sea farm where underwater object detection is the key technology for locating creatures. Until now, the datasets used in this community are released by the Underwater Robot Professional Contest (URPC$\\protect\\footnote{Underwater Robot Professional Contest: {\\bf http://en.cnurpc.org}.}$) beginning from 2017, in which URPC2017 and URPC2018 are most often used for research. Unfortunately, as the information listed in Table \\ref{Info}, URPC series datasets do not provide the annotation file of the test set and cannot be downloaded after the contest. \nTherefore, researchers \\cite{2020arXiv200511552C,2019arXiv191103029L} first have to divide the training data into two subsets, including a new subset of training data and a new subset of testing data, and then train their proposed method and other \\emph{SOTA} methods. On the one hand, training other methods results in a significant increase in workload. On the other hand, different researchers divide different datasets in different ways, \n\\begin{table}[t]\n\\renewcommand\\tabcolsep{3.5pt}\n\\caption{Information about all the collected datasets. * denotes the test set's annotations are not available. \\emph{3} in Class means three types of creatures are labeled, \\emph{i.e.,} holothurian, echinus, and scallop. \\emph{4} means four types of creatures are labeled (starfish added). 
Retention represents the proportion of images that retain after similar images have been removed.}\n\\centering \n\\begin{tabular}{|l|c|c|c|c|c|}\n\\hline\nDataset&Train&Test&Class&Retention&Year \\\\ \n\\hline \nURPC2017&17,655&985*&3&15\\%&2017 \\\\\n\\hline\nURPC2018&2,901&800*&4&99\\%&2018 \\\\\n\\hline\nURPC2019&4,757&1,029*&4&86\\%&2019 \\\\\n\\hline\nURPC2020$_{ZJ}$&5,543&2,000*&4&82\\%&2020 \\\\\n\\hline\nURPC2020$_{DL}$&6,575&2,400*&4&80\\%&2020 \\\\\n\\hline\nUDD&1,827&400&3&84\\%&2020 \\\\\n\\hline \n\n\\end{tabular}\n\\label{Info}\n\\end{table}\n\\begin{figure*}[htbp]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{example.pdf}\n\\end{center}\n \\caption{Examples in DUO, which show a variety of scenarios in underwater environments.}\n\\label{exam}\n\\end{figure*}\ncausing there is no unified benchmark to compare the performance of different algorithms.\nIn terms of the content of the dataset images, there are a large number of similar or duplicate images in the URPC datasets. URPC2017 only retains 15\\% images after removing similar images compared to other datasets. Thus the detector trained on URPC2017 is easy to overfit and cannot reflect the real performance.\nFor other URPC datasets, the latter also includes images from the former, \\emph{e.g.}, URPC2019 adds 2,000 new images compared to URPC2018; compared with URPC2019, URPC2020$_{ZJ}$ adds 800 new images. The URPC2020$_{DL}$ adds 1,000 new images compared to the URPC2020$_{ZJ}$. It is worth mentioning that the annotation of all datasets is incomplete; some datasets lack the starfish labels and it is easy to find error or missing labels. \\cite{DBLP:conf/iclr/ZhangBHRV17} pointed out that although the CNN model has a strong fitting ability for any dataset, the existence of dirty data will significantly weaken its robustness.\nTherefore, a reasonable dataset (containing a small number of similar images as well as an accurate annotation) and a corresponding recognized benchmark are urgently needed to promote community development.\n\n\nTo address these issues, we introduce a dataset called Detecting Underwater Objects (DUO) by collecting and re-annotating all the available underwater datasets. It contains 7,782 underwater images after deleting overly similar images and has a more accurate annotation with four types of classes (\\emph{i.e.,} holothurian, echinus, scallop, and starfish). \nBesides, based on the MMDetection$\\protect\\footnote{MMDetection is an open source object detection toolbox based on PyTorch. {\\bf https://github.com/open-mmlab/mmdetection}}$ \\cite{chen2019mmdetection} framework, we also provide a \\emph{SOTA} detector benchmark containing efficiency and accuracy indicators, providing a reference for both academic research and industrial applications. It is worth noting that JETSON AGX XAVIER$\\protect\\footnote{JETSON AGX XAVIER is an embedded development board produced by NVIDIA which could be deployed in an underwater robot. Please refer {\\bf https://developer.nvidia.com/embedded/jetson-agx-xavier-developer-kit} for more information.}$ was used to assess all the detectors in the efficiency test in order to simulate robot-embedded environment. 
DUO will be released in https://github.com/chongweiliu soon.\n\nIn summary, the contributions of this paper can be listed as follows.\n\n $\\bullet$ By collecting and re-annotating all relevant datasets, we introduce a dataset called DUO with more reasonable annotations as well as a variety of underwater scenes.\n\n $\\bullet$ We provide a corresponding benchmark of \\emph{SOTA} detectors on DUO including efficiency and accuracy indicators which could be a reference for both academic research and industrial applications. \n\n\n\\pagestyle{empty}\n\\section{Background}\nIn the year of 2017, underwater object detection for open-sea farming is first proposed in the target recognition track of Underwater Robot Picking Contest 2017$\\protect\\footnote{From 2020, the name has been changed into Underwater Robot Professional Contest which is also short for URPC.}$ (URPC2017) which aims to promote the development of theory, technology, and industry of the underwater agile robot and fill the blank of the grabbing task of the underwater agile robot. The competition sets up a target recognition track, a fixed-point grasping track, and an autonomous grasping track. The target recognition track concentrates on finding the {\\bf high accuracy and efficiency} algorithm which could be used in an underwater robot for automatically grasping.\n\nThe datasets we used to generate the DUO are listed below. The detailed information has been shown in Table \\ref{Info}.\n\n {\\bf URPC2017}: It contains 17,655 images for training and 985 images for testing and the resolution of all the images is 720$\\times$405. All the images are taken from 6 videos at an interval of 10 frames. However, all the videos were filmed in an artificial simulated environment and pictures from the same video look almost identical. \n \n {\\bf URPC2018}: It contains 2,901 images for training and 800 images for testing and the resolutions of the images are 586$\\times$480, 704$\\times$576, 720$\\times$405, and 1,920$\\times$1,080. The test set's annotations are not available. Besides, some images were also collected from an artificial underwater environment.\n \n {\\bf URPC2019}: It contains 4,757 images for training and 1029 images for testing and the highest resolution of the images is 3,840$\\times$2,160 captured by a GOPro camera. The test set's annotations are also not available and it contains images from the former contests.\n \n {\\bf URPC2020$_{ZJ}$}: From 2020, the URPC will be held twice a year. It was held first in Zhanjiang, China, in April and then in Dalian, China, in August. URPC2020$_{ZJ}$ means the dataset released in the first URPC2020 and URPC2020$_{DL}$ means the dataset released in the second URPC2020. This dataset contains 5,543 images for training and 2,000 images for testing and the highest resolution of the images is 3,840$\\times$2,160. The test set's annotations are also not available.\n \n {\\bf URPC2020$_{DL}$}: This dataset contains 6,575 images for training and 2,400 images for testing and the highest resolution of the images is 3,840$\\times$2,160. The test set's annotations are also not available.\n \n {\\bf UDD \\cite{2020arXiv200301446W}}: This dataset contains 1,827 images for training and 400 images for testing and the highest resolution of the images is 3,840$\\times$2,160. 
All the images are captured by a diver and a robot in a real open-sea farm.\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=1\\linewidth]{pie.pdf}\n\\end{center}\n \\caption{The proportion distribution of the objects in DUO.}\n\\label{pie}\n\\end{figure}\n\n\n\n\\begin{figure*}\n \\centering\n \\subfigure[]{\\includegraphics[width=3.45in]{imagesize.pdf}}\n \\subfigure[]{\\includegraphics[width=3.45in]{numInstance.pdf}}\n \\caption{(a) The distribution of instance sizes for DUO; (b) The number of categories per image.}\n \\label{sum}\n\\end{figure*}\n\\section{Proposed Dataset}\n\n\\subsection{Image Deduplicating}\nAs we explained in Section 1, there are a large number of similar or repeated images in the series of URPC datasets. Therefore, it is important to delete duplicate or overly similar images and keep a variety of underwater scenarios when we merge these datasets together. Here we employ the Perceptual Hash algorithm (PHash) to remove those images. PHash has the special property that the hash value is dependent on the image content, and it remains approximately the same if the content is not significantly modified. Thus we can easily distinguish different scenarios and delete duplicate images within one scenario. \n\nAfter deduplicating, we obtain 7,782 images (6,671 images for training; 1,111 for testing). The retention rate of the new dataset is 95\\%, which means that there are only a few similar images in the new dataset. Figure \\ref{exam} shows that our dataset also retains various underwater scenes.\n\n\\subsection{Image Re-annotation}\nDue to the small size of objects and the blur underwater environment, there are always missing or wrong labels in the existing annotation files. In addition, some test sets' annotation files are not available and some datasets do not have the starfish annotation. In order to address these issues, we follow the next process which combines a CNN model and manual annotation to re-annotate these images. Specifically, we first train a detector (\\emph{i.e.,} GFL \\cite{li2020generalized}) with the originally labeled images. After that, the trained detector predicts all the 7,782 images. We treat the prediction as the groundtruth and use it to train the GFL again. We get the final GFL prediction called {\\bf the coarse annotation}. Next, we use manual correction to get the final annotation called {\\bf the fine annotation}. Notably, we adopt the COCO \\cite{Belongie2014} annotation form as the final format.\n\\subsection{Dataset Statistics}\n{\\bf The proportion of classes}: The total number of objects is 74,515. Holothurian, echinus, scallop, and starfish are 7,887, 50,156, 1,924, and 14,548, respectively. Figure \\ref{pie} shows the proportion of each creatures where echinus accounts for 67.3\\% of the total. The whole data distribution shows an obvious long-tail distribution because the different economic benefits of different seafoods determine the different breed quantities.\n\n{\\bf The distribution of instance sizes}: Figure \\ref{sum}(a) shows an instance size distribution of DUO. \\emph{Percent of image size} represents the ratio of object area to image area, and \\emph{Percent of instance} represents the ratio of the corresponding number of objects to the total number of objects. 
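As an illustration, both statistics can be computed directly from a COCO-format annotation file. The snippet below assumes a hypothetical file name (duo_train.json) and the standard COCO fields; it is a sketch, not the exact script used to produce Figure \ref{sum}.
\begin{verbatim}
import json
import numpy as np

# 'images', 'annotations', 'area', and 'image_id' are standard COCO fields;
# the file name is assumed for illustration.
with open('duo_train.json') as fh:
    coco = json.load(fh)

img_area = {im['id']: im['width'] * im['height'] for im in coco['images']}
ratios = np.array([ann['area'] / img_area[ann['image_id']]
                   for ann in coco['annotations']])
n_img = len(coco['images'])

print(f'{len(ratios)} instances in {n_img} images')
print(f'median object size: {100 * np.median(ratios):.2f}% of the image area')
print(f'instances per image: {len(ratios) / n_img:.2f}')
\end{verbatim}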
Because of these small creatures and high-resolution images, the vast majority of objects occupy 0.3\\% to 1.5\\% of the image area.\n\n{\\bf The instance number per image}: Figure \\ref{sum}(b) illustrates the number of categories per image for DUO. \\emph{Number of instances} represents the number of objects one image has, and \\emph{ Percentage of images} represents the ratio of the corresponding number of images to the total number of images. Most images contain between 5 and 15 instances, with an average of 9.57 instances per image.\n\n{\\bf Summary}:\nIn general, smaller objects are harder to detect. For PASCAL VOC \\cite{Everingham2007The} or COCO \\cite{Belongie2014}, roughly 50\\% of all objects occupy no more than 10\\% of the image itself, and others evenly occupy from 10\\% to 100\\%. \nIn the aspect of instances number per image, COCO contains 7.7 instances per image and VOC contains 3. In comparison, DUO has 9.57 instances per image and most instances less than 1.5\\% of the image size.\nTherefore, DUO contains almost exclusively massive small instances and has the long-tail distribution at the same time, which means it is promising to design a detector to deal with massive small objects and stay high efficiency at the same time for underwater robot picking.\n\n\\section{Benchmark}\nBecause the aim of underwater object detection for robot picking is to find {\\bf the high accuracy and efficiency} algorithm, we consider both the accuracy and efficiency evaluations in the benchmark as shown in Table \\ref{ben}.\n\n\\subsection{Evaluation Metrics}\nHere we adopt the standard COCO metrics (mean average precision, \\emph{i.e.,} mAP) for the accuracy evaluation and also provide the mAP of each class due to the long-tail distribution.\n\n{\\bf AP} -- mAP at IoU=0.50:0.05:0.95.\n\n{\\bf AP$_{50}$} -- mAP at IoU=0.50.\n\n{\\bf AP$_{75}$} -- mAP at IoU=0.75. \n\n{\\bf AP$_{S}$} -- {\\bf AP} for small objects of area smaller than 32$^{2}$.\n\n{\\bf AP$_{M}$} -- {\\bf AP} for objects of area between 32$^{2}$ and 96$^{2}$.\n\n{\\bf AP$_{L}$} -- {\\bf AP} for large objects of area bigger than 96$^{2}$.\n\n{\\bf AP$_{Ho}$} -- {\\bf AP} in holothurian.\n\n{\\bf AP$_{Ec}$} -- {\\bf AP} in echinus.\n\n{\\bf AP$_{Sc}$} -- {\\bf AP} in scallop.\n\n{\\bf AP$_{St}$} -- {\\bf AP} in starfish.\n\n\nFor the efficiency evaluation, we provide three metrics:\n\n{\\bf Param.} -- The parameters of a detector.\n\n{\\bf FLOPs} -- Floating-point operations per second.\n\n{\\bf FPS} -- Frames per second.\n\nNotably, {\\bf FLOPs} is calculated under the 512$\\times$512 input image size and {\\bf FPS} is tested on a JETSON AGX XAVIER under MODE$\\_$30W$\\_$ALL. \n\n\\subsection{Standard Training Configuration}\nWe follow a widely used open-source toolbox, \\emph{i.e.,} MMDetection (V2.5.0) to produce up our benchmark. During the training, the standard configurations are as follows:\n\n $\\bullet$ We initialize the backbone models (\\emph{e.g.,} ResNet50) with pre-trained parameters on ImageNet \\cite{Deng2009ImageNet}.\n\n $\\bullet$ We resize each image into 512 $\\times$ 512 pixels both in training and testing. Each image is flipped horizontally with 0.5 probability during training.\n\n $\\bullet$ We normalize RGB channels by subtracting 123.675, 116.28, 103.53 and dividing by 58.395, 57.12, 57.375, respectively.\n\n $\\bullet$ SGD method is adopted to optimize the model. 
The initial learning rate is set to be 0.005 in a single GTX 1080Ti with batchsize 4 and is decreased by 0.1 at the 8th and 11th epoch, respectively. WarmUp \\cite{2019arXiv190307071L} is also employed in the first 500 iterations. Totally there are 12 training epochs.\n\n $\\bullet$ Testing time augmentation (\\emph{i.e.,} flipping test or multi-scale testing) is not employed.\n\n\n\n\\subsection{Benchmark Analysis}\nTable \\ref{ben} shows the benchmark for the \\emph{SOTA} methods. Multi- and one- stage detectors with three kinds of backbones (\\emph{i.e.,} ResNet18, 50, 101) give a comprehensive assessment on DUO. We also deploy all the methods to AGX to assess efficiency.\n\nIn general, the multi-stage (Cascade R-CNN) detectors have high accuracy and low efficiency, while the one-stage (RetinaNet) detectors have low accuracy and high efficiency. However, due to recent studies \\cite{zhang2019bridging} on the allocation of more reasonable positive and negative samples in training, one-stage detectors (ATSS or GFL) can achieve both high accuracy and high efficiency.\n\n\\begin{table*}[htbp]\n\\renewcommand\\tabcolsep{3.0pt}\n\n\\begin{center}\n\\caption{Benchmark of \\emph{SOTA} detectors (single-model and single-scale results) on DUO. FPS is measured on the same machine with a JETSON AGX XAVIER under the same MMDetection framework, using a batch size of 1 whenever possible. R: ResNet.} \n\\label{ben}\n\\begin{tabular}{|l|l|c|c|c|ccc|ccc|cccc|}\n\\hline\nMethod&Backbone&Param.&FLOPs&FPS&AP&AP$_{50}$&AP$_{75}$&AP$_{S}$&AP$_{M}$&AP$_{L}$&AP$_{Ho}$&AP$_{Ec}$&AP$_{Sc}$&AP$_{St}$ \\\\ \n\\hline \n\\emph{multi-stage:} &&&&&&&&&&&&&& \\\\\n\n\\multirow{3}{*}{Faster R-CNN \\cite{Ren2015Faster}}\n&R-18&28.14M&49.75G&5.7&50.1&72.6&57.8&42.9&51.9&48.7&49.1&60.1&31.6&59.7\\\\\n&R-50&41.14M&63.26G&4.7&54.8&75.9&63.1&53.0&56.2&53.8&55.5&62.4&38.7&62.5\\\\\n&R-101&60.13M&82.74G&3.7&53.8&75.4&61.6&39.0&55.2&52.8&54.3&62.0&38.5&60.4\\\\\n\\hline\n\n\\multirow{3}{*}{Cascade R-CNN \\cite{Cai_2019}}\n&R-18&55.93M&77.54G&3.4&52.7&73.4&60.3&\\bf 49.0&54.7&50.9&51.4&62.3&34.9&62.3\\\\\n&R-50&68.94M&91.06G&3.0&55.6&75.5&63.8&44.9&57.4&54.4&56.8&63.6&38.7&63.5\\\\\n&R-101&87.93M&110.53G&2.6&56.0&76.1&63.6&51.2&57.5&54.7&56.2&63.9&41.3&62.6\\\\\n\\hline\n\n\\multirow{3}{*}{Grid R-CNN \\cite{lu2019grid}}\n&R-18&51.24M&163.15G&3.9&51.9&72.1&59.2&40.4&54.2&50.1&50.7&61.8&33.3&61.9\\\\\n&R-50&64.24M&176.67G&3.4&55.9&75.8&64.3&40.9&57.5&54.8&56.7&62.9&39.5&64.4\\\\\n&R-101&83.24M&196.14G&2.8&55.6&75.6&62.9&45.6&57.1&54.5&55.5&62.9&41.0&62.9\\\\\n\\hline\n\n\\multirow{3}{*}{RepPoints \\cite{yang2019reppoints}}\n&R-18&20.11M&\\bf 35.60G&5.6&51.7&76.9&57.8&43.8&54.0&49.7&50.8&63.3&33.6&59.2\\\\\n&R-50&36.60M&48.54G&4.8&56.0&80.2&63.1&40.8&58.5&53.7&56.7&65.7&39.3&62.3\\\\\n&R-101&55.60M&68.02G&3.8&55.4&79.0&62.6&42.2&57.3&53.9&56.0&65.8&39.0&60.9\\\\\n\\hline \n\\hline \n\\emph{one-stage:} &&&&&&&&&&&&&& \\\\\n\\multirow{3}{*}{RetinaNet \\cite{Lin2017Focal}}\n&R-18&19.68M&39.68G&7.1&44.7&66.3&50.7&29.3&47.6&42.5&46.9&54.2&23.9&53.8\\\\\n&R-50&36.17M&52.62G&5.9&49.3&70.3&55.4&36.5&51.9&47.6&54.4&56.6&27.8&58.3\\\\\n&R-101&55.16M&72.10G&4.5&50.4&71.7&57.3&34.6&52.8&49.0&54.6&57.0&33.7&56.3\\\\\n\\hline \n\n\\multirow{3}{*}{FreeAnchor \\cite{2019arXiv190902466Z}}\n&R-18&19.68M&39.68G&6.8&49.0&71.9&55.3&38.6&51.7&46.7&47.2&62.8&28.6&57.6\\\\\n&R-50&36.17M&52.62G&5.8&54.4&76.6&62.5&38.1&55.7&53.4&55.3&65.2&35.3&61.8\\\\\n&R-101&55.16M&72.10G&4.4&54.6&76.9&62.9&36.5&56.5&52.9&54.0&65.1&38.4&60.7\\\\\n\\hline 
\n\n\\multirow{3}{*}{FoveaBox \\cite{DBLP:journals/corr/abs-1904-03797}}\n&R-18&21.20M&44.75G&6.7&51.6&74.9&57.4&40.0&53.6&49.8&51.0&61.9&34.6&59.1\\\\\n&R-50&37.69M&57.69G&5.5&55.3&77.8&62.3&44.7&57.4&53.4&57.9&64.2&36.4&62.8\\\\\n&R-101&56.68M&77.16G&4.2&54.7&77.3&62.3&37.7&57.1&52.4&55.3&63.6&38.9&60.8\\\\\n\\hline \n\n\\multirow{3}{*}{PAA \\cite{2020arXiv200708103K}}\n&R-18&\\bf 18.94M&38.84G&3.0&52.6&75.3&58.8&41.3&55.1&50.2&49.9&64.6&35.6&60.5\\\\\n&R-50&31.89M&51.55G&2.9&56.8&79.0&63.8&38.9&58.9&54.9&56.5&66.9&39.9&64.0\\\\\n&R-101&50.89M&71.03G&2.4&56.5&78.5&63.7&40.9&58.7&54.5&55.8&66.5&42.0&61.6\\\\\n\\hline \n\n\\multirow{3}{*}{FSAF \\cite{zhu2019feature}}\n&R-18&19.53M&38.88G&\\bf 7.4&49.6&74.3&55.1&43.4&51.8&47.5&45.5&63.5&30.3&58.9\\\\\n&R-50&36.02M&51.82G&6.0&54.9&79.3&62.1&46.2&56.7&53.3&53.7&66.4&36.8&62.5\\\\\n&R-101&55.01M&55.01G&4.5&54.6&78.7&61.9&46.0&57.1&52.2&53.0&66.3&38.2&61.1\\\\\n\\hline \n\n\\multirow{3}{*}{FCOS \\cite{DBLP:journals/corr/abs-1904-01355}}\n&R-18&\\bf 18.94M&38.84G&6.5&48.4&72.8&53.7&30.7&50.9&46.3&46.5&61.5&29.1&56.6\\\\\n&R-50&31.84M&50.34G&5.4&53.0&77.1&59.9&39.7&55.6&50.5&52.3&64.5&35.2&60.0\\\\\n&R-101&50.78M&69.81G&4.2&53.2&77.3&60.1&43.4&55.4&51.2&51.7&64.1&38.5&58.5\\\\\n\\hline \n\n\\multirow{3}{*}{ATSS \\cite{zhang2019bridging}}\n&R-18&\\bf 18.94M&38.84G&6.0&54.0&76.5&60.9&44.1&56.6&51.4&52.6&65.5&35.8&61.9\\\\\n&R-50&31.89M&51.55G&5.2&58.2&\\bf 80.1&66.5&43.9&60.6&55.9&\\bf 58.6&67.6&41.8&64.6\\\\\n&R-101&50.89M&71.03G&3.8&57.6&79.4&65.3&46.5&60.3&55.0&57.7&67.2&42.6&62.9\\\\\n\\hline \n\n\\multirow{3}{*}{GFL \\cite{li2020generalized}}\n&R-18&19.09M&39.63G&6.3&54.4&75.5&61.9&35.0&57.1&51.8&51.8&66.9&36.5&62.5\\\\\n&R-50&32.04M&52.35G&5.5&\\bf 58.6&79.3&\\bf 66.7&46.5&\\bf 61.6&55.6&\\bf 58.6&\\bf 69.1&41.3&\\bf 65.3\\\\\n&R-101&51.03M&71.82G&4.1&58.3&79.3&65.5&45.1&60.5&\\bf 56.3&57.0&\\bf 69.1&\\bf 43.0&64.0\\\\\n\n\n\\hline \n\\end{tabular}\n\\end{center}\n\\end{table*}\nTherefore, in terms of accuracy, the accuracy difference between the multi- and the one- stage methods in AP is not obvious, and the AP$_{S}$ of different methods is always the lowest among the three size AP. For class AP, AP$_{Sc}$ lags significantly behind the other three classes because it has the smallest number of instances. In terms of efficiency, large parameters and FLOPs result in low FPS on AGX, with a maximum FPS of 7.4, which is hardly deployable on underwater robot. Finally, we also found that ResNet101 was not significantly improved over ResNet50, which means that a very deep network may not be useful for detecting small creatures in underwater scenarios. \n\nConsequently, the design of high accuracy and high efficiency detector is still the main direction in this field and there is still large space to improve the performance.\nIn order to achieve this goal, a shallow backbone with strong multi-scale feature fusion ability can be proposed to extract the discriminant features of small scale aquatic organisms; a specially designed training strategy may overcome the DUO's long-tail distribution, such as a more reasonable positive/negative label sampling mechanism or a class-balanced image allocation strategy within a training batch.\n\n\\section{Conclusion}\nIn this paper, we introduce a dataset (DUO) and a corresponding benchmark to fill in the gaps in the community. DUO contains a variety of underwater scenes and more reasonable annotations. 
Benchmark includes efficiency and accuracy indicators to conduct a comprehensive evaluation of the \\emph{SOTA} decoders. The two contributions could serve as a reference for academic research and industrial applications, as well as promote community development.\n\\bibliographystyle{IEEEbib}\n", "answers": ["URPC2017, URPC2018, URPC2019, URPC2020_ZJ and URPC2020_DL."], "length": 2616, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "b321c147c0e0a8a35ee8738ad89326ecad1509bf591b4233"} {"input": "Is there any evidence of heaven and hell?", "context": "LOL Anonymous I'm here! & I did answer you back there.\nkl el kotb el qadema (tawrat + enjel + 9o7f ibrahem) 7orfat w t'3ayarat.. law ma 9ar hal shy chan ma 6ala3 ktab yded ye;3y elly gablah l7ad ma 6ala3 lna el quraan b norah elly allah 7f'6ah don ta7ref w tabdel ela yom el dein .. shlon tyeeb Joan shy ma5ooth 5erah?!\n1- Imagine you have two individuals. One truly believes in God but he is a bad husband and father who cheats on his wife and neglects his children. The other man is a nonbeliever; he accepts religion in theory and doctrine but simply does not have faith no matter how hard he tried. He is a good father and a good husband as well. When they stand before God at the Day of Judgment, would the nonbeliever be condemned to eternal hell only for that one flaw in him? Don't the other good traits in that person matter at all? Does being a nonbeliever equal being evil?\n2-God knows everything. However, why would He have created when He knows exactly what each person is going to do and where they will end up? For example, let us suppose that tomorrow I will murder someone. God knows that I will do that. And He knows that I will suffer in hell when the time comes. So why does He purposely inflict that pain on me? To teach me a lesson? He knows the outcome of everything and everyone He has created.\n3-Since you brought up science in the previous comments section, I am seriously curious (I am not trying to be glib) about dinosaurs and prehistoric humans. We have seen the evolution of humans and we have fossils and hard evidence of it. The science world has traced back \"Eve\" to Africa. The concept of Adam and Eve seems too mythological to be 100% factual. It has the same tone of Greek and Ancient Egyptian legends.\n-Note: I am not trying to corner you here, I just want to know your response to my queries and thoughts. Thanks.\n_ ولماذا نصلى ولمن نصلى .. انى لا ارى لصلاتكم هذه اى حكمه ولماذا كل تلك الحركات اما كان يكفى الخشوع..\nاعتقد ان المشكلة ليست كما ذكرت..فما ذكرته يعني ان هناك العديد من المسائل الفقهية المتصلة بالعصر الجديد دون حل. على العكس, فالحلال بين والحرام بين. اعتقد المشكلة تكمن بالمسلمين الذين اصابهم الوهن.\nYou'll be answerd personaly only if; you comment with your blogger's name/nick.\nI answered you before & there was no response from your behalf. Why would you think you're worth an answer from me since you can seek answers from good books?\nYour questions are very simple, with clean cut answers. Don't use the word corner; it indicates things you don't have the first alphabet for!\nReligion was established to calm our fears from the unknown. Freud did write about it in the father figure argument. Try to read it.\nI deleted my last comment because in the end of it I inserted some thing that may offend some people (if you read it, I gauss you know what I mean). Abraham did not believe in God until he saw the birds come to life. 
So, all what I want is a single prove similar to Abraham.\nI indeed busy doing my homework; I have to solve the energy equation and the continuity equation to come with another, hopefully, exact solution to the problem.\nDo you work? I mean do you have a job, it seems that you are always on line!!!???? It is amazing how some one would stick to the internet. Try to go out more often, play sport or find any other hobby. If you could not disattach yourself from the internet try and seek a professional help… :-). Honestly, I’m not making fun of you it is just an advice… sincere advice.\nI measure prosperity of a nation by the degree of its civilization in comparison to others of its time. Islamic civilization was at the peak during the Umayyad and Abbasid reigns, at the time when Moslems conquered other nations and dying civilizations, and mixed with them, and the new comers in Islam started researching for the truth. In their quest they translated Greek philosophy books and dug into ancient ones. Their search for truth led them to the emergence of chemistry, Algebra and other sciences And their findings excelled them in innovative inventions. The maps and astrolabes that Columbus used to sail on his quest to discover the West years later were taken from the Arabs. And in contrary to the scribbles of their holy book, Arabic books were the first that brought the roundedness of earth to world attention. Arabic language romanticized the rigid Latin, poetry and verse advanced. At one time, civilization was dubbed as Arabization in Andalusia and that was evident in Andalusia’s Christian and Jewish ancient books. Islamic Architecture and their aggregation systems portrayed marvels of their time. Islamic libraries were huge and rich for the European knowledge seekers.\nAnd history tells us that those were the times when Moslems were the furthest from the dogma of their religion, there societies were open to controlled prostitution in the name of dancimng Jawari, and alcohol was openly consumed. We also notice in these periods that struggle for power among them mounted, yet they were the most tolerant to other nations and beliefs. There was nothing Islamic about them at that time, only a name. After that time, the Islamic civilization collapsed and kept deteriorating as people clung more to the their religious beliefs that were diverted through different interpretations.\nAnd based on that; I believe that it’s not only Moslems who missread their religion, the problem stemmed from the core of the religion (Quran and tradition) for being vague and subject to different interpretations. If Quran was the true word of God, then it should have contained the miracle of not being subject to different interpretations, at least to keep Moslems united, and then, one would believe that Islam is suitable for every location and time.\nNothing can give a solid proof of the existence of heaven and hell, yet, nothing can disprove it either. Same goes to God. So the chance remain fifty-fifty. You either believe in it for the sake of taking a lesser risk if you don’t believe in blind faith, or discard it all together. Yet, the possibility is there until proved or disproved. As for now; these cases can only be understood through logic. So let’s reiterate your question and narrow it down to make it closer to something that can be measured.\nIs there any mention of the stories of prophets outside the holy books? Are there any archeological evidences of each prophet’s reign (and I stress prophet’s)? 
Are there any mention of each prophet in history books outside its holy book, or the books that were based on it’s holy books? Ok, that was too general, let’s narrow it down a bit since your knowledge of your religion, mashallah, especially in reciting Quran is superb.\nIs there any mention of the prophet Mohammad reign in non-Moslem history books other than those referenced to Moslem books? And if there were; did those stories mach? Did archeology of Mohammad’s time match those stories? Did ancient books of other nations confirm those stories? After all Mecca was an open, mid trade center for traders of the South and the North, and supposedly was exposed and open to other nations, and an emergence of a new religion would not have gone unnoticed in the ancient books of those nations.\nThat's why faith is called \"FAITH\" to believe in the unseen, you see.\nkeep in mind that every major religion has gone through various phases from emergence to growth to stagnation to depression and reformation. these are cycles that play out across centuries and i suspect islam is not immune to them.\nislam is unique in that it is truly the last 'great' religion and it is here to stay. the future should be one of consilience and consolidation between religions and peoples.\nat the end of the day - tolerance of thought will be the salvation of mankind.\nI've been asking your first question for as long as I can remember and nobody has given me a convincing answer yet. I also asked the second question verbatim when I was in my high school's Tarbiya Islamiya class, and the teacher had no answer. I kept hounding him until the students yelled at me to shut up and then he kicked me out!\nWe're all adults here, so why can't you answer Anon's query? I still find it amazing that you named yourself after Saint Joan. A Christian (gasp!) who will burn in hell according to some of the comments here.\nAll men are my brothers. I would have liked to have said it then, and I would like to say it now: all men are my brothers. But all men are not my brothers. Why? Because all women are my sisters. And the brother who denies the rights of his sister: that brother is not my brother. At the very best, he is my half-brother - by definition. Osama is not my brother.\nReligion is sensitive ground, as well it might be. Here we walk on eggshells. Because religion is itself an eggshell. Today, in the West, there are no good excuses for religious belief - unless we think that ignorance, reaction and sentimentality are good excuses. This is of course not so in the East, where, we acknowledge, almost every living citizen in many huge and populous countries is intimately defined by religious belief. The excuses, here, are very persuasive; and we duly accept that 'faith' - recently and almost endearingly defined as 'the desire for the approval of supernatural beings' - is a world-historical force and a world-historical actor. All religions, unsurprisingly, have their terrorists, Christian, Jewish, Hindu, even Buddhist. But we are not hearing from those religions. We are hearing from Islam.\nLet us make the position clear. We can begin by saying, not only that we respect Muhammad, but that no serious person could fail to respect Muhammad - a unique and luminous historical being. 
Judged by the continuities he was able to set in motion, he remains a titanic figure, and, for Muslims, all-answering: a revolutionary, a warrior, and a sovereign, a Christ and a Caesar, 'with a Koran in one hand', as Bagehot imagined him, 'and a sword in the other'. Muhammad has strong claims to being the most extraordinary man who ever lived. And always a man, as he always maintained, and not a god. Naturally we respect Muhammad. But we do not respect Muhammad Atta.\nUntil recently it was being said that what we are confronted with, here, is 'a civil war' within Islam. That's what all this was supposed to be: not a clash of civilisations or anything like that, but a civil war within Islam. Well, the civil war appears to be over. And Islamism won it. The loser, moderate Islam, is always deceptively well-represented on the level of the op-ed page and the public debate; elsewhere, it is supine and inaudible. We are not hearing from moderate Islam. Whereas Islamism, as a mover and shaper of world events, is pretty well all there is.\nSo, to repeat, we respect Islam - the donor of countless benefits to mankind, and the possessor of a thrilling history. But Islamism? No, we can hardly be asked to respect a creedal wave that calls for our own elimination. More, we regard the Great Leap Backwards as a tragic development in Islam's story, and now in ours. Naturally we respect Islam. But we do not respect Islamism, just as we respect Muhammad and do not respect Muhammad Atta.\nالمؤامرات على الأديان وجميع الانقلابات المخربة والثورات على القيم والمبادئ خرجت من هذا التراث .. وان كل معول هدم كان وراءه توجيه يهودي.\n•تذكروا أن الشعب الذي لا يهلك غيره يهلك نفسه.\n•يجب ان نخلق الجيل الذي لا يخجل من كشف عورته (ألا تفسر لنا هذه الجملة موجة العرى في الافلام والموضات التى تسود العالم الآن).\n.علينا ان نشعل حربا بين الشعوب ونضرب الدول بعضها ببعض فبهذا يصبح جميع المتحاربين في حاجة الى أموالنا فنفرض عليهم شروطنا.\n•الجماهير عمياء فاشتروها بالمال وسوقوها كالبهائم الى أهدافكم.\n•سيطروا على الانتخابات ووسائل الاعلام والصحافة (وهم قد سيطروا عليها بالمال والجنس والمرأة في الغرب الرأسمالي وبالحزب والسلطة في العالم الاشتراكي).\n•ادفعوا الجماهير العمياء الى الثورة وسلموهم مقاليد الحكم ليحكموا في غوغائية وغباء (وقد فعلوا هذه في الثورة الفرنسية) وحينئذ نأتي نحن ونعدمهم فنكون منقذين للعالم (وقد اعدموهم جميعاً من روبسبير الى ميرابوا).\n•ارفعوا شعار الحرية واهدموا بها الاخلاق والاسرة والقومية والوطنية.\n.ارفعوا شعار العلم واهدموا به الدين .. وهذا ما فعله كمال أتاتورك (حفيد مزاراحي) حينما اقام الدولة العلمانية في تركيا ووقف يخطب في البرلمان التركي عام 1923 ساخراً من القرآن.\nنحن الآن في القرن العشرين لا نستطيع ان نسير وراء كتاب تشريع يبحث عن التين والزيتون.\n•الذي يعرقل مؤامراتكم اوقعوه في فضائح ثم هددوه بكشفها (وقد فعلوها في ووترجيت) او في مآزق مالية ثم تقدموا لانقاذه (وقد فعلها دزرائيلي مع الخديو واستولى على القنال).. وإذ تعذر الامر سارعوا الى اغتياله (وقد فعلوها بكنيدي) ثم اقتلوا قاتله لتدفنوا اسرارنا معه الى الأبد (وقد فعلوها بقاتل كنيدي).\n•اقتلوا القوميات والوطنيات بالدعوة الى الاممية والمواطنة العالمية وقد فعلها ماركس في الشيوعية.\n•كل ما عدا اليهود حيوانات ناطقة سخرها الله في خدمة اليهود.\nواليهودية ترى ان الله واحد ولكنها تحتكره لنفسها فلا عمل لله الا الحفاظ على اسرائيل وتسخير جميع الشعوب لخدمتها.\nواللاهوت اليهودي لا يؤمن بآخرة، وقد شطبوا كل ما جاء عن الآخرة في التوراة .. والقيامة عندهم هي قيامة دولتهم في فلسطين والبعث بعثها والنشر نشرها .. ويوم الحساب هو اليوم الذي يحاسبون فيه كل الأمم يوم يعود المسيح ويباركهم ويختارهم نواباً له في حكم العالم وإقامة ملكوت الله على الأرض .. 
والعجيب انهم كفروا بالمسيح حينما جاء ثم أعلنوا إيمانهم بعودته وشرطوا هذه العودة بانها رجعة من المسيح ليختارهم رؤساء وحكاماً للعالم الى الأبد.\nوالفكر اليهودي يلقى غلالة من الأسرار والطلاسم والكتمان والغموض على كل شئ .. والكبالا والسحر وعلم الأعداد والحروف وتسخير الشياطين من علومهم التى شغفوا بها وروجوها ونشروها.\nوكانت وسيلتهم الى هدم الكتب السماوية هى تفسيرها بالتأويل وذلك برفض المعاني الظاهرة واختراع معان باطنة تهدم الغرض الديني وتفسد هدفه.\nونستطيع ان نرى اثر التوجيه الهودي في الفلسفات العبثية والدمية والمادية والفوضية والإباحية .. واحيانا نلمح اسماء يهودية خلفها مثل : سارتر – فرويد – ماركس – ماركوز.\nواذا فتحنا ملف الديانة البهائية فإننا نجد اثر التوجيه اليهودي واضحا في كتبها.\nعبد الهاء تأليف سليم قبعين القاهرة مطبعة العمران 1922.\nمفواضات عبد البهاء الطبعة الاولى 1928م. موعود كل الامم.\nجورج تاوزنه مطبوع بإذن من المحفل الروحاني لمصر والسودان.\n•اكثر فلاسفة اليونان تعلموا الحكمة من بنى اسرائيل.\n•رسالة عبد البهاء هي توحيد المسلمين والنصارى واليهود وجمعهم على أصل نواميس موسى.\n•عمل موسى لا يس", "answers": ["Unknown."], "length": 2490, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "b41aef2d0475f46a78a3168c6c8a614170975f82507cf2e4"} {"input": "What position did Simon English hold in the 2008 general election?", "context": "Sir Simon William English (born 30 December 1961) is a New Zealand former National Party politician who served as the 39th prime minister of New Zealand from 2016 to 2017. He had previously served as the 17th deputy prime minister of New Zealand and minister of finance from 2008 to 2016 under John Key and the Fifth National Government.\n\nA farmer and public servant before entering politics, English was elected to the New Zealand Parliament in as the National Party's candidate in the Wallace electorate. He was elevated to Cabinet in 1996 and in 1999 was made minister of finance, although he served for less than a year due to his party's loss at the 1999 general election. In October 2001, English replaced Jenny Shipley as the leader of the National Party (and consequently as Leader of the Opposition). He led the party to its worst defeat at the 2002 general election, and as a consequence, in October 2003 he was replaced as leader by Don Brash.\n\nIn November 2006, after Brash's resignation, English became deputy leader under John Key. After National's victory at the 2008 general election, he became deputy prime minister and was also made minister of finance for the second time. Under English's direction New Zealand's economy maintained steady growth during National's three terms of government. He became a list-only MP after stepping down as an electorate MP at the 2014 general election.\n\nJohn Key resigned as leader of the National Party and prime minister in December 2016. English won the resulting leadership election unopposed and was sworn in as prime minister on 12 December 2016. His tenure was only ten months, and included a three-month election campaign. In the 2017 general election, National won the largest number of seats but fell short of a majority. The parties holding the balance of power declined to support the existing government, and English was subsequently replaced as prime minister by Jacinda Ardern, leader of the Labour Party. English initially continued on as Leader of the Opposition, but resigned as leader of the National Party on 27 February 2018 and left parliament two weeks later.\n\nEarly life\nEnglish was born on 30 December 1961 at Lumsden Maternity Centre in Lumsden. 
He is the eleventh of twelve children of Mervyn English and Norah (née O'Brien) English. His parents purchased Rosedale, a mixed sheep and cropping farm in Dipton, Southland from Mervyn's uncle, Vincent English, a bachelor, in 1944. English was born in the maternity unit at Lumsden.\n\nEnglish attended St Thomas's School in Winton, then boarded at St. Patrick's College in Upper Hutt, where he became head boy. He played in the first XV of the school's rugby team. English went on to study commerce at the University of Otago, where he was a resident at Selwyn College, and then completed an honours degree in English literature at Victoria University of Wellington.\n\nAfter finishing his studies, English returned to Dipton and farmed for a few years. From 1987 to 1989, he worked in Wellington as a policy analyst for the New Zealand Treasury, at a time when the free market policies favoured by Labour's finance minister Roger Douglas (known collectively as \"Rogernomics\") were being implemented.\n\nEnglish joined the National Party in 1980, while at Victoria University. He served for a period as chairman of the Southland branch of the Young Nationals, and became a member of the Wallace electorate committee. After moving to Wellington, he served for periods on the Island Bay and Miramar electorate committees, respectively.\n\nFourth National Government (1990–1999)\n\nAt the 1990 general election, English stood as the National candidate in Wallace, replacing the retiring Derek Angus, and was elected with a large majority. He would hold this seat, renamed Clutha-Southland in 1996, until 2014. He and three other newly elected National MPs (Tony Ryall, Nick Smith, and Roger Sowry) were soon identified as rising stars in New Zealand politics, and at various points were dubbed the \"brat pack\", the \"gang of four\", and the \"young Turks\". In his first term in parliament, English chaired a select committee into social services. He was made a parliamentary under-secretary in 1993, serving under the Minister of Health.\n\nFirst period in cabinet (1996–1999)\nIn early 1996, English was elevated to cabinet by Prime Minister Jim Bolger, becoming the Minister for Crown Health Enterprises and Associate Minister of Education (to Wyatt Creech). He was 34 at the time, becoming the cabinet's youngest member. After the 1996 general election, the National Party was forced into a coalition with New Zealand First to retain government. In the resulting cabinet reshuffle, English emerged as Minister of Health. However, as a condition of the coalition agreement, NZ First's Neil Kirton (a first-term MP) was made Associate Minister of Health, effectively becoming English's deputy. This arrangement was described in the press as a \"shotgun marriage\", and there were frequent differences of opinion between the two ministers. After their relationship became unworkable, Kirton was sacked from the role in August 1997, with the agreement of NZ First leader Winston Peters.\n\nAs Minister of Health, English was responsible for continuing the reforms to the public health system that National had begun after the 1990 general election. The reforms were unpopular, and health was perceived as one of the government's weaknesses, with the health portfolio consequently being viewed as a challenge. 
English believed that the unpopularity of the reforms was in part due to a failure in messaging, and encouraged his National colleagues to avoid bureaucratic and money-focused language (such as references to \"balance sheets\" and \"user charges\") and instead talk about the improvements to services the government's reforms would bring. He also rejected the idea that public hospitals could be run as commercial enterprises, a view which some of his colleagues had previously promoted.\n\nBy early 1997, as dissatisfaction with Bolger's leadership began to grow, English was being touted as a potential successor, along with Jenny Shipley and Doug Graham. His age (35) was viewed as the main impediment to a successful leadership run. National's leadership troubles were resolved in December 1997, when Bolger resigned and Shipley was elected to the leadership unopposed. English had been a supporter of Bolger as leader, but Shipley reappointed him Minister of Health in her new cabinet.\n\nEnglish was promoted to Minister of Finance in a reshuffle in January 1999, a position which was at the time subordinate to the Treasurer, Bill Birch. After a few months, the pair switched positions as part of Birch's transition to retirement, with English assuming the senior portfolio. In early interviews, he emphasised his wish to be seen as a pragmatist rather than an ideologue, and said that the initiatives of some of his predecessors (Roger Douglas's \"Rogernomics\" and Ruth Richardson's \"Ruthanasia\") had focused on \"fruitless, theoretical debates\" when \"people just want to see problems solved\".\n\nOpposition (1999–2008)\n\nAfter the National Party lost the 1999 election to Helen Clark's Labour Party, English continued on in the shadow cabinet as National's spokesperson for finance. He was elected deputy leader of the party in February 2001, following the resignation of Wyatt Creech, with Gerry Brownlee being his unsuccessful opponent.\n\nLeader of the Opposition\nIn October 2001, after months of speculation, Jenny Shipley resigned as leader of the National Party after being told she no longer had the support of the party caucus. English was elected as her replacement unopposed (with Roger Sowry as his deputy), and consequently became Leader of the Opposition. However, he did not openly organise against Shipley, and according to The Southland Times \"there was almost an element of 'aw, shucks, I'll do it then' about Mr English's ascension\".\n\nAged 39 when he was elected, English became the second-youngest leader in the National Party's history, after Jim McLay (who was 38 when elected in 1984). He also became only the third Southlander to lead a major New Zealand political party, after Joseph Ward and Adam Hamilton. However, English failed to improve the party's performance. In the 2002 election, National suffered its worst electoral defeat ever, gaining barely more than twenty percent of the vote. English described it as \"the worst day of my political life\". Both party insiders and the general public were split as to how much to blame him for the loss, but most of the party believed that English would be able to rebuild National's support.\n\nBy late 2003, however, National's performance in opinion polls remained poor. The party had briefly increased its popularity in the year following the election, but by October its support had fallen to levels only slightly better than what it achieved in the last ballot. English also appeared in a boxing match for a charity against entertainer Ted Clarke. 
This did not boost his polling or that of the National party either, with suggestions that it devalued his image as a serious politician. Don Brash, former governor of the Reserve Bank and a relative newcomer to politics, began to build up support to replace English. On 28 October, Brash gained sufficient backing in Caucus to defeat English in a leadership contest.\n\nShadow cabinet roles and deputy leader\nOn 2 November 2003, when Brash changed responsibilities for certain MPs, English became National's spokesman for education, ranked at fifth place in the party's parliamentary hierarchy. He remained in parliament after the 2005 election. In his new shadow education portfolio, English performed strongly, and remained a party favourite despite his election defeat as leader in 2002, eventually being returned to the finance portfolio in August 2004 as deputy spokesman (while still retaining responsibility for education).\n\nIn November 2006, Brash resigned as leader. English was considered as a potential replacement leader (running against John Key) or deputy leader (against incumbent Gerry Brownlee) in the ensuing leadership election. However, a contest was avoided when the MPs agreed a Key/English ticket would run unopposed in a display of party unity. English took over the deputy leadership and the finance portfolio in the Key shadow cabinet.\n\nFifth National Government (2008–2017)\n\nDeputy Prime Minister and Minister of Finance (2008–2016)\n\nAt the 2008 election, English was re-elected by his electorate, winning by a margin of about 15,500 votes. He became Deputy Prime Minister of New Zealand and Minister of Finance in the fifth National Government, being sworn into office on 19 November 2008 and continued to serve in those roles until becoming Prime Minister on 12 December 2014. He was also made Minister of Infrastructure in National's first term of government and Minister responsible for Housing New Zealand Corporation and minister responsible for the New Zealand flag consideration process in its third.\n\nHe was comfortably re-elected in Clutha-Southland in the 2011 election but opted to run as a party-list candidate in 2014. \n\nThe pairing of John Key as leader of the National Party and English as his deputy has been compared to that of Bob Hawke and Paul Keating (in Australia) and Tony Blair and Gordon Brown (in the UK).\n\nEnglish acceded to the role of Finance Minister in the continuing wake of the financial crisis. In response to New Zealand's rising debt, English made budget deficit-reduction his main priority. His first budget outlined three focuses in New Zealand's financial recovery: \"improving the business environment and removing roadblocks to growth; investment in productive infrastructure; and improving the way government works\". One of his first acts was creating the National Infrastructure Unit, charged with formulating a plan for infrastructure projects and investments. He commissioned a government-wide spending review, with an aim to reducing government expenditure—with the exceptions of a two-year stimulus package and long-term increases on infrastructure spending.\n\nIn April 2011, the Opposition criticised English for suggesting that New Zealand businesses could use New Zealand's low wages to help it compete with Australia. The National Government campaigned for re-election in 2011 on its economic record. 
The Government boasted growth for five consecutive quarters up to mid-2010, totalling 1.6% of real GDP.\n\nStrong growth resulted in a surplus of $473 million for the 2015/16 financial year, projected to rise to $8.5 billion by 2020/21. In his 2016 Economic and Fiscal Update address, English stated that reducing debt and tackling the costs of the 2016 Kaikōura earthquake were higher priorities than reducing rates of tax.\n\nAllowances issue\nIn 2009, the media, including TVNZ and TV3 revealed that English was receiving about NZ$900 a week as part of a living allowance for ministers, to live in his own NZ$1.2 million Wellington home. At the time, English also received $276,200 in his annual salary as Deputy Prime Minister. It was also revealed other ministers with homes in the capital city were also claiming accommodation allowances. On 3 August 2009, Prime Minister John Key started a review of the housing allowances claimed by cabinet ministers. English subsequently paid back $12,000 and only claimed about $24,000 a year in living allowances. The Auditor-General's office said in September 2009 that they were making \"preliminary enquiries\" into parliamentary housing expenses in response to a letter of complaint from Progressive party leader Jim Anderton. Two days later English stated that he would no longer take up any housing allowance and had paid back all the allowance he had received since the November 2008 election.\n\nPrime Minister (2016–2017)\n\nJohn Key resigned on 12 December, and endorsed English as his successor in the resulting leadership election. Following the drop-out of both Judith Collins and Jonathan Coleman from the leadership election, English was sworn in as the 39th Prime Minister of New Zealand on 12 December 2016.\n\nEnglish appointed his first cabinet on 18 December. In a reshuffle, he appointed Steven Joyce to succeed him as Finance Minister, while most ministerial portfolios remained the same.\n\nIn February 2017, English did not attend Waitangi Day commemorations at the historic treaty grounds, reportedly in response to the Ngāpuhi iwi's decision to stop the Prime Minister from speaking at the marae. Ngāpuhi have protested the Government's negotiation of the Trans Pacific Partnership Agreement (TPPA), which the iwi believe infringes upon Māori sovereignty, and thus does not adhere to the Treaty of Waitangi. English had been invited to attend in an official capacity; his non-attendance was criticised by a Ngāpuhi elder and Opposition leader Andrew Little.\n\nIn his first overseas trip as Prime Minister, English travelled to Europe to discuss trade ties, including a prospective New Zealand–European Union free trade agreement. He first travelled to London on 13 January 2017 to meet British Prime Minister Theresa May. Discussing trade relations, English said the two nations were \"natural partners\" and would \"continue to forge ties\" after the UK's withdrawal from the EU. He also arranged to meet with London Mayor Sadiq Khan, Belgian Prime Minister Charles Michel and German Chancellor Angela Merkel. In a meeting with Merkel, English received crucial backing from Germany for a trade deal with the EU. On 16 January, English stated that his government would continue to promote TPPA, despite the United States' decision to withdraw from the agreement. 
He explained that Southeast Asian countries would now be treated as a priority in negotiations—he also asserted that the United States was ceding influence to China by its rejection of the trade pact.\n\nAt a press conference at the Beehive on 1 February 2017, English announced that the 2017 general election would be held on 23 September. The Prime Minister later confirmed that his party would approach ACT, United Future and the Māori Party if confidence and supply agreements were required to form a government following the election. In his second cabinet reshuffle on 24 April, English appointed Gerry Brownlee as his new Foreign Affairs Minister; he also promoted Nikki Kaye to the portfolio of Education Minister, and moved Mark Mitchell into the cabinet to become Defence Minister. The reshuffle was perceived as an election preparation.\n\nOn 13 February 2017, English welcomed Australian Prime Minister Malcolm Turnbull to Wellington. The two leaders reaffirmed their shared trade agenda, and discussed changes to the Australian citizenship pathway which will affect permanent residents originating from New Zealand.\n\nOn 19 June, it was reported that Todd Barclay, who succeeded English as MP for Clutha-Southland, had clandestinely recorded one of his employee's conversations the previous year, and that John Key's leaders' budget was used to pay a confidential settlement after the employee resigned. English admitted that he had been aware of the illegal recording and the settlement, and thus implicated in the scandal.\n\nDuring the 2017 National campaign launch, English introduced a $379 million social investment package including digital learning academies for high school students, more resources for mathematics, and boosting support for teaching second languages in schools, and maintaining National Standards in the school curriculum. Prime Minister English also sought to defend National's financial management and economic track record and claimed that the opposition Labour Party would raise taxes. Early opinion polling had forecast a poor showing in the election for the Labour Party, but in early August 37-year-old Jacinda Ardern took over as Labour leader and seemingly energised younger voters.\n\nAt the 2017 general election, National won the largest share of the party vote (44.4%) and the largest number of seats (56) in the House Representatives. However, National lacked enough seats to govern alone due to two of the party's support partners, the Māori Party and United Future, losing their parliamentary seats. In response, English stated that the party would be entering into talks to form a coalition with New Zealand First. Following talks with the two largest parties, New Zealand First entered a coalition arrangement with the Labour Party. English was succeeded as prime minister by Jacinda Ardern on 26 October.\n\nOpposition (2017–2018)\n\nLeader of the Opposition\nEnglish was re-elected as National Party leader on 24 October 2017. At the time of his re-election, English announced his intention to stay on as leader until the next general election. On 13 February 2018, however, he stood down as National Party leader due to personal reasons, and instructed the party to put into motion the processes to elect a new leader. He also retired from Parliament. English's resignation followed weeks of speculation that he would step aside for a new leader. 
On 27 February, he was succeeded as party leader by Simon Bridges as the result of the leadership election held that day.\n\nPost-premiership \nIn 2018, English joined the board of Australian conglomerate, Wesfarmers. English serves in Chairmanships of Mount Cook Alpine Salmon, Impact Lab Ltd and Manawanui Support Ltd. He is also a director of The Instillery, Centre for Independent Studies and The Todd Corporation Limited, and is a member of the Impact Advisory Group of Macquarie Infrastructure and Real Assets.\n\nPolitical and social views\n\nEnglish is regarded as more socially conservative than his predecessor, John Key. He has stated his opposition to voluntary euthanasia and physician-assisted suicide, same-sex civil unions, and the decriminalisation of prostitution. As Prime Minister he opposed any \"liberalisation\" of abortion law.\n\nIn 2004, English voted against a bill to establish civil unions for both same-sex and opposite-sex couples. In 2005, he voted for the Marriage (Gender Clarification) Amendment Bill, which would have amended the Marriage Act to define marriage as only between a man and a woman. English voted against the Marriage (Definition of Marriage) Amendment Bill, a bill that legalised same-sex marriage in New Zealand. However, in December 2016 he stated, \"I'd probably vote differently now on the gay marriage issue. I don't think that gay marriage is a threat to anyone else's marriage\".\n\nIn 2009, English voted against the Misuse of Drugs (Medicinal Cannabis) Amendment Bill, a bill aimed at amending the Misuse of Drugs Act so that cannabis could be used for medical purposes.\n\nPersonal life \nEnglish met his future wife, Mary Scanlon, at university. She was studying medicine at the time, and became a general practitioner. Both her parents were immigrants, her father being Samoan and her mother Italian, born on the island of Stromboli. They have six children: a daughter and five sons.\n\nEnglish is a practising Roman Catholic, but has stated that he considers his religious beliefs personal and thus separate from politics.\n\nIn June 2002, English took part in TV3's Fight For Life, a celebrity boxing fundraiser to raise money for the Yellow Ribbon anti-youth-suicide campaign, influenced by the death of a teenage nephew in 1997. He lost a split decision to former university colleague Ted Clarke.\n\nHonours\nIn the 2018 Queen's Birthday Honours, English was appointed a Knight Companion of the New Zealand Order of Merit, for services of over 27 years to the State.\n\nSee also\n\nList of New Zealand governments\nPolitics of New Zealand\n\nReferences\n\nExternal links\n\nProfile at National Party \nProfile on Parliament.nz\nReleases and speeches at Beehive.govt.nz\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n|-\n\n1961 births\n21st-century New Zealand politicians\nCandidates in the 2017 New Zealand general election\nDeputy Prime Ministers of New Zealand\nLeaders of the Opposition (New Zealand)\nLiving people\nMembers of the Cabinet of New Zealand\nMembers of the New Zealand House of Representatives\nNew Zealand farmers\nNew Zealand finance ministers\nNew Zealand list MPs\nNew Zealand MPs for South Island electorates\nNew Zealand National Party MPs\nNew Zealand National Party leaders\nNew Zealand Roman Catholics\nNew Zealand people of Irish descent\nPeople educated at St. 
Patrick's College, Silverstream\nPeople from Dipton, New Zealand\nPeople from Lumsden, New Zealand\nPrime Ministers of New Zealand\nUniversity of Otago alumni\nVictoria University of Wellington alumni\nKnights Companion of the New Zealand Order of Merit\nNew Zealand politicians awarded knighthoods", "answers": ["He became deputy prime minister and minister of finance."], "length": 3602, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "73debbba53d5848cdc1e726c047bb0b82c5730d747fa710c"} {"input": "What is the name of the generative interactive model used in the method?", "context": "\\section{Introduction}\nIn recent years, vehicular technology has attracted significant attention from the automotive and telecommunication industries, leading to the emergence of vehicle-to-everything (V2X) communications for improving road safety, traffic management services and driving comfort.\nV2X supported by the sixth generation (6G) is envisioned to be a key enabler of future connected autonomous vehicles \\cite{9779322}. Although its transformative benefits for leveraging intelligent transportation systems, V2X still face several technical issues mainly related to performance and security.\n\nThe integration of sensing and communication (ISAC) has emerged very recently as a revolutionary element of 6G that could potentially help enabling adaptive learning and intelligent decision-making in future V2X applications.\nThe combination of sensing and communication allows vehicles to perceive their surroundings better, predict manoeuvres from nearby users and make intelligent decisions, thus paving the way toward a safer transportation system \\cite{9665433}.\nModernized vehicles are augmented with various types of sensors divided into exteroceptive to observe their surrounding environment and proprioceptive to observe their internal states.\nThe former like GPS, Lidar, and Cameras are conveyed to improve situational awareness, while latter sensors, such as steering, pedal, and wheel speed, convey to improve self-awareness. \n\nWhile sensing the environment, vehicles can exchange messages that assist in improving situational- and self-awareness and in coordinating maneuvers with other vehicles.\nThose messages like the basic safety (BSMs) and cooperative awareness messages (CAMs) are composed of transmitting vehicle's states such as position and velocity and other vehicles' states in the vicinity. Vehicles might use their sensors, such as cameras and Lidar, to detect road users (e.g., pedestrians), which can be communicated with other road users via the V2X messages to improve the overall performance. However, V2X communication links carrying those messages are inherently vulnerable to malicious attacks due to the open and shared nature of the wireless spectrum among vehicles and other cellular users \\cite{8336901}. For instance, a jammer in the vicinity might alter the information to be communicated to nearby vehicles/users or can intentionally disrupt communication between a platoon of vehicles making the legitimate signals unrecognizable for on-board units (OBUs) and/or road side units (RSUs) that endanger vehicular safety \n\\cite{8553649}.\n\nIn addition, the integrity of GPS signals and the correct acquisition of navigation data to compute position, velocity and time information is critical in V2X applications for their safe operation. 
However, since civil GPS receivers rely on unencrypted satellite signals, spoofers can easily replicate them by deceiving the GPS receiver to compute falsified positions \\cite{9226611}.\nAlso, the long distance between satellites and terrestrial GPS receivers leads to an extremely weak signal that can be easily drowned out by a spoofer. \nThus, GPS sensors' vulnerability to spoofing attacks poses a severe threat that might be causing vehicles to be out of control or even hijacked and endanger human life \\cite{9881548}.\nTherefore, GPS spoofing attacks and jamming interference needs to be controlled and detected in real-time to reach secured vehicular communications allowing vehicles to securely talk to each other and interact with the infrastructure (e.g., roadside terminals, base stations) \\cite{9860410}.\n\nExisting methods for GPS spoofing detection include GPS signal analysis methods and GPS message encryption methods \\cite{9845684}. However, the former requires the ground truth source during the detection process, which is not always possible to collect. In contrast, the latter involves support from a secured infrastructure and advanced computing resources on GPS receivers, which hinders their adoption in V2X applications. On the other hand, existing methods for jammer detection in vehicular networks are based on analysing the packet drop rate as in \\cite{9484071}, making it difficult to detect an advanced jammer manipulating the legitimate signal instead of disrupting it.\nIn this work, we propose a method to jointly detect GPS spoofing and jamming attacks in the V2X network. A coupled generalized dynamic Bayesian network (C-GDBN) is employed to learn the interaction between RF signals received by the RSU from multiple vehicles and their corresponding trajectories. This integration of vehicles' positional information with vehicle-to-infrastructure (V2I) communications allows semantic learning while mapping RF signals with vehicles' trajectories and enables the RSU to jointly predict the RF signals it expects to receive from the vehicles from which it can anticipate the expected trajectories.\n\nThe main contributions of this paper can be summarized as follows: \\textit{i)} A joint GPS spoofing and jamming detection method is proposed for the V2X scenario, which is based on learning a generative interactive model as the C-GDBN. Such a model encodes the cross-correlation between the RF signals transmitted by multiple vehicles and their trajectories, where their semantic meaning is coupled stochastically at a high abstraction level. \\textit{ii)} A cognitive RSU equipped with the acquired C-GDBN can predict and estimate vehicle positions based on real-time RF signals. This allows RSU to evaluate whether both RF signals and vehicles' trajectories are evolving according to the dynamic rules encoded in the C-GDBN and, consequently, to identify the cause (i.e., a jammer attacking the V2I or a spoofer attacking the satellite link) of the abnormal behaviour that occurred in the V2X environment. 
\\textit{iii)} Extensive simulation results demonstrate that the proposed method accurately estimates the vehicles' trajectories from the predicted RF signals, effectively detect any abnormal behaviour and identify the type of abnormality occurring with high detection probabilities.\nTo our best knowledge, this is the first work that studies the joint detection of jamming and spoofing in V2X systems.\n\n\\section{System model and problem formulation}\nThe system model depicted in Fig.~\\ref{fig_SystemModel}, includes a single cell vehicular network consisting of a road side unit (RSU) located at $\\mathrm{p}_{R}=[{x}_{R},{y}_{R}]$, a road side jammer (RSJ) located at $\\mathrm{p}_{J}=[{x}_{J},{y}_{J}]$, a road side spoofer (RSS) located at $\\mathrm{p}_{s}=[{x}_{s},{y}_{s}]$ and $N$ vehicles moving along multi-lane road in an urban area. The time-varying positions of the $n$-th vehicle is given by $\\mathrm{p}_{n,t}=[{x}_{n,t},{y}_{n,t}]$ where $n \\in N$. Among the $K$ orthogonal subchannels available for the Vehicle-to-Infrastructure (V2I) communications, RSU assigns one V2I link to each vehicle. Each vehicle exchanges messages composed of the vehicle's state (i.e., position and velocity) with RSU through the $k$-th V2I link by transmitting a signal $\\textrm{x}_{t,k}$ carrying those messages at each time instant $t$ where $k \\in K$. We consider a reactive RSJ that aims to attack the V2I link by injecting intentional interference to the communication link between vehicles and RSU to alter the transmitted signals by the vehicles. In contrast, the RSS purposes to mislead the vehicles by spoofing the GPS signal and so registering wrong GPS positions. RSU aims to detect both the spoofer on the satellite link and the jammer on multiple V2I links in order to take effective actions and protect the vehicular network. \nThe joint GPS spoofing and jamming detection problem can be formulated as the following ternary hypothesis test:\n\\begin{equation}\n \\begin{cases}\n \\mathcal{H}_{0}: \\mathrm{z}_{t,k} = \\mathrm{g}_{t,k}^{nR} \\mathrm{x}_{t,k} + \\mathrm{v}_{t,k}, \\\\\n \\mathcal{H}_{1}: \\mathrm{z}_{t,k} = \\mathrm{g}_{t,k}^{nR} \\mathrm{x}_{t,k} + \\mathrm{g}_{t,k}^{JR} \\mathrm{x}_{t,k}^{j} + \\mathrm{v}_{t,k}, \\\\\n \\mathcal{H}_{2}: \\mathrm{z}_{t,k} = \\mathrm{g}_{t,k}^{nR} \\mathrm{x}_{t,k}^{*} + \\mathrm{v}_{t,k},\n \\end{cases}\n\\end{equation}\nwhere $\\mathcal{H}_{0}$, $\\mathcal{H}_{1}$ and $\\mathcal{H}_{2}$ denote three hypotheses corresponding to the absence of both jammer and spoofer, the presence of the jammer, and the presence of the spoofer, respectively. $\\textrm{z}_{t,k}$ is the received signal at the RSU at $t$ over the $k$-th V2I link, $\\textrm{g}_{t,k}^{nR}$ is the channel power gain from vehicle $n$ to the RSU formulated as: $\\textrm{g}_{t,k}^{nR} = \\alpha_{t,k}^{nR} \\mathrm{h}_{t,k}^{nR}$, where $\\alpha_{t,k}^{nR}$ is the large-scale fading including path-loss and shadowing modeled as \\cite{8723178}: $\\alpha_{t,k}^{nR}=G\\beta d_{t,nR}^{-\\gamma}$.\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[height=5.3cm]{Figures/SystemModel_V1.pdf}\n \\caption{An illustration of the system model.}\n \\label{fig_SystemModel}\n\\end{figure}\n$G$ is the pathloss constant, $\\beta$ is a log normal shadow fading random variable, $d_{t,nR}=\\sqrt{({x}_{n,t}-x_{R})^{2}+({y}_{n,t}-y_{R})^{2}}$ is the distance between the $n$-th vehicle and the RSU. 
$\\gamma$ is the power decay exponent and\n$\\mathrm{h}_{t,k}^{nR}$ is the small-scale fading component distributed according to $\\mathcal{CN}(0,1)$. In addition, $\\mathrm{x}_{t,k}$ is the desired signal transmitted by the $n$-th vehicle, and $\\mathrm{v}_{t,k}$ is an additive white Gaussian noise with variance $\\sigma_{n}^{2}$. $\\mathrm{x}_{t,k}^{j}$ is the jamming signal, $\\mathrm{x}_{t,k}^{*}$ is the spoofed signal (i.e., the signal that carries the bits related to the wrong GPS positions), and $\\mathrm{g}_{t,k}^{JR} = \\alpha_{t,k}^{JR} \\mathrm{h}_{t,k}^{JR}$ is the channel power gain from the RSJ to the RSU, where $\\alpha_{t,k}^{JR}=G\\beta d_{t,JR}^{-\\gamma}$ such that $d_{t,JR}=\\sqrt{({x}_{J}-x_{R})^{2}+({y}_{J}-y_{R})^{2}}$.\nWe assume that the channel state information (CSI) of the V2I links is known and can be estimated at the RSU as in \\cite{8345717}. \nThe RSU is equipped with an RF antenna and can track the vehicles' trajectories after decoding the received RF signals. The RSU aims to learn the interaction between the RF signals received from multiple vehicles and their corresponding trajectories.\n\n\\section{Proposed method for joint detection of GPS spoofing and jamming}\n\n\\subsection{Environment Representation}\nThe RSU receives RF signals from each vehicle and tracks its trajectory (which we refer to as the GPS signal) by decoding and demodulating the received RF signals. \nThe generalized state-space model describing the evolution of the $i$-th signal at multiple levels comprises the following equations: \n\\begin{equation} \\label{eq_discreteLevel}\n \\mathrm{\\Tilde{S}_{t}}^{(i)} = \\mathrm{f}(\\mathrm{\\Tilde{S}_{t-1}}^{(i)}) + \\mathrm{\\tilde{w}}_{t},\n\\end{equation}\n\\begin{equation} \\label{eq_continuousLevel}\n \\mathrm{\\Tilde{X}_{t}}^{(i)} = \\mathrm{A} \\mathrm{\\Tilde{X}_{t-1}}^{(i)} + \\mathrm{B} \\mathrm{U}_{\\mathrm{\\Tilde{S}_{t}}^{(i)}} + \\mathrm{\\tilde{w}}_{t},\n\\end{equation}\n\\begin{equation} \\label{eq_observationLevel}\n \\mathrm{\\Tilde{Z}_{t}}^{(i)} = \\mathrm{H} \\mathrm{\\Tilde{X}_{t}}^{(i)} + \\mathrm{\\tilde{v}}_{t},\n\\end{equation}\nwhere $i \\in \\{$RF, GPS$\\}$ indicates the type of signal received by the RSU. The transition system model defined in \\eqref{eq_discreteLevel} explains the evolution of the discrete random variables $\\mathrm{\\Tilde{S}_{t}}^{(i)}$ representing the clusters of the RF (or GPS) signal dynamics, $\\mathrm{f}(.)$ is a non-linear function of its argument and the additive term $\\mathrm{\\tilde{w}}_{t}$ denotes the process noise. The dynamic model defined in \\eqref{eq_continuousLevel} explains the evolution of the RF signal dynamics or of the motion dynamics of the $n$-th vehicle, where $\\mathrm{\\Tilde{X}_{t}}^{(i)}$ are hidden continuous variables generating the sensory signals, $\\mathrm{A} \\in \\mathbb{R}^{2d \\times 2d}$ and $\\mathrm{B} \\in \\mathbb{R}^{2d \\times 2d}$ are the dynamic and control matrices, respectively, and $\\mathrm{U}_{\\mathrm{\\Tilde{S}_{t}}^{(i)}}$ is the control vector representing the dynamic rules of how the signals evolve with time. The measurement model defined in \\eqref{eq_observationLevel} describes the dependence of the sensory signals $\\mathrm{\\Tilde{Z}_{t}}^{(i)}$ on the hidden states $\\mathrm{\\Tilde{X}_{t}}^{(i)}$, which is parametrized by the measurement matrix $\\mathrm{H}$, where $d$ stands for the data dimensionality and $\\mathrm{\\tilde{v}}_{t}$ is a random measurement noise.
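For illustration, the following minimal Python sketch mimics one prediction--observation step of \\eqref{eq_continuousLevel} and \\eqref{eq_observationLevel}; the generalized state is assumed to be a 2-D position with its first-order derivative, and the matrices and noise levels are placeholder choices rather than values used in this work.
\begin{verbatim}
import numpy as np

# Generalized state: [x, y, dx, dy], i.e., position and first-order derivative (2d = 4).
d = 2
A = np.block([[np.eye(d), np.eye(d)],           # position driven by its derivative
              [np.zeros((d, d)), np.eye(d)]])   # derivative kept locally constant
B = np.eye(2 * d)                               # control matrix (placeholder)
H = np.eye(2 * d)                               # measurement matrix (placeholder)

def predict(X_prev, U_cluster, w_std=0.05):
    # Continuous-level prediction: X_t = A X_{t-1} + B U_{S_t} + w_t
    return A @ X_prev + B @ U_cluster + np.random.normal(0.0, w_std, 2 * d)

def observe(X_t, v_std=0.1):
    # Measurement model: Z_t = H X_t + v_t
    return H @ X_t + np.random.normal(0.0, v_std, 2 * d)

# One step under the initial "static" assumption (null control vector U = 0).
X_prev = np.array([0.0, 0.0, 1.0, 0.5])
X_pred = predict(X_prev, np.zeros(2 * d))
Z_obs = observe(X_pred)
# Generalized error projected back into the state space (H assumed invertible).
ge = np.linalg.solve(H, Z_obs) - X_pred
\end{verbatim}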
\n\n\\subsection{Learning GDBN}\nThe hierarchical dynamic models defined in \\eqref{eq_discreteLevel}, \\eqref{eq_continuousLevel} and \\eqref{eq_observationLevel} are structured in a Generalized Dynamic Bayesian Network (GDBN) \\cite{9858012}, as shown in Fig.~\\ref{fig_GDBN_CGDBN}-(a), which provides a probabilistic graphical model expressing the conditional dependencies among the random hidden variables and the observable states. The generative process explaining how the sensory signals have been generated can be factorized as:\n\\begin{equation} \\label{eq_generative_process}\n\\begin{split}\n \\mathrm{P}(\\mathrm{\\tilde{Z}}_{t}^{(i)}, \\mathrm{\\tilde{X}}_{t}^{(i)}, \\mathrm{\\tilde{S}}_{t}^{(i)}) = \\mathrm{P}(\\mathrm{\\tilde{S}}_{0}^{(i)}) \\mathrm{P}(\\mathrm{\\tilde{X}}_{0}^{(i)}) \\\\ \\bigg[ \\prod_{t=1}^{\\mathrm{T}} \\mathrm{P}(\\mathrm{\\tilde{Z}}_{t}^{(i)}|\\mathrm{\\tilde{X}}_{t}^{(i)}) \\mathrm{P}(\\mathrm{\\tilde{X}}_{t}^{(i)}|\\mathrm{\\tilde{X}}_{t-1}^{(i)}, \\mathrm{\\tilde{S}}_{t}^{(i)}) \\mathrm{P}(\\mathrm{\\tilde{S}}_{t}^{(i)}|\\mathrm{\\tilde{S}}_{t-1}^{(i)}) \\bigg],\n\\end{split}\n\\end{equation}\nwhere $\\mathrm{P}(\\mathrm{\\tilde{S}}_{0}^{(i)})$ and $\\mathrm{P}(\\mathrm{\\tilde{X}}_{0}^{(i)})$ are initial prior distributions, $\\mathrm{P}(\\mathrm{\\tilde{Z}}_{t}^{(i)}|\\mathrm{\\tilde{X}}_{t}^{(i)})$ is the likelihood, and $\\mathrm{P}(\\mathrm{\\tilde{X}}_{t}^{(i)}|\\mathrm{\\tilde{X}}_{t-1}^{(i)}, \\mathrm{\\tilde{S}}_{t}^{(i)})$ and $\\mathrm{P}(\\mathrm{\\tilde{S}}_{t}^{(i)}|\\mathrm{\\tilde{S}}_{t-1}^{(i)})$ are the transition densities describing the temporal and hierarchical dynamics of the generalized state-space model.\nThe generative process defined in \\eqref{eq_generative_process} indicates the cause-effect relationships that the model imposes on the random variables $\\mathrm{\\tilde{S}}_{t}^{(i)}$, $\\mathrm{\\tilde{X}}_{t}^{(i)}$ and $\\mathrm{\\tilde{Z}}_{t}^{(i)}$, forming a chain of causality that describes how one state contributes to the production of another, represented by the link $\\mathrm{\\tilde{S}}_{t}^{(i)} \\rightarrow \\mathrm{\\tilde{X}}_{t}^{(i)} \\rightarrow \\mathrm{\\tilde{Z}}_{t}^{(i)}$.\n\nThe RSU starts perceiving the environment under a static assumption about the evolution of the environmental states, i.e., by assuming that the sensory signals are only subject to random noise. Hence, the RSU predicts the RF signal (or the vehicle's trajectory) using the following simplified model:\n$\\mathrm{\\tilde{X}}_{t}^{(i)} = \\mathrm{A} \\mathrm{\\tilde{X}}_{t-1}^{(i)} + \\mathrm{\\tilde{w}}_{t}$, \nwhich differs from \\eqref{eq_continuousLevel} in that the control vector $\\mathrm{U}_{\\mathrm{\\Tilde{S}_{t}}^{(i)}}$ is assumed to be null, i.e., $\\mathrm{U}_{\\mathrm{\\Tilde{S}_{t}}^{(i)}} = 0$, since the dynamic rules explaining how the environmental states evolve with time have not been discovered yet.\nThose rules can be discovered by exploiting the generalized errors (GEs), i.e., the differences between predictions and observations.
The GEs projected into the measurement space are calculated as:\n$\\tilde{\\varepsilon}_{\\mathrm{\\tilde{Z}}_{t}^{(i)}}^{} = \\mathrm{\\tilde{Z}}_{t}^{(i)} - \\mathrm{H} \\mathrm{\\tilde{X}}_{t}^{(i)}$.\nProjecting $\\tilde{\\varepsilon}_{\\mathrm{\\tilde{Z}}_t}^{}$ back into the generalized state space can be done as follows:\n\\begin{equation}\\label{GE_continuousLevel_initialModel}\n \\tilde{\\varepsilon}_{\\mathrm{\\tilde{X}}_t}^{(i)} = \\mathrm{H}^{-1}\\tilde{\\varepsilon}_{\\mathrm{\\tilde{Z}}_{t}^{(i)}}^{}=\\mathrm{H}^{-1}(\\mathrm{\\tilde{Z}}_{t}^{(i)}-\\mathrm{H}\\mathrm{\\tilde{X}}_{t}^{(i)}) = \\mathrm{H}^{-1}\\mathrm{\\tilde{Z}}_{t}^{(i)} - \\mathrm{\\tilde{X}}_{t}^{(i)}.\n\\end{equation}\nThe GEs defined in \\eqref{GE_continuousLevel_initialModel} can be grouped into discrete clusters in an unsupervised manner by employing the Growing Neural Gas (GNG). The latter produces a set of discrete variables (clusters) denoted by:\n$\\mathbf{\\tilde{S}^{(i)}}=\\{\\mathrm{\\tilde{S}}_{1}^{(i)},\\mathrm{\\tilde{S}}_{2}^{(i)},\\dots,\\mathrm{\\tilde{S}}_{M_{i}}^{(i)}\\}$,\nwhere $M_{i}$ is the total number of clusters and each cluster $\\mathrm{\\tilde{S}}_{m}^{(i)} \\in \\mathbf{\\tilde{S}^{(i)}}$ follows a Gaussian distribution composed of GEs with homogeneous properties, such that $\\mathrm{\\tilde{S}}_{m}^{(i)} \\sim \\mathcal{N}(\\tilde{\\mu}_{\\mathrm{\\tilde{S}}_{m}^{(i)}}=[\\mu_{\\tilde{S}_{m}^{(i)}}, \\Dot{\\mu}_{\\tilde{S}_{m}^{(i)}}], \\Sigma_{\\mathrm{\\tilde{S}}_{m}^{(i)}})$.\n\\begin{figure}[t!]\n \\begin{center}\n \\begin{minipage}[b]{.40\\linewidth}\n \\centering\n \\includegraphics[width=2.5cm]{Figures/GDBN.pdf}\n \\\\[-1.0mm]\n {\\scriptsize (a)}\n \\end{minipage}\n \\begin{minipage}[b]{.50\\linewidth}\n \\centering\n \\includegraphics[width=5.0cm]{Figures/C_GDBN.pdf}\n \n {\\scriptsize (b)}\n \\end{minipage}\n \\caption{(a) The GDBN. (b) The coupled GDBN (C-GDBN) composed of two GDBNs representing the two signals received at the RSU where their discrete hidden variables are stochastically coupled.}\n \\label{fig_GDBN_CGDBN}\n \\end{center}\n\\end{figure}\nThe dynamic transitions of the sensory signals among the available clusters can be captured in a time-varying transition matrix ($\\Pi_{\\tau}$) by estimating the time-varying transition probabilities $\\pi_{ij}=\\mathrm{P}(\\mathrm{\\tilde{S}}_{t}^{(i)}=i|\\mathrm{\\tilde{S}}_{t-1}^{(i)}=j, \\tau)$ where $\\tau$ is the time spent in $\\mathrm{\\tilde{S}}_{t-1}^{(i)}=j$ before transition to $\\mathrm{\\tilde{S}}_{t}^{(i)}=i$.\n\n\\subsection{Learning Coupled GDBN (C-GDBN)}\nThe learning procedure described in the previous section can be executed for each signal type, i.e., RF and GPS. After learning a separated GDBN model for each signal type, we analyse the interaction behaviour between RF signal and GPS signal received at the RSU by tracking the cluster firing among $\\mathbf{\\tilde{S}^{(1)}}$ and $\\mathbf{\\tilde{S}^{(2)}}$ during a certain experience. 
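As a rough, self-contained illustration of this per-signal learning and of the cluster-firing interaction (using k-means from scikit-learn as a simple stand-in for GNG, synthetic placeholder data, and ignoring the dwell-time dependence on $\\tau$), the cluster labels, a per-signal transition matrix and the co-occurrence statistics between the two sets of clusters could be obtained as follows.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ge_rf = rng.normal(size=(500, 4))    # placeholder generalized errors of the RF signal
ge_gps = rng.normal(size=(500, 4))   # placeholder generalized errors of the trajectory

M1, M2 = 5, 5
s1 = KMeans(n_clusters=M1, n_init=10, random_state=0).fit_predict(ge_rf)
s2 = KMeans(n_clusters=M2, n_init=10, random_state=0).fit_predict(ge_gps)

# Per-signal transition matrix: row j holds the estimate of P(S_t | S_{t-1} = j).
Pi = np.zeros((M1, M1))
for prev, cur in zip(s1[:-1], s1[1:]):
    Pi[prev, cur] += 1
Pi /= np.maximum(Pi.sum(axis=1, keepdims=True), 1)

# Coupling statistics: row m1 estimates P(S^(2) | S^(1) = m1) from co-occurring labels.
Phi = np.zeros((M1, M2))
for c1, c2 in zip(s1, s2):
    Phi[c1, c2] += 1
Phi /= np.maximum(Phi.sum(axis=1, keepdims=True), 1)
\end{verbatim}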
Such an interaction can be encoded in a Coupled GDBN (C-GDBN), as shown in Fig.~\\ref{fig_GDBN_CGDBN}-(b), composed of the two GDBNs representing the two signals, whose hidden variables at the discrete level are stochastically coupled (in $\\mathrm{\\tilde{C}}_{t}{=}[\\mathrm{\\tilde{S}}_{t}^{(1)},\\mathrm{\\tilde{S}}_{t}^{(2)}]$), as those variables are uncorrelated but have coupled means.\nThe interactive matrix $\\Phi \\in \\mathbb{R}^{M_{1} \\times M_{2}}$, which encodes the cluster firing pattern and allows the GPS signal to be predicted from the RF signal, is defined as follows:\n\\begin{equation} \\label{interactiveTM_fromRFtoGPS}\n\\Phi = \n \\begin{bmatrix} \n \\mathrm{P}(\\mathrm{\\Tilde{S}_{1}}^{(2)}|\\mathrm{\\Tilde{S}_{1}}^{(1)}) & \\mathrm{P}(\\mathrm{\\Tilde{S}_{2}}^{(2)}|\\mathrm{\\Tilde{S}_{1}}^{(1)}) & \\dots & \\mathrm{P}(\\mathrm{\\Tilde{S}_{M_{2}}}^{(2)}|\\mathrm{\\Tilde{S}_{1}}^{(1)}) \\\\\n \\mathrm{P}(\\mathrm{\\Tilde{S}_{1}}^{(2)}|\\mathrm{\\Tilde{S}_{2}}^{(1)}) & \\mathrm{P}(\\mathrm{\\Tilde{S}_{2}}^{(2)}|\\mathrm{\\Tilde{S}_{2}}^{(1)}) & \\dots & \\mathrm{P}(\\mathrm{\\Tilde{S}_{M_{2}}}^{(2)}|\\mathrm{\\Tilde{S}_{2}}^{(1)}) \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n \\mathrm{P}(\\mathrm{\\Tilde{S}_{1}}^{(2)}|\\mathrm{\\Tilde{S}_{M_{1}}}^{(1)}) & \\mathrm{P}(\\mathrm{\\Tilde{S}_{2}}^{(2)}|\\mathrm{\\Tilde{S}_{M_{1}}}^{(1)}) & \\dots & \\mathrm{P}(\\mathrm{\\Tilde{S}_{M_{2}}}^{(2)}|\\mathrm{\\Tilde{S}_{M_{1}}}^{(1)}) \n \\end{bmatrix}.\n\\end{equation}\n\n\\subsection{Joint Prediction and Perception}\nThe RSU starts predicting the RF signals it expects to receive from each vehicle based on a Modified Markov Jump Particle Filter (M-MJPF) \\cite{9858012}, which combines a particle filter (PF) and a Kalman filter (KF) to perform temporal and hierarchical predictions. Since the acquired C-GDBN allows predicting a certain signal's dynamic evolution based on another's evolution, it requires an interactive Bayesian filter capable of dealing with more complicated predictions. For this purpose, we propose to employ an Interactive M-MJPF (IM-MJPF) on the C-GDBN. The IM-MJPF consists of a PF that propagates a set of $L$ equally weighted particles, such that $\\{\\mathrm{\\tilde{S}}_{t,l}^{(1)}, \\mathrm{W}_{t,l}^{(1)}\\}{\\sim}\\{\\pi(\\mathrm{\\tilde{S}}_{t}^{(1)}), \\frac{1}{L}\\}$, where $l \\in L$ indexes the particles and the superscript $(1)$ denotes the RF signal type. In addition, the RSU relies on $\\Phi$ defined in \\eqref{interactiveTM_fromRFtoGPS} to predict $\\mathrm{\\tilde{S}}_{t}^{(2)}$, i.e., the discrete cluster of the vehicle's trajectory, starting from the predicted RF signal according to: $\\{\\mathrm{\\tilde{S}}_{t}^{(2)},\\mathrm{W}_{t,l}^{(2)}\\}{\\sim} \\{\\Phi(\\mathrm{\\tilde{S}}_{t,l}^{(1)}){=}\\mathrm{P}(.|\\mathrm{\\tilde{S}}_{t,l}^{(1)}), \\mathrm{W}_{t,l}^{(2)}\\}$. For each predicted discrete variable $\\mathrm{\\tilde{S}}_{t,l}^{(i)}$, multiple KFs are employed to predict the corresponding continuous variables, which are guided by the predictions at the higher level as stated in \\eqref{eq_continuousLevel} and can be represented probabilistically as $\\mathrm{P}(\\mathrm{\\tilde{X}}_{t}^{(i)}|\\mathrm{\\tilde{X}}_{t-1}^{(i)}, \\mathrm{\\tilde{S}}_{t}^{(i)})$.
The posterior probability that is used to evaluate expectations is given by:\n\\begin{multline} \\label{piX}\n \\pi(\\mathrm{\\tilde{X}}_{t}^{(i)})=\\mathrm{P}(\\mathrm{\\tilde{X}}_{t}^{(i)},\\mathrm{\\tilde{S}}_{t}^{(i)}|\\mathrm{\\tilde{Z}}_{t-1}^{(i)})= \\\\ \\int \\mathrm{P}(\\mathrm{\\tilde{X}}_{t}^{(i)}|\\mathrm{\\tilde{X}}_{t-1}^{(i)}, \\mathrm{\\tilde{S}}_{t}^{(i)}) \\lambda(\\mathrm{\\tilde{X}}_{t-1}^{(i)})d\\mathrm{\\tilde{X}}_{t-1}^{(i)},\n\\end{multline}\nwhere $\\lambda(\\mathrm{\\tilde{X}}_{t-1}^{(i)}){=}\\mathrm{P}(\\mathrm{\\tilde{Z}}_{t-1}^{(i)}|\\mathrm{\\tilde{X}}_{t-1}^{(i)})$. \nThe posterior distribution can be updated (and so representing the updated belief) after having seen the new evidence $\\mathrm{\\tilde{Z}}_{t}^{(i)}$ by exploiting the diagnostic message $\\lambda(\\mathrm{\\tilde{X}}_{t}^{(i)})$ in the following form: $\\mathrm{P}(\\mathrm{\\tilde{X}}_{t}^{(i)}, \\mathrm{\\tilde{S}}_{t}^{(i)}|\\mathrm{\\tilde{Z}}_{t}^{(i)}) {=} \\pi(\\mathrm{\\tilde{X}}_{t}^{(i)})\\lambda(\\mathrm{\\tilde{X}}_{t}^{(i)})$. Likewise, belief in discrete hidden variables can be updated according to: $\\mathrm{W}_{t,l}^{(i)}{=}\\mathrm{W}_{t,l}^{(i)}\\lambda (\\mathrm{\\tilde{S}}_{t}^{(i)})$ where:\n$\\lambda (\\mathrm{\\tilde{S}}_{t}^{(i)}) {=} \\lambda (\\mathrm{\\Tilde{X}}_{t}^{(i)})\\mathrm{P}(\\mathrm{\\Tilde{X}}_{t}^{(i)}|\\mathrm{\\tilde{S}}_{t}^{(i)}) {=} \\mathrm{P}(\\mathrm{\\tilde{Z}}_{t}^{(i)}|\\mathrm{\\Tilde{X}}_{t}^{(i)})\\mathrm{P}(\\mathrm{\\Tilde{X}}_{t}^{(i)}|\\mathrm{\\tilde{S}}_{t}^{(i)})$.\n\n\\subsection{Joint GPS spoofing and jamming detection}\nRSU can evaluate the current situation and identify if V2I is under attack, or the satellite link is under spoofing based on a multiple abnormality indicator produced by the IM-MJPF. 
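Before defining these indicators, the discrete-level prediction step of the IM-MJPF described above can be sketched as follows; this is a heavily simplified illustration with randomly generated placeholder matrices, and the KF update over the continuous states is omitted.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M1, M2, L = 5, 5, 100                       # RF clusters, GPS clusters, particles

Pi = rng.dirichlet(np.ones(M1), size=M1)    # placeholder: row j ~ P(S_t^(1) | S_{t-1}^(1) = j)
Phi = rng.dirichlet(np.ones(M2), size=M1)   # placeholder: row m1 ~ P(S_t^(2) | S_t^(1) = m1)

def discrete_prediction(s_prev):
    # Propagate RF-cluster particles and map each one to a predicted GPS cluster via Phi.
    s_rf = np.array([rng.choice(M1, p=Pi[s]) for s in s_prev])
    s_gps = np.array([rng.choice(M2, p=Phi[s]) for s in s_rf])
    return s_rf, s_gps

def reweight(w, lam_s):
    # Update particle weights with the diagnostic message lambda(S_t) and normalize.
    w = w * lam_s
    return w / w.sum()

s_prev = rng.integers(0, M1, size=L)
w = np.full(L, 1.0 / L)                     # equally weighted particles
s_rf, s_gps = discrete_prediction(s_prev)
\end{verbatim}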
The first indicator calculates the similarity between the predicted RF signal and the observed one, and is defined as:\n\\begin{equation}\\label{eq_CLA1}\n \\Upsilon_{\\mathrm{\\tilde{X}}_{t}^{(1)}} = -\\ln \\bigg( \\mathcal{BC} \\big(\\pi(\\mathrm{\\tilde{X}}_{t}^{(1)}),\\lambda(\\mathrm{\\tilde{X}}_{t}^{(1)}) \\big) \\bigg),\n\\end{equation}\nwhere $\\mathcal{BC}(\\cdot){=}\\int \\sqrt{\\pi(\\mathrm{\\tilde{X}}_{t}^{(1)})\\,\\lambda(\\mathrm{\\tilde{X}}_{t}^{(1)})}\\,d\\mathrm{\\tilde{X}}_{t}^{(1)}$ is the Bhattacharyya coefficient.\nThe second indicator calculates the similarity between the predicted GPS signal (obtained from the RF signal) and the observed one after decoding the RF signal, and is defined as:\n\\begin{equation}\\label{eq_CLA2}\n \\Upsilon_{\\mathrm{\\tilde{X}}_{t}^{(2)}} = -\\ln \\bigg( \\mathcal{BC} \\big(\\pi(\\mathrm{\\tilde{X}}_{t}^{(2)}),\\lambda(\\mathrm{\\tilde{X}}_{t}^{(2)}) \\big) \\bigg),\n\\end{equation}\nwhere $\\mathcal{BC}(\\cdot){=}\\int \\sqrt{\\pi(\\mathrm{\\tilde{X}}_{t}^{(2)})\\,\\lambda(\\mathrm{\\tilde{X}}_{t}^{(2)})}\\,d\\mathrm{\\tilde{X}}_{t}^{(2)}$.\nThe RSU can identify which hypothesis describes the current situation, i.e., whether a jammer is attacking the V2I link, a spoofer is attacking the link between the satellite and the vehicle, or both jammer and spoofer are absent, according to:\n\\begin{equation}\n \\begin{cases}\n \\mathcal{H}_{0}: \\text{if} \\ \\ \\Upsilon_{\\mathrm{\\tilde{X}}_{t}^{(1)}} < \\xi_{1} \\ \\text{and} \\ \\Upsilon_{\\mathrm{\\tilde{X}}_{t}^{(2)}} < \\xi_{2}, \\\\\n \\mathcal{H}_{1}: \\text{if} \\ \\ \\Upsilon_{\\mathrm{\\tilde{X}}_{t}^{(1)}} \\geq \\xi_{1} \\ \\text{and} \\ \\Upsilon_{\\mathrm{\\tilde{X}}_{t}^{(2)}} \\geq \\xi_{2}, \\\\\n \\mathcal{H}_{2}: \\text{if} \\ \\ \\Upsilon_{\\mathrm{\\tilde{X}}_{t}^{(1)}} < \\xi_{1} \\ \\text{and} \\ \\Upsilon_{\\mathrm{\\tilde{X}}_{t}^{(2)}} \\geq \\xi_{2},\n \\end{cases}\n\\end{equation}\nwhere $\\xi_{1} = \\mathbb{E}[\\Bar{\\Upsilon}_{\\mathrm{\\tilde{X}}_{t}^{(1)}}] + 3\\sqrt{\\mathbb{V}[\\Bar{\\Upsilon}_{\\mathrm{\\tilde{X}}_{t}^{(1)}}]}$, and $\\xi_{2} = \\mathbb{E}[\\Bar{\\Upsilon}_{\\mathrm{\\tilde{X}}_{t}^{(2)}}] + 3\\sqrt{\\mathbb{V}[\\Bar{\\Upsilon}_{\\mathrm{\\tilde{X}}_{t}^{(2)}}]}$.
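For a concrete but purely illustrative reading of these indicators, assuming the predictive message $\\pi(\\cdot)$ and the diagnostic message $\\lambda(\\cdot)$ are approximated by univariate Gaussians, the abnormality values and the ternary decision rule above could be sketched as follows; cases not covered by the rule default to $\\mathcal{H}_{0}$ in this sketch, and the training traces are placeholders.
\begin{verbatim}
import numpy as np

def bhattacharyya_gauss(mu_p, var_p, mu_l, var_l):
    # Closed-form Bhattacharyya coefficient between two univariate Gaussians.
    var = 0.5 * (var_p + var_l)
    bd = 0.125 * (mu_p - mu_l) ** 2 / var + 0.5 * np.log(var / np.sqrt(var_p * var_l))
    return np.exp(-bd)

def abnormality(mu_pred, var_pred, mu_obs, var_obs):
    # Upsilon = -ln BC(pi, lambda): small when prediction and evidence agree.
    return -np.log(bhattacharyya_gauss(mu_pred, var_pred, mu_obs, var_obs))

def classify(y_rf, y_gps, xi1, xi2):
    if y_rf >= xi1 and y_gps >= xi2:
        return "H1 (jammer on the V2I link)"
    if y_rf < xi1 and y_gps >= xi2:
        return "H2 (GPS spoofer)"
    return "H0 (normal)"

# Thresholds from attack-free training traces: mean + 3 standard deviations.
train_rf = np.abs(np.random.normal(0.1, 0.02, 1000))    # placeholder training traces
train_gps = np.abs(np.random.normal(0.1, 0.02, 1000))
xi1 = train_rf.mean() + 3 * train_rf.std()
xi2 = train_gps.mean() + 3 * train_gps.std()
\end{verbatim}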
In $\\xi_{1}$ and $\\xi_{2}$, $\\Bar{\\Upsilon}_{\\mathrm{\\tilde{X}}_{t}^{(1)}}$ and $\\Bar{\\Upsilon}_{\\mathrm{\\tilde{X}}_{t}^{(2)}}$ stand for the abnormality signals during training (i.e., normal situation when jammer and spoofer are absent).\n\n\\subsection{Evaluation metrics}\nIn order to evaluate the performance of the proposed method to jointly detect jammer and GPS spoofer, we adopt the jammer detection probability ($\\mathrm{P}_{d}^{j}$) and the spoofer detection probability ($\\mathrm{P}_{d}^{s}$), respectively, which are defined as:\n\\begin{equation}\n \\mathrm{P}_{d}^{j} = \\mathrm{Pr}(\\Upsilon_{\\mathrm{\\tilde{X}}_{t}^{(1)}}\\geq \\xi_{1}, \\Upsilon_{\\mathrm{\\tilde{X}}_{t}^{(2)}} \\geq \\xi_{2}|\\mathcal{H}_{1}),\n\\end{equation}\n\\begin{equation}\n \\mathrm{P}_{d}^{s} = \\mathrm{Pr}(\\Upsilon_{\\mathrm{\\tilde{X}}_{t}^{(1)}}< \\xi_{1}, \\Upsilon_{\\mathrm{\\tilde{X}}_{t}^{(2)}} \\geq \\xi_{2}|\\mathcal{H}_{2}).\n\\end{equation}\nAlso, we evaluate the accuracy of the proposed method in predicting and estimating the vehicles' trajectories and the expected RF signals by adopting the root mean square error (RMSE) defined as:\n\\begin{equation}\n RMSE = \\sqrt{ \\frac{1}{T} \\sum_{t=1}^{T}\\bigg( \\mathrm{\\tilde{Z}}_{t}^{(i)}-\\mathrm{\\tilde{X}}_{t}^{(i)} \\bigg)^{2} },\n\\end{equation}\nwhere $T$ is the total number of predictions.\n\n\\section{Simulation Results}\nIn this section, we evaluate the performance of the proposed method to jointly detect the jammer and the spoofer using extensive simulations. We consider $\\mathrm{N}=2$ vehicles interacting inside the environment and exchanging their states (i.e., position and velocity) with the RSU. The vehicles move along predefined trajectories performing various maneuvers which are picked from the \\textit{Lankershim} dataset proposed by \\cite{5206559}. The dataset depicts a four way intersection and includes about $19$ intersection maneuvers. RSU assigns one subchannel realizing the V2I link for each vehicle over which the vehicles' states are transmitted. The transmitted signal carrying the vehicle's state and the jamming signal are both QPSK modulated. 
\nThe simulation settings are: carrier frequency of $2$GHz, BW${=}1.4$MHz, cell radius of $500$m, RSU antenna height and gain is $25$m and $8$ dBi, receiver noise figure of $5$dB, vehicle antenna height and gain is $1.5$m and $3$dBi, vehicle speed is $40$Km/h, V2I transmit power is $23$dBm, jammer transmit power ranging from $20$dBm to $40$dBm, SNR of $20$dB, path loss model ($128.1{+}37.6log d$), Log-normal shadowing with $8$dB standard deviation and a fast fading channel following the Rayleigh distribution.\n\\begin{figure}[ht!]\n \\begin{center}\n \\begin{minipage}[b]{.55\\linewidth}\n \\centering\n \\includegraphics[width=5.0cm]{Results/ObservedTrajectories_reference}\n \\\\[-1.5mm]\n {\\scriptsize (a)}\n \\end{minipage}\n \\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=4.9cm]{Results/ObservedRFsignal_Veh1_reference}\n \\\\[-1.5mm]\n {\\scriptsize (b)}\n \\end{minipage}\n \\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=4.9cm]{Results/ObservedRFsignal_Veh2_reference}\n \\\\[-1.5mm]\n {\\scriptsize (c)}\n \\end{minipage}\n \\caption{An example visualizing the received RF signals from the two vehicles and the corresponding trajectories: (a) Vehicles' trajectories, (b) received RF signal from vehicle 1, (c) received RF signal from vehicle 2.}\n \\label{fig_receivedRFsignalandTrajectory}\n \\end{center}\n\\end{figure}\n\\begin{figure}[t!]\n \\begin{center}\n \\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=4.5cm]{Results/clusters_trajectory_veh1}\n \\\\[-1.5mm]\n {\\scriptsize (a)}\n \\end{minipage}\n \\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=4.5cm]{Results/clusters_trajectory_veh2}\n \\\\[-1.5mm]\n {\\scriptsize (b)}\n \\end{minipage}\n \\begin{minipage}[b]{0.49\\linewidth}\n \\centering\n \\includegraphics[width=4.5cm]{Results/clusters_RFsignal_veh1}\n \\\\[-1.5mm]\n {\\scriptsize (c)}\n \\end{minipage}\n \\begin{minipage}[b]{0.49\\linewidth}\n \\centering\n \\includegraphics[width=4.5cm]{Results/clusters_RFsignal_veh2}\n \\\\[-1.5mm]\n {\\scriptsize (d)}\n \\end{minipage}\n \n \\caption{GNG output after clustering the generalized errors obtained from different experiences: (a) clustered trajectory of vehicle 1, (b) clustered trajectory of vehicle 2, (c) clustered RF signal received from vehicle 1, (d) clustered RF signal received from vehicle 2.}\n \\label{fig_GNG_of_receivedRFsignalandTrajectory}\n \\end{center}\n\\end{figure}\n\nThe RSU aims to learn multiple interactive models (i.e., C-GDBN models) encoding the cross relationship between the received RF signal from each vehicle and its corresponding trajectory. These models allow the RSU to predict the trajectory the vehicle will follow based on the received RF signal and evaluate whether the V2I is under jamming attacks or the satellite link is under spoofing. It is to note that the RSU is receiving only the RF signals from the two vehicles and obtaining their positions after decoding the RF signals. 
Thus, the RSU should be able to evaluate if the received RF signals are evolving according to the dynamic rules learned so far and if the vehicles are following the expected (right) trajectories to decide whether the V2I links are really under attack or whether the satellite link is under spoofing.\n\nFig.~\\ref{fig_receivedRFsignalandTrajectory}-(a) illustrates an example of the interaction between the two vehicles performing a particular manoeuvre, and Fig.~\\ref{fig_receivedRFsignalandTrajectory}-(b) shows the received RF signals by the RSU from the two vehicles. At the beginning of the learning process, RSU performs predictions according to the simplified model defined in \\eqref{eq_continuousLevel} where $\\mathrm{U}_{\\mathrm{\\Tilde{S}_{t}}^{(i)}} {=} 0$.\nAfter obtaining the generalized errors as pointed out in \\eqref{GE_continuousLevel_initialModel}, RUS clusters those errors using GNG to learn two GDBN models encoding the dynamic rules of how the RF signal and the GPS signal evolve with time, respectively, as showed in Fig.~\\ref{fig_GNG_of_receivedRFsignalandTrajectory} and Fig.~\\ref{fig_graphicalRep_transitionMatrices}. RSU can couple the two GDBNs by learning the interactive transition matrix that is encoded in a C-GDBN as shown in Fig.~\\ref{fig_interactiveMatrices}.\n\\begin{figure}[t!]\n \\begin{center}\n \\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=4.5cm]{Results/graphTransition_Trajectory_veh1}\n \\\\[-1.5mm]\n {\\scriptsize (a)}\n \\end{minipage}\n \\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=4.5cm]{Results/graphTransition_Trajectory_veh2}\n \\\\[-1.5mm]\n {\\scriptsize (b)}\n \\end{minipage}\n \\begin{minipage}[b]{0.49\\linewidth}\n \\centering\n \\includegraphics[width=4.5cm]{Results/graphTransition_RFsignal_veh1}\n \\\\[-1.5mm]\n {\\scriptsize (c)}\n \\end{minipage}\n \\begin{minipage}[b]{0.49\\linewidth}\n \\centering\n \\includegraphics[width=4.5cm]{Results/graphTransition_RFsignal_veh2}\n \\\\[-1.5mm]\n {\\scriptsize (d)}\n \\end{minipage}\n \\caption{Graphical representation of the transition matrices (TM): (a) TM related to the trajectory of vehicle 1, (b) TM related to the trajectory of vehicle 2, (c) TM related to the RF signal received from vehicle 1, (d) TM related to the RF signal received from vehicle 2.}\n \\label{fig_graphicalRep_transitionMatrices}\n \\end{center}\n\\end{figure}\n\\begin{figure}[t!]\n \\begin{center}\n \\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=3.8cm]{Results/interactiveMatrix_RFtoGPS_Neu5_veh1}\n \\\\[-1.0mm]\n {\\scriptsize (a)}\n \\end{minipage}\n \\begin{minipage}[b]{0.49\\linewidth}\n \\centering\n \\includegraphics[width=3.8cm]{Results/interactiveMatrix_RFtoGPS_Neu25_veh1}\n \\\\[-1.0mm]\n {\\scriptsize (d)}\n \\end{minipage}\n \n \\caption{Interactive transition matrix defined in \\eqref{interactiveTM_fromRFtoGPS} using different configurations: (a) $\\mathrm{M_{1}}=5$, $\\mathrm{M_{2}}=5$, (b) $\\mathrm{M_{1}}=25$, $\\mathrm{M_{2}}=25$.}\n \\label{fig_interactiveMatrices}\n \\end{center}\n\\end{figure}\n\\begin{figure}[t!]\n \\begin{center}\n \\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=4.9cm]{Results/RF_situation1_best_veh1}\n \\\\[-1.5mm]\n {\\scriptsize (a)}\n \\end{minipage}\n \\begin{minipage}[b]{0.49\\linewidth}\n \\centering\n \\includegraphics[width=4.9cm]{Results/RF_situation1_worst_veh1}\n \\\\[-1.5mm]\n {\\scriptsize (b)}\n \\end{minipage}\n \\begin{minipage}[b]{0.49\\linewidth}\n 
\\centering\n \\includegraphics[width=4.9cm]{Results/RF_situation1_best_veh2}\n \\\\[-1.5mm]\n {\\scriptsize (c)}\n \\end{minipage}\n \\begin{minipage}[b]{0.49\\linewidth}\n \\centering\n \\includegraphics[width=4.9cm]{Results/RF_situation1_worst_veh2}\n \\\\[-1.5mm]\n {\\scriptsize (d)}\n \\end{minipage}\n \\caption{An example visualizing the predicted and observed RF signals transmitted by the 2 vehicles using different configurations. Predicted RF signal from: (a) vehicle 1 using $\\mathrm{M_{1}}{=}5$, $\\mathrm{M_{2}}{=}5$, (b) vehicle 1 using $\\mathrm{M_{1}}{=}25$, $\\mathrm{M_{2}}{=}25$, (c) vehicle 2 using $\\mathrm{M_{1}}{=}5$, $\\mathrm{M_{2}}{=}5$, (d) vehicle 2 using $\\mathrm{M_{1}}{=}25$, $\\mathrm{M_{2}}{=}25$.}\n \\label{fig_situation1_PredictedRF}\n \\end{center}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\begin{center}\n \\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=4.8cm]{Results/GPSfromRF_situation1_best}\n \\\\[-1.0mm]\n {\\scriptsize (a)}\n \\end{minipage}\n \\begin{minipage}[b]{0.49\\linewidth}\n \\centering\n \\includegraphics[width=4.8cm]{Results/GPSfromRF_situation1_worst}\n \\\\[-1.0mm]\n {\\scriptsize (b)}\n \\end{minipage}\n %\n \\caption{An example visualizing the predicted and observed trajectories of two vehicles interacting in the environment. (a) $\\mathrm{M_{1}}{=}5$, $\\mathrm{M_{2}}{=}5$, (b) $\\mathrm{M_{1}}{=}25$, $\\mathrm{M_{2}}{=}25$.}\n \\label{fig_situation1_VehiclesTrajectories}\n \\end{center}\n\\end{figure}\n\n\\begin{figure}[ht!]\n \\begin{center}\n \\begin{minipage}[b]{.49\\linewidth}\n \\centering\n \\includegraphics[width=4.8cm]{Results/rmse_on_trajectory}\n \\\\[-1.0mm]\n {\\scriptsize (a)}\n \\end{minipage}\n \\begin{minipage}[b]{0.49\\linewidth}\n \\centering\n \\includegraphics[width=4.8cm]{Results/rmse_on_RFSignal}\n \\\\[-1.0mm]\n {\\scriptsize (b)}\n \\end{minipage}\n \\caption{The average RMSE after testing different experiences and examples of: (a) trajectories and (b) RF signals.}\n \\label{fig_rmse_onTraj_onSig}\n \\end{center}\n\\end{figure}\n\nFig.~\\ref{fig_situation1_PredictedRF} illustrates an example comparing between predicted RF signals and observed ones based on two different configurations in learning the interactive matrix (as shown in Fig.~\\ref{fig_interactiveMatrices}). Also, Fig.~\\ref{fig_situation1_VehiclesTrajectories} illustrates an example comparing between the predicted and observed trajectories of the two vehicles using the two interactive matrices depicted in Fig.~\\ref{fig_interactiveMatrices}. From Fig.~\\ref{fig_situation1_PredictedRF} and Fig.~\\ref{fig_situation1_VehiclesTrajectories} we can see that using an interactive matrix with less clusters allows to perform better predictions compared to that with more clusters. This can be validated by observing Fig.~\\ref{fig_rmse_onTraj_onSig} that illustrates the RMSE values versus different number of clusters related to the two models representing the dynamics of the received RF signals and the vehicles' trajectories. It can be seen that as the number of clusters increases the RMSE error increases, since adding more clusters decreases the firing probability that explains the possibility to be in one of the $M_{2}$ clusters of the second model conditioned in being in a certain cluster of the first model.\n\nFig.~\\ref{fig_exNormal_Spoofed_JammedTrajectories} illustrates an example of vehicle's trajectory under normal situation (i.e., jammer and spoofer are absent), under jamming attacks and under spoofing attacks. 
Also the figure shows the predicted trajectory which should follow the same dynamic rules learned during a normal situation. After that, we implemented the IM-MJPF on the learned C-GDBN to perform multiple predictions, i.e., to predict the RF signal that the RSU is expecting to receive from a certain vehicle and the corresponding trajectory that the vehicle is supposed to follow. IM-MJPF through the comparison between multiple predictions and observations, produces multiple abnormality signals as defined in \\eqref{eq_CLA1} and \\eqref{eq_CLA2} which are used to detect the jammer and the spoofer.\n\nFig.~\\ref{fig_abnormalitySignals_JammerSpoofer} illustrates the multiple abnormality signals related to the example shown in Fig.~\\ref{fig_exNormal_Spoofed_JammedTrajectories}. We can observe that the abnormal signals related to both RF signal (Fig.~\\ref{fig_abnormalitySignals_JammerSpoofer}-(a)) and trajectory (Fig.~\\ref{fig_abnormalitySignals_JammerSpoofer}-(b)) are below the threshold under normal situations. This proves that RSU learned the correct dynamic rules of how RF signals and trajectories evolve when the jammer and spoofer are absent (i.e., under normal situations). Also, we can see that the RSU can notice a high deviation on both the RF signal and the corresponding trajectory due to a jamming interference from what it has learned so far by relying on the abnormality signals. In contrast, we can see that under spoofing attacks, RSU notice a deviation only on the trajectory and not on the RF signal since the spoofer has affected only the positions without manipulating the RF signal. In addition, it is obvious how the proposed method allows the RSU to identify the type of abnormality occurring and to explain the cause of the detected abnormality (i.e., understanding if it was because of a jammer attacking the V2I link or a spoofer attacking the satellite link).\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=6.5cm]{Results/trajectories_underJamming_andSpoofing}\n \n \\caption{Vehicle's trajectory under: normal situation, jamming and spoofing.}\n \\label{fig_exNormal_Spoofed_JammedTrajectories}\n\\end{figure}\n\\begin{figure}[t!]\n \\begin{center}\n \\begin{minipage}[b]{.92\\linewidth}\n \\centering\n \\includegraphics[height=2.6cm]{Results/abnSignal_onRF}\n \\\\[-1.5mm]\n {\\scriptsize (a)}\n \\end{minipage}\n \\begin{minipage}[b]{.92\\linewidth}\n \\centering\n \\includegraphics[height=2.6cm]{Results/abnSignal_onGPS}\n \\\\[-1.5mm]\n {\\scriptsize (b)}\n \\end{minipage}\n %\n \\caption{Abnormality Signals related to the example shown in Fig.\\ref{fig_exNormal_Spoofed_JammedTrajectories}: (a) abnormality indicators related to the RF signal, (b) abnormality indicators related to the trajectory.}\n \\label{fig_abnormalitySignals_JammerSpoofer}\n \\end{center}\n\\end{figure}\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[height=3.2cm]{Results/Detection_Probability_RFfromGPS_versusPj}\n \\caption{Detection probability ($\\mathrm{P_{d}}$) versus jammer's power ($\\mathrm{P_{J}}$) using different number of clusters $\\mathrm{M}_{2}$.}\n \\label{fig_jammerDetectionProb}\n\\end{figure}\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[height=3.2cm]{Results/spoofingDetectionProbability_falseAlarm_versusM2}\n \\caption{Spoofing detection probability ($\\mathrm{P}_{d}^{s}$) and spoofing false alarm ($\\mathrm{P}_{f}^{s}$) versus the number of clusters $\\mathrm{M}_{2}$.}\n \\label{fig_spooferDetectionProb}\n\\end{figure}\n\nFig.~\\ref{fig_jammerDetectionProb} shows 
the overall performance of the proposed method in detecting the jammer by testing many situations and examples and by considering different jamming powers which ranges from $20$dBm to $40$dBm. It can be seen that the proposed method is able to detect the jammer with high probabilities (near $1$) and by considering low and high jamming powers. Also, the figure compares the performance in detecting the jammer by varying the number of clusters ($M_{2}$).\nFig.~\\ref{fig_spooferDetectionProb} shows the overall performance of the proposed method in detecting the spoofer by testing different different examples of driving maneuvers. It can be seen that the RSU is able to detect the spoofer with high detection probability and null false alarm versus different number of clusters.\n\n\\section{Conclusion}\nA joint detection method of GPS spoofing and jamming attacks is proposed. The method is based on learning a dynamic interactive model encoding the cross-correlation between the received RF signals from multiple vehicles and their corresponding trajectories. Simulation results show the high effectiveness of the proposed approach in jointly detecting the GPS spoofer and jammer attacks. \nSubsequent work will extend the system model to consider more than two vehicles with different channel conditions and various modulation schemes to evaluate the effectiveness of the proposed method.\n\n\\bibliographystyle{IEEEtran}\n", "answers": ["The generative interactive model used in the method is called the Coupled Generalized Dynamic Bayesian Network (C-GDBN)."], "length": 4482, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "2d0e8d88c8dcd187eb0590fb072f6a69bd774c9aa029246b"} {"input": "How does a media application determine the context of an event?", "context": "2015-05-14 Assigned to ROVI GUIDES, INC. reassignment ROVI GUIDES, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: TV GUIDE, INC.\n2015-05-14 Assigned to UV CORP. reassignment UV CORP. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: UNITED VIDEO PROPERTIES, INC.\n2015-05-14 Assigned to TV GUIDE, INC. reassignment TV GUIDE, INC. MERGER (SEE DOCUMENT FOR DETAILS). Assignors: UV CORP.\nMethods and systems are described herein for quickly and easily displaying supplemental information about an event occurring in a media asset. In some embodiments, a media application may use a content-recognition module to determine the context of an event and distribute itemized tasks to multiple entities in order to generate the supplemental information about the event.\nWhile viewing media assets (e.g., a television program), users may wish to learn more information about an event (e.g., a statement made by a person appearing in the media asset, the validity of a claim in an advertisement, etc.) occurring in the media asset. While some media assets allow a user to select additional options or added features (e.g., pop-up biographies about the cast and crew), when the added features appear and what topic the added features concern are determined by the content producer and not the user. Furthermore, as the added feature is derived from the content producer, the added feature may be biased or may present limited viewpoints about an event. 
Therefore, added features provided by a content producer may not provide the added information about an event that a user desires.\nIn order to gain the added information that a user desires, the user may use additional devices (e.g., a laptop computer) to search (e.g., using an Internet search engine) for more information about the event. However, without knowing the proper context (e.g., who said the statement, what was the tone of the statement, when was the statement said, etc.) of the event or what search terms to use to describe the context of the event (e.g., how to describe the tone of the statement), a user may not be able to determine (even using a search engine) more information about the event. Moreover, the use of general search terms may not provide the accuracy or precision needed by the user. Furthermore, even if a user may eventually determine the information, the effort and time required may distract the user from the media asset.\nAccordingly, methods and systems are described herein for quickly and easily displaying supplemental information about an event occurring in a media asset. In some embodiments, a media application may use a content-recognition module to determine the context of an event in a media asset and distribute itemized tasks to multiple users in order to generate the supplemental information about the event. The context-recognition module prevents the user from being distracted from the media asset (e.g., while the user attempts to describe the context of the event or search for information about the event). In addition, by distributing tasks to multiple entities (e.g., crowd-sourcing), the media application may collect large amounts of information in relatively short periods of time (or in real-time) and aggregate and/or filter the information to generate the supplemental information about the event based on multiple viewpoints and/or sources. By using multiple viewpoints and/or sources, the media application enhances the completeness (e.g., by providing unbiased information) and accuracy of the supplemental information.\nFor example, when a statement or action is made by a character or person appearing on a media asset (e.g., a television program), a user may request supplemental information about the statement or action. In response, the media application may determine the context of the statement (e.g., who said the statement and to what the statement was referring) or action (e.g., what was the reason for the action). After determining the context of the statement or action, the media application may itemize into tasks the additional information it requires in order to generate the supplemental information. The media application may then transmit requests including the tasks to a plurality of other users. Based on the responses from the plurality of other users, the media application may generate the supplemental information for display to the user.\nIn some embodiments, a media application may use multiple types of content-recognition modules and/or algorithms to determine the context of an event. For example, the media application may process data associated with the event in order to determine the context of an event. In some embodiments, processing the various types of data may include cross-referencing the data in a database indicating the different contexts the event may have.\nIn some embodiments, a media application may generate supplemental information about an event in a media asset in response to a user request. 
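As a purely illustrative, non-limiting sketch (the class and function names below are hypothetical and do not describe any particular product or API), the overall flow described above — receiving a request, determining the context of the event, itemizing the additional information into small tasks, and generating supplemental information from the returned messages — might be organized as follows:
```python
from dataclasses import dataclass, field

@dataclass
class Event:
    speaker: str        # who made the statement or performed the action
    statement: str      # the recognized words
    tone: str           # e.g., "neutral" or "angry"
    timestamp: float    # progress point within the media asset, in seconds

@dataclass
class SupplementalRequest:
    event: Event
    tasks: list = field(default_factory=list)      # small, independent questions
    responses: list = field(default_factory=list)  # messages returned by other users

def itemize_tasks(event: Event) -> list:
    """Break the needed additional information into small, independent tasks."""
    return [
        f"Who or what does '{event.speaker}' refer to in this program?",
        f"What facts support or contradict the statement: \"{event.statement}\"?",
    ]

def generate_supplemental_info(request: SupplementalRequest) -> str:
    """Aggregate the additional information returned by the plurality of users."""
    facts = "; ".join(request.responses)
    return f"About \"{request.event.statement}\": {facts}"
```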
In order to generate the supplemental information, the media application may transmit, to multiple users, a request for additional information regarding a context of an event shown in a media asset. Upon receiving messages from the plurality of users that include the requested additional information, the media application may generate the supplemental information associated with the context of the event based on the messages.\nIt should be noted, the systems and/or methods described above may be applied to, or used in accordance with, other systems, methods and/or apparatuses.\nFIG. 9 is a flowchart of illustrative steps for generating supplemental information based on additional information provided by a plurality of users in accordance with some embodiments of the disclosure.\nAccordingly, methods and systems are described herein for quickly and easily displaying supplemental information about an event occurring in a media asset. The methods and systems described herein alleviate the need for a user to determine the proper context (e.g., who said a statement, what was the tone of the statement, when was the statement said, etc.) of an event in a media asset, or the search terms to use to describe the event (e.g., the proper search terms to describe the tone of the statement), in order to determine more information about the event. In addition, the methods and systems increase the completeness and accuracy of the information compared to information gathered using traditional searching methods (e.g., an Internet search engine), without distracting the user from the media asset.\nIn some embodiments, a media application may receive a user input from a user device for supplemental information about the context of an event shown in a media asset. The media application may determine additional information required to generate the supplemental information about the context of the event shown in a media asset, and transmit requests for the additional information to one or more users. The media application may receive one or more messages, which include the requested additional information, from the one or more users and generate the supplemental information based on the one or more message. The media application may then instruct the user device to display the supplemental information.\nAs used herein, “supplemental information” refers to any information related to or associated with an event in a media asset. For example, supplemental information may include, but is not limited to, the verification of a statement or claim in a media asset, further descriptions and/or information about objects or entities shown and/or described in a media asset, and/or any other information, including, but not limited to, a video or audio segment, that may interest a user about an event in a media asset. In some embodiments, the media application may generate supplemental information based on one or more pieces of additional information.\nAs used herein, “additional information” refers to any information used to generate supplemental information. For example, in an embodiment in which supplement information is the verification of a statement made by a person displayed in a media asset, and a request for the additional information from the media application includes a request for a fact needed to verify the factual basis of the statement, the additional information may be the fact used to verify the statement. 
For example, if an advertisement claims to have the best product on the market, the media application may use additional information such as the name of the product in question, a list of all other products in the market, and the results of a comparison study of the product in question to all other products to determine whether or not the product is actually the “best” product on the market. Additionally or alternatively, the media application may request industry and/or user reviews related to the event (e.g., reviews indicating the quality of the product). The media application may then use the information in the reviews to generate the supplemental information.\nAs used herein, an “event” is any action (e.g., a verbal statement, opinion and/or physical movement), segment (e.g., a portion of a news broadcast featuring a particular topic), or other occurrence during a media asset that may be of particular interest to a user. For example, in some embodiments an event may be a statement or gesture made by a character or person in a media asset affirming or denying a claim.\nAs referred to herein, the terms “media asset” and “content” should be understood to mean an electronically consumable user asset, such as television programming, as well as pay-per-view programs, on-demand programs (as in video-on-demand (VOD) systems), Internet content (e.g., streaming content, downloadable content, Webcasts, etc.), video clips, audio, content information, pictures, rotating images, documents, playlists, websites, articles, books, electronic books, blogs, advertisements, chat sessions, social media, applications, games, and/or any other media or multimedia and/or combination of the same. Media applications also allow users to navigate among and locate content. As referred to herein, the term “multimedia” should be understood to mean content that utilizes at least two different content forms described above, for example, text, audio, images, video, or interactivity content forms. Content may be recorded, played, displayed or accessed by user equipment devices, but can also be part of a live performance.\nAs referred to herein, the phrase “user equipment device,” “user equipment,” “user device,” “electronic device,” “electronic equipment,” “media equipment device,” or “media device” should be understood to mean any device for accessing the content described above, such as a television, a Smart TV, a set-top box, an integrated receiver decoder (IRD) for handling satellite television, a digital storage device, a digital media receiver (DMR), a digital media adapter (DMA), a streaming media device, a DVD player, a DVD recorder, a connected DVD, a local media server, a BLU-RAY player, a BLU-RAY recorder, a personal computer (PC), a laptop computer, a tablet computer, a WebTV box, a personal computer television (PC/TV), a PC media server, a PC media center, a hand-held computer, a stationary telephone, a personal digital assistant (PDA), a mobile telephone, a portable video player, a portable music player, a portable gaming machine, a smart phone, or any other television equipment, computing equipment, or wireless device, and/or combination of the same.\nIn some embodiments, the user equipment device may have a front facing screen and a rear facing screen, multiple front screens, or multiple angled screens. In some embodiments, the user equipment device may have a front facing camera and/or a rear facing camera. 
On these user equipment devices, users may be able to navigate among and locate the same content available through a television. Consequently, media may be available on these devices, as well. The media provided may be for content available only through a television, for content available only through one or more of other types of user equipment devices, or for content available both through a television and one or more of the other types of user equipment devices. The media applications may be provided as on-line applications (i.e., provided on a website), or as stand-alone applications or clients on user equipment devices. Various devices and platforms that may implement media applications are described in more detail below.\nIn some embodiments, a media application may transmit, to a plurality of users, a request for additional information regarding a context of an event shown in a media asset. As used herein, a “plurality of users” may include, but is not limited to any device, entity, or source of information that may process a request for additional information. For example, the plurality of users may include a person operating a user equipment device. In some embodiments, the person may receive (e.g., via e-mail, Internet posting, advertisement, or any other applicable information delivery method) the request from the media application for additional information, and in response generate a message (e.g., via a return e-mail, an answer to the Internet posting, a user input in the advertisement, or any other applicable method of transmitting information) that includes the additional information. It should be noted that in some embodiments, transmitting a request to a plurality of users may also include querying one or more databases (e.g., an Internet search engine or any other storage device, including, but not limited to, databases containing previously generated supplemental information and/or additional information) or consulting one or more data gathering services (e.g., a intelligent personal assistant application) for the additional information.\nIn some embodiments, a media application may use a content-recognition module or algorithm to determine the context of an event and distribute itemized tasks to multiple users in order to generate the supplemental information about the event. The content-recognition module may use object recognition techniques such as edge detection, pattern recognition, including, but not limited to, self-learning systems (e.g., neural networks), optical character recognition, on-line character recognition (including but not limited to, dynamic character recognition, real-time character recognition, intelligent character recognition), and/or any other suitable technique or method to determine the objects and/or characteristics in media assets. For example, the media application may receive media assets in the form of a video. The video may include a series of frames. For each frame of the video, the media application may use a content-recognition module or algorithm to determine the context (e.g., the person that is speaking or a facial gesture affirming or denying a statement) of an event occurring during the frame or series of frames.\nIn some embodiments, the content-recognition module or algorithm may also include speech recognition techniques, including but not limited to Hidden Markov Models, dynamic time warping, and/or neural networks (as described above) to translate spoken words into text. 
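Purely as a hypothetical sketch (the dictionary entries, threshold, and function names are invented for illustration and are not part of any actual implementation), deriving a coarse context record from an already-transcribed statement and its audio volume — anticipating the volume-based tone heuristic and the keyword cross-referencing discussed below — might look like:
```python
# Hypothetical context database mapping recognized terms to their likely referents
# (the example entries mirror the "We export a lot of coal" example of FIG. 1).
CONTEXT_DB = {
    "we": "an organization or body",
    "export": "shipping goods out of a country",
    "a lot": "a particular numerical amount",
    "coal": "a mineral of fossilized carbon",
}

def detect_tone(volume_db: float) -> str:
    """Rough heuristic: a high volume may indicate an angry tone."""
    return "angry" if volume_db > 75.0 else "neutral"

def determine_context(speaker: str, statement: str, volume_db: float) -> dict:
    """Cross-reference the recognized words against the context database."""
    text = statement.lower()
    referents = {term: meaning for term, meaning in CONTEXT_DB.items() if term in text}
    return {
        "speaker": speaker,
        "statement": statement,
        "tone": detect_tone(volume_db),
        "referents": referents,
    }

context = determine_context("entity 104", "We export a lot of coal", volume_db=62.0)
```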
The content-recognition module may also use other techniques for processing audio and/or visual data. For example, the media application may monitor the volume of a statement in a media asset to determine the tone of the statement (e.g., a high volume may indicate an angry tone).\nIn addition, the media application may use multiple types of optical character recognition and/or fuzzy logic, for example, when determining the context of a keyword(s) retrieved from data (e.g., media data, translated audio data, subtitle data, user-generated data, etc.) associated with the media asset (or when cross-referencing various types of data with databases indicating the different contexts of events as described below). For example, the particular data field may be a textual data field. Using fuzzy logic, the system may determine two fields and/or values to be identical even though the substance of the data field or value (e.g., two different spellings) is not identical. In some embodiments, the system may analyze particular data fields of a data structure or media asset frame for particular values or text. The data fields could be associated with characteristics, additional information, and/or any other data required for the function of the embodiments described herein. Furthermore, the data fields could contain values (e.g., the data fields could be expressed in binary or any other suitable code or programming language).\nAs used herein, the “context” of an event refers to the set of circumstances or facts that surround a particular event that influence or affect the meaning of the event. For example, when determining the context of a written and/or spoken statement, the media application may determine who or what authored/stated the statement, the written and/or spoken words and/or other statements that preceded and/or followed the statement, the tone of the statement, and/or any other conditions that may alter the connotation of the statement.\nFIG. 1 shows an illustrative example of a media application that may be used to display supplemental information in accordance with some embodiments of the disclosure. Display 100 illustrates a display on a user device displaying a media asset. Display 108 illustrates a display featuring supplemental information as described and/or generated in FIGS. 6-9. It should be noted that display 100 and display 108 may be presented on any of the devices shown in FIGS. 3-4. For example, in some embodiments, display 100 and display 108 may be displayed on user equipment 402, 404, and/or 406 (FIG. 4).\nIn FIG. 1, display 100 represents a display of a media asset (e.g., a streaming television program) on a user device (e.g., user equipment 402, 404, and/or 406 (FIG. 4)). Display 100 includes entity 102 and entity 104. In display 100, entity 104 is currently speaking as indicated by event 106. As shown in FIG. 1, event 106 is a statement (e.g., “We export a lot of coal”) by a person in the media asset.\nIn some embodiments, display 108 represents the continued display of the media asset on a user device, after a user has requested supplemental information about event 106. For example, a media application may have received a user input (e.g., via user input interface 310 (FIG. 3)) while entity 104 was speaking. Using the systems and methods described herein (e.g., FIGS. 6-9), the media application generated supplemental information 110. 
Supplemental information 110 represents more information about event 106.\nFor example, the media application (e.g., media application 206 (FIG. 2)) may have determined the context of event 106. Specifically, the media application may determine via a content-recognition module or algorithm the words spoken and/or actions by the person during the event. Additionally or alternatively, the media application may analyze the words and/or action during a predetermined amount of time (e.g., ten seconds) before and/or after the event (e.g., in order to better understand the context of the event). Furthermore, by cross-referencing the words and/or other information obtained by the content-recognition module (e.g., as discussed below in relation to FIG. 5) with a database, the content-recognition module determines that the term “we,” the person in the media asset refers to an organization or body. The content-recognition module or algorithm may also determine that the term “export” refers to shipping goods out of a country. The content-recognition module or algorithm may also determine that the term “a lot” refers to a particular numerical amount. Finally, the content-recognition module or algorithm may also determine that the term “coal” refers to a mineral of fossilized carbon.\nThe content-recognition module or algorithm may also determine the relationships between words and/or other information obtained by the content-recognition module. For example, by processing the relationship between the words, the media application determines that event 106 is a statement regarding a particular amount of a particular substance shipped out of a particular country. Therefore, the media application determines that the request for supplemental information is likely a request to determine the validity of the statement. The media application then generates the supplemental information.\nThe media application may also have stored supplemental information generated by previous requests (e.g., supplemental information generated in response to the same or different user viewing the media asset at an earlier date), and display the supplemental information again during the event (either in response to a user input requesting supplemental information or automatically without a user requesting supplemental information).\nFIG. 2 shows an illustrative example of a system that may be used to generate supplemental information (e.g., supplemental information 110 (FIG. 1)) based on additional information provided by a plurality of users in accordance with some embodiments of the disclosure. For example, in some embodiments, system 200 may be used to generate supplemental information (e.g., supplemental information 110 (FIG. 1)) on a display (e.g., display 108 (FIG. 1)) of a user device (e.g., user equipment 402, 404, and/or 406 (FIG. 4)). It should be noted that in some embodiments, the devices shown in FIG. 2 may correspond to one or more devices in FIGS. 3-4.\nFIG. 2 shows system 200. In system 200, a user is currently accessing a media asset on display 202. In some embodiments, display 202 may correspond to display 100 (FIG. 1)). During an event (e.g., event 106 (FIG. 1)) a user may have requested supplemental information about an event (e.g., event 106 (FIG. 1)) in display 202 using user device 204. Media application 206, which in some embodiments, may be implemented on user device 204 or at a remote location (e.g., supplemental information source 418 (FIG. 
4)), receives the request for supplemental information.\nMedia application 206 determines the context of the event (e.g., who said the statement making up the event and to what the statement was referring). After determining the context of the statement, the media application may itemize into one or more tasks, additional information (e.g., facts) it requires in order to generate the supplemental information (e.g., a verification or correction of the factual basis of the statement). For example, if the event is a statement about the amount of coal that is exported from the United States (e.g., as described in relation to FIG. 1 above), media application 206 may determine the fact required to generate the supplemental information is the exact numerical amount of coal that is exported from the United States. The media application may then transmit requests for the additional information (e.g., a request for the exact numerical amount of coal that is exported from the United States) to a plurality of other users.\nIn FIG. 2, users operating user device 208, user device 210, and user device 212 represent a plurality of users. Having determined the additional information it requires in order to generate the supplemental information, media application 206 requests the additional information from the plurality of users. In system 200, media application 206 has transmitted the same task (e.g., the same question) to each of the plurality of users. In some embodiments, one or more of the users may receive different tasks. For example, by breaking the additional information into small, independent tasks, media application 206 may increase the speed (e.g., multiple users may work concurrently to solve different parts of a problem) and accuracy (e.g., reducing the tasks to smaller, less complex problems reduces the chance of human error) of the additional information returned by the plurality of users.\nIn addition, by breaking the additional information into small, independent tasks, the plurality of users may not know to what they are contributing (enhancing the privacy of the user that requested the supplemental information), however, the plurality of users can still be effective in their individual tasks. In addition, by breaking the additional information into small, independent tasks, the media application may more easily outsource the requests for additional information. For example, one or more of the tasks used to generate the additional information may be the same as one or more of the tasks used to generate other additional information (e.g., additional information used to generate different supplemental information in response to a request for supplemental information about the same or a different event issued by the same or a different user). The response to each of the request and/or the additional information may be stored (e.g., on any of the devices accessible by communications network 414 (FIG. 4)) for subsequent retrieval.\nBased on the responses, transmitted as messages including the additional information, from the plurality of other users, media application 206 may generate the supplemental information (e.g., supplemental information 110 (FIG. 1)) for display to the user on the user device 204. For example, media application may aggregate, append, and/or compare the additional information in each of the messages received from the plurality of users. 
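As an illustrative sketch only (hypothetical structures; no particular messaging mechanism is implied), distributing the same small task to several users and combining their answers by a simple majority could look like the following; a real system might instead weight answers by source reliability:
```python
from collections import Counter

def distribute(task_question: str, users: list) -> list:
    """Create one small, independent request per user (delivery mechanism omitted)."""
    return [{"user": u, "task": task_question} for u in users]

def aggregate_by_majority(responses: list) -> str:
    """Combine the independent answers; the most common answer is kept."""
    counts = Counter(r["answer"] for r in responses)
    answer, _ = counts.most_common(1)[0]
    return answer

users = ["device 208", "device 210", "device 212"]
requests = distribute("What amount of coal does the United States export?", users)
responses = [
    {"user": "device 208", "answer": "amount A"},   # placeholder answers, not real data
    {"user": "device 210", "answer": "amount A"},
    {"user": "device 212", "answer": "amount B"},
]
consensus = aggregate_by_majority(responses)        # -> "amount A"
```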
The supplemental information may then be generated based on the aggregated, appended, and/or compared additional information (e.g., as described in FIG. 9 below).\nIn some embodiments, the plurality of users may receive summary information about the event with the request for additional information (e.g., a video clip of a portion or segment of the media asset, a textual description, etc.), which may help the plurality of users provide additional information. For example, in some embodiments, the media application may, instead of (or in addition to) determining the context of an event, determine a particular portion of the event that would be needed for the plurality of users to provide additional information about the event.\nFor example, the media application may use progress information associated with the progress of the media asset (e.g., line 506 (FIG. 5)) to determine at what point during the progression of the media asset the event occurred, and in response, transmit a portion of the media asset beginning ten seconds before that point and ending ten seconds after that point. For example, if the event is a statement made by a character or person in a media asset, the media application may determine when the statement began (e.g., the point of progress of the media asset at which the statement began) and ended. The media application may then include the portion containing the entire statement (and the event) in the request for additional information sent to the plurality of users.\nThe selected portion may include any amount of summary information that the media application determines is necessary for the user or any one of the plurality of users to understand the main action sequence. This summary information (e.g., a portion of the media asset) may be included with the request for additional information (e.g., in a file transmitted with the request), or may be included with the generated supplemental information as a reference for the user. For example, the media application may select a segment of the play length of the media asset or a particular scene of the media asset, which includes the event, for display to the plurality of users along with the request for additional information.\nFor example, if an event (e.g., a statement) was in response to a question, the media application may also determine when the question began and ended, and send the entire question (or the play length of the media asset corresponding to the question) to the plurality of users as well. After determining the portion to provide to the plurality of users (e.g., a segment including the ten seconds before and the ten seconds after the event), the media application may provide the summary information of the event and any other material needed by the plurality of users to understand the event and/or the request for supplemental information from the user.\nIn some embodiments, a portion of the media asset containing the event, as selected by the media application, may also include any amount of the play length of the media asset, or any amount of scenes or segments from the media asset. In some embodiments, the portion may include segments of the play length of the media asset or scenes from the media asset that are not adjacent during the normal playback of the media asset. 
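A minimal, hypothetical sketch of this portion selection (the helper name, the ten-second padding default, and the timestamps below are illustrative assumptions) might return one or more (start, end) segments around the event, possibly including earlier, non-adjacent scenes:

    def select_portions(event_start, event_end, asset_length, padding=10.0, extra_scenes=None):
        # Return (start, end) segments, in seconds, to transmit with the request.
        # The main segment covers the event plus `padding` seconds on each side,
        # clamped to the play length of the asset; `extra_scenes` stands in for
        # earlier, non-adjacent scenes (e.g., a relevant plot point shown before).
        main = (max(0.0, event_start - padding), min(asset_length, event_end + padding))
        return [main] + list(extra_scenes or [])

    # Hypothetical usage: a statement running from 125 s to 133 s of a 3600 s asset,
    # plus an earlier scene that introduced the speaker.
    print(select_portions(125.0, 133.0, 3600.0, extra_scenes=[(42.0, 55.0)]))
    # -> [(115.0, 143.0), (42.0, 55.0)]
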
For example, in some embodiments, a portion of the media asset may include one or more sequences or scenes of interest to the plurality of users, even though the particular sequences or scenes are featured at different points in the play length of the media asset. The media application may determine the segments or scenes to include based on a content recognition file (e.g., data structure 500 (FIG. 5)) describing the media asset. For example, if a plot point or other information, which may be relevant to an event is displayed earlier in the media asset, the summary information may include a portion of the media asset displaying the plot point.\nIn some embodiments, the length of a portion may be determined based on the genre of the media asset. In some embodiments, the length of the portion may depend on a user profile for the user or for anyone of the plurality of users. For example, a user profile and/or a content recognition file (e.g., data structure 500 (FIG. 5)) may indicate that a particular user may require more or less additional content. For example, the user may be aware of particular characters or plot points in the media asset and, therefore, may not require the additional content to introduce those aspects.\nIn some embodiments, the plurality of users may receive a particular user interface, which organizes the data about the event (e.g., a clip of the actual event, summary information about the event, information about the request for supplemental information issued by the user, etc.). The interface may also include an automatic submission form, which may be used to generate a message, which is sent to the media application.\nIn some embodiments, the media application may also receive user input from the user requesting the supplemental information that further affects the generation of supplemental information by the media application. For example, the user may request the supplemental information includes particular information (e.g., the factual basis of a statement), may request a multimedia format of the supplemental information (e.g., textual description, a video clip, etc.), may request a form of the supplemental information (e.g., a short description about the event, an Internet link to other sources of information on the event, or a true or false designation about the event) by entering user inputs (e.g., via user input interface 310 (FIG. 3)).\nIt should be noted that any information or process referred to in this disclosure that is referred to as being in response to a user input may alternatively and/or additionally be performed automatically by the media application (e.g., via control circuitry 304 (FIG. 3)). For example, in some embodiments, a user may request a true or false designation (e.g., an on-screen pop-up box indicating whether an event was true or false). Additionally and/or alternatively, in some embodiments, the true or false designation may appear automatically based on predetermined settings indicating to the media application to display a true or false designation in response to detecting an event.\nIn some embodiments, an indicator that supplemental information has previously been generated or is currently ready to generate (e.g., a plurality of users are available) may be displayed to a user (e.g., on display 100 (FIG. 1) during the event). The indicator may also indicate the particular information, the multimedia format, and/or the form of supplemental information that is available. 
An indicator may also appear with the supplemental information (e.g., supplemental information 110 (FIG. 1)), which allows the user to request additional supplemental information or provide feedback/responses (e.g., rating the quality of the supplemental information) to the media application and/or plurality of users.\nIn some embodiments, a user may also access (e.g., via selection of an indicator and/or automatically upon the supplemental information being generated) summary information about the event. For example, in some embodiments (e.g., when the supplemental information is not generated in real-time), the media asset may have progressed to a different point by the time the supplemental information is ready for display. Therefore, the media application may need to provide a video clip of the event or other summary information, so that the user remembers about what or why the supplemental information was requested.\nFIG. 3 is a block diagram of an illustrative user equipment device in accordance with some embodiments of the disclosure. It should be noted that the components shown in FIG. 3 may be used to store, receive, transmit, and/or display the media assets, additional information, and/or supplemental information as described herein. For example, media application 206 (FIG. 2) may be implemented on user equipment device 300, and may issue instructions (e.g., displaying supplemental information 110 (FIG. 1)) via control circuitry 304.\nUsers may access media assets and the media application (and its display screens described above and below) from one or more of their user equipment devices. FIG. 3 shows a generalized embodiment of illustrative user equipment device 300. More specific implementations of user equipment devices are discussed below in connection with FIG. 4. User equipment device 300 may receive content and data via input/output (hereinafter “I/O”) path 302. I/O path 302 may provide content (e.g., broadcast programming, on-demand programming, Internet content, content available over a local area network (LAN) or wide area network (WAN), and/or other content) and data to control circuitry 304, which includes processing circuitry 306 and storage 308. Control circuitry 304 may be used to send and receive commands, requests, and other suitable data using I/O path 302. I/O path 302 may connect control circuitry 304 (and specifically processing circuitry 306) to one or more communications paths (described below). I/O functions may be provided by one or more of these communications paths, but are shown as a single path in FIG. 3 to avoid overcomplicating the drawing.\nControl circuitry 304 may be based on any suitable processing circuitry such as processing circuitry 306. As referred to herein, processing circuitry should be understood to mean circuitry based on one or more microprocessors, microcontrollers, digital signal processors, programmable logic devices, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), etc., and may include a multi-core processor (e.g., dual-core, quad-core, hexa-core, or any suitable number of cores) or supercomputer. 
", "answers": ["It uses a content-recognition module or algorithm."], "length": 5567, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "690d6407b1ef9f8d373370bbcb02f7ecf198379774101140"} {"input": "How are smartphones and tablets different from a technical perspective?", "context": "The future of mobile CPUs, part 1: Today’s fork in the road | Ars Technica\n2013 may be a big year for the evolution of smartphones and tablets.\nMobile computing's rise from niche market to the mainstream is among the most significant technological trends in our lifetimes. And to a large extent, it's been driven by the bounty of Moore’s Law—the rule that transistor density doubles every 24 months. Initially, most mobile devices relied on highly specialized hardware to meet stringent power and size budgets. But with so many transistors available, devices inevitably grew general-purpose capabilities. Most likely, that wasn't even the real motivation. The initial desire was probably to reduce costs by creating a more flexible software ecosystem with better re-use and faster time to market. As such, the first smartphones were very much a novelty, and it took many years before the world realized the potential of such devices. Apple played a major role by creating innovative smartphones that consumers craved and quickly adopted.\nTo some extent, this is where we still stand today. Smartphones are still (relatively) expensive and primarily interesting to the developed world. But over the next 10 years, this too will change. As Moore’s Law rolls on, the cost of a low-end smartphone will decline. At some point, the incremental cost will be quite minimal and many feature phones of today will be supplanted by smartphones. A $650 unsubsidized phone is well beyond the reach of most of the world compared to a $20 feature phone, but a $30 to $40 smartphone would naturally be very popular.\nIn this grand progression, 2013 will certainly be a significant milestone for mobile devices, smartphones and beyond. It's likely to be the first year in which tablets out-ship notebooks in the US. And in the coming years, this will lead to a confluence of high-end tablets and ultra-mobile notebooks as the world figures out how these devices co-exist, blend, hybridize, and/or merge.\nAgainst this backdrop, in this two-part series, we'll explore the major trends and evolution for mobile SoCs. More importantly, we'll look to where the major vendors are likely going in the next several years.\nTablet and phone divergence\nWhile phones and tablets are mobile devices that often share a great deal of software, it's becoming increasingly clear the two are very different products. These two markets have started to diverge and will continue doing so over time.\nFrom a technical perspective, smartphones are far more compact and power constrained. Smartphone SoCs are limited to around 1W, both by batteries and by thermal dissipation. The raison d’etre of a smartphone is connectivity, so a cellular modem is an absolute necessity. For the cost sensitive-models that make up the vast majority of the market, the modem is integrated into the SoC itself. High-end designs favor discrete modems with a greater power budget instead. The main smartphone OSes today are iOS and Android, though Windows is beginning to make an appearance (perhaps with Linux or BlackBerry on the horizon). Just as importantly, phone vendors like HTC must pass government certification and win the approval of carriers. 
There is very much a walled-garden aspect, where carriers control which devices can be attached to their networks, and in some cases devices can only be sold through a certain carrier. The business model places consumers quite far removed from the actual hardware.\nIn contrast, tablets are far more akin to the PC both technically and economically. The power budget for tablet SoCs is much greater, up to 4W for a passively cooled device and as high as 7-8W for systems with fans. This alone means there is a much wider range of tablet designs than smartphones. Moreover, the default connectivity for tablets is Wi-Fi rather than a cellular modem. The vast majority of tablets do not have cellular modems, and even fewer customers actually purchase a wireless data plan. As a result, cellular modems are almost always optional discrete components of the platform. The software ecosystem is relatively similar, with Microsoft, Apple, and Google OSes available. Because tablets eschew cellular modems, the time to market is faster, and they are much more commonly sold directly to consumers rather than through carriers. In terms of usage models, tablets are much more PC-like, with reasonable-sized screens that make games and media more attractive.\nLooking forward, these distinctions will likely become more pronounced. Many tablets today use high-end smartphone SoCs, but the difference in power targets and expected performance is quite large. As the markets grow in volume, SoCs will inevitably bifurcate to focus on one market or the other. Even today, Apple is doing so, with the A6 for phones and the larger A6X for tablets. Other vendors may need to wait a few years to have the requisite volume, but eventually the two markets will be clearly separate.\nHorizontal business model evolution\nAnother aspect of the mobile device market that is currently in flux and likely to change in the coming years is the business model for the chip and system vendors. Currently, Apple is the only company truly pursuing a vertically integrated model, where all phones and tablets are based on Apple’s own SoC designs and iOS. The tight integration between hardware and software has been a huge boon for Apple, and it has yielded superb products.\nSamsung is one of the few others companies that takes a vertically integrated approach to phones and tablets, although in truth its strategy seems to be ambivalent on that point. Unlike Apple, Samsung’s SoCs are readily available to third parties, and some Samsung devices, such as the S7562 Galaxy S Duos, use SoCs from competitors. More recently though, there has been a trend of Samsung devices using Samsung SoCs, at least for the premier products. For the moment, Samsung’s approach is best characterized as a hybrid, particularly as the company lacks a bespoke OS.\nThe rest of the major SoC vendors (e.g., Intel, Qualcomm, Nvidia, TI, Mediatek, etc.) have stayed pretty far away from actual mobile devices. These companies tend to focus on horizontal business models that avoid competing with customers or suppliers.\nIn the long term, mobile devices are likely to evolve similarly to the PC and favor a horizontal business model. The real advantage is one of flexibility; as costs drop and the market expands, it will be increasingly necessary for vendors like HTC to offer a wide range of phones based on radically different SoCs. 
While a vertically integrated company like Apple can focus and maintain leadership in a specific (and highly lucrative) niche, it would be very difficult to expand in many growing areas of the market. The differences between an iPhone 6 and a $20 feature phone are tremendous and would be very difficult for a single company to bridge.\nHowever, SoC vendors will attempt to reap the benefits of vertical integration by providing complete reference platforms to OEMs. Conceptually, this is a form of \"optional\" system integration, where the phone vendor or carrier can get the entire platform from the SoC supplier. This has the principal advantages of reducing time to market while also providing a baseline quality and experience for consumers. Currently, this approach has mostly been tested in emerging markets, but it's likely to become more common over time. There is a crucial distinction between reference platforms and vertical integration. Namely, OEMs can always choose to customize a platform to differentiate, and the SoC vendor avoids dealing with consumers directly. Typically, most of the customization is in terms of software on top of a base operating system.\nQuote:Moreover, that will make the transition to a 10nm node even more difficult, as the foundries will have to move from 20nm interconnects to 10nm interconnects and skip a generation.The advances in technology lately allowing components on such a small scale to even be envisioned, much less planned for, are truly amazing.\nOff topic: show\nI present the first generation 'non-technical' rock:\nI don't think your horizontal market development theory is supported by facts. Samsung and Apple are more vertically oriented than their competition, for starters. I know this article is narrowly focused on the hardware, but MS and Intel getting into hardware, Amazon getting into hardware, Google buying Moto, this is all vertical integration. How can you support the idea that this trend will be reversed with no real justification? I'm sure mobile chips will continue to specialize, but I don't think this means what you think it means. Automobile companies started making their own engines and with rare exceptions, never went back to being more horizontal. Same with retail and their store brands. Same with cloud companies and their servers. Same with mobile companies and their OSs. The horizontal market of PCs created by long-lasting standards and loose hegemony is the exception, not the norm.\nWhy wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?\nI'm not so sure about several things:1- Moore's law's relevance. Moore's Law is about ICs. ICs are not as big a part of mobile computers as they are of desktops, even of laptops: screens, batteries, radios are a huge part of tablets' and phones' costs, as opposed to the bare SoC + RAM.2- The tablet vs phone dichotomy. For some reason (probably price insensitivity due to subsidies), Phones have a tendency to be more powerful than Tablets, ie phone SoCs are more than good enough for tablets. Since the OS and peripherals are the same, it makes more sense to design and build just one type of SoC, and just disable the phone-modem part of it (even the other radios are still required: BT, Wifi, GPS...), same as Intel disable cache and cores for their entry-level CPUs. Once you're fabbing a SoC, it makes more sense to make more of the same than to setup a separate run of a cut-down SoC on an older process, unless volumes are huge. 
We might still be getting previous-generation, well amortized SoCs in cheaper tablets, though.3- On the contrary, I see a tablet and phone convergence (the ugly phablet). I'm patiently waiting for the new 6\"+ phones to replace my Nook Color and Galaxy Note 1 with a single device.4- The advantage of diversity ? Software is becoming ever more important than hardware. Multiplying SoCs means multiplying product development costs, making support and updates more difficult... Again, unless volumes are huge, OEMs are probaly better off going the way of the car industry and using modular \"platforms\" housed in different chassis with various screen sizes, keyboards, radios, digitizers...I'm wondering why the \"single device\" trend does not figure in your analysis. Is it stillborn ? Does it have no impact nor dependency on/with SoCs ?\nSamsung has its own bespoke OS: Bada and it is used on an extensive line of devices. I think there are numbers somewhere that it outsold Windows Phone 7 for a time.\ngypsumfantastic wrote:Why wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?First mover advantage.\nSoC? System on a Chip I guess?\nYou're way off on the Moore's Law/cost of smartphones point. The processors used in today's high-end smartphones are already cheap, around $25. And there are less expensive options if you want a lower end product. In fact, the hardware in the whole smartphone is relatively cheap. Analyst's estimate the Z10's materials cost around $160, the iPhone 5 around $140. They're using expensive glass and metals, then there's the battery, memory, etc. which means the processor is a small factor of the cost.And then there's the jump from $140 in materials to the unsubsidized costs. The reason these phones cost $650 is because of the high margins these companies are able to get and the high cost of hardware design and/or software development. But the point is that making the processors 4 times better/cheaper isn't going to change the economics of the smartphone. What will change the economics is commoditized designs and software and cheaper materials all around. Then you'll have a $50 smartphone that's decent.\nLast edited by ggeezz on Wed Feb 13, 2013 9:17 am\nbigterp wrote:SoC? System on a Chip I guess?Yup.\ngypsumfantastic wrote:Why wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?Money and momentum, the x86 market is a huge money maker for Intel so it is able to recoup its huge investments for advanced foundries.\nQuote:Currently, the only products using 3D integration are FPGAs from Xilinx,Doesn't Sony use it in the PS Vita? I thought I read somewhere that they had the CPU, main memory (2 dies) and video memory, so 4 dies in total, sitting on top of each other all on the same chip.\nrenoX wrote:gypsumfantastic wrote:Why wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?Money and momentum, the x86 market is a huge money maker for Intel so it is able to recoup its huge investments for advanced foundries.Exactly and I would clarify that it's all about margins, the difference between what it costs to make a chip and what it sells for. The margins for desktop and server processors is huge because a) the whole product is expensive so $200 to $1000 for the chip is acceptable, and b) Intel has huge advantages in that space and little competition.So Intel can afford to do the R&D to stay ahead of the curve and keep their position. 
When your smartphone chip sells for $25 you can't do the R&D to leapfrog a company that sells Xeons for $1000 and Core i5's for $200.\nI am happy to see Kanter here at Ars, I like his writing and he maintains Real World Tech, where Linus Torvalds often shows up to comment on CPU arch and other interesting topics.\nggeezz wrote:renoX wrote:gypsumfantastic wrote:Why wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?Money and momentum, the x86 market is a huge money maker for Intel so it is able to recoup its huge investments for advanced foundries.Exactly and I would clarify that it's all about margins, the difference between what it costs to make a chip and what it sells for. The margins for desktop and server processors is huge because a) the whole product is expensive so $200 to $1000 for the chip is acceptable, and b) Intel has huge advantages in that space and little competition.So Intel can afford to do the R&D to stay ahead of the curve and keep their position. When your smartphone chip sells for $25 you can't do the R&D to leapfrog a company that sells Xeons for $1000 and Core i5's for $200.Spot on.Intel are able to piggyback other development efforts off the highly lucrative mainstream x86 market which generates the huge sums of money to fund their amazing fab technology.The question for the future is how the economics will stack up when overall device costs fall significantly and there is a big downward pressure on SoC prices. In that situation, can Intel still justify bringing their A-game to a market where products are essentially commoditised and you have processors selling for a only a few dollars each?The lesson from their previous involvement in the DRAM market is that they probably won't want to be involved because there isn't enough money to be made to justify manufacturing phone SoCs on a cutting edge, or near cutting edge process. In that scenario, Intel may not totally abandon the market but they might just stick to manufacturing SoCs on nodes that are a step or two behind the state of the art.\nsolomonrex wrote:I don't think your horizontal market development theory is supported by facts. Samsung and Apple are more vertically oriented than their competition, for starters. I know this article is narrowly focused on the hardware, but MS and Intel getting into hardware, Amazon getting into hardware, Google buying Moto, this is all vertical integration. How can you support the idea that this trend will be reversed with no real justification? I'm sure mobile chips will continue to specialize, but I don't think this means what you think it means. Automobile companies started making their own engines and with rare exceptions, never went back to being more horizontal. Same with retail and their store brands. Same with cloud companies and their servers. Same with mobile companies and their OSs. The horizontal market of PCs created by long-lasting standards and loose hegemony is the exception, not the norm.Yea, each year Amazon, MS, Apple and Google look more and more the same.\nIntel cannot abandon the phone/tablet market. Desktop/laptop sales are stagnating/decreasing and phones/tablets are on the rise. This trend is only going to increase going forward.But you're right, they're going to have use their fabs that are a step or two behind the cutting the edge. 
But they're going to have to up their game in the tablet space to even be able to do that.\ngypsumfantastic wrote:Why wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?Intel's called Chipzilla for a reason up\nLagrange wrote:The question for the future is how the economics will stack up when overall device costs fall significantly and there is a big downward pressure on SoC prices. In that situation, can Intel still justify bringing their A-game to a market where products are essentially commoditised and you have processors selling for a only a few dollars each?The lesson from their previous involvement in the DRAM market is that they probably won't want to be involved because there isn't enough money to be made to justify manufacturing phone SoCs on a cutting edge, or near cutting edge process. In that scenario, Intel may not totally abandon the market but they might just stick to manufacturing SoCs on nodes that are a step or two behind the state of the art.I think the processing is a bigger advantage than many realize. If Intel can stay ahead in process design - which this article seems to indicate - they should have a major advantage. All else being equal a 14nm chip should be significantly faster and more efficient than the same chip at 22nm. Add in the fact that yields increase geometrically - you can fit a lot more 14nm chips on a given wafer size vs 22nm (or 32nm for the other manufacturers.) and you have a very appealing proposition. And then add in the fact that Intel actually has a pretty good graphics stack and IP. It's not a sure thing by any means, but I suspect ARM may have just prodded a sleeping giant.edit: Also worth noting, Intel, TSMC, and Samsung are the only manufacturers who are building out 450nm wafers. This will increase yields dramatically. Of course Samsung and TSMC will build ARM out, but it definitely puts quite a bit of pressure on all other manufacturers. As the article mentions Intel and Samsung are the only ones who control production top to bottom, and Samsung must share some of the benefits with ARM.\nAs someone who has worked in the semiconductor industry for longer than contemporary fanboys have existed, I'm getting a bit fed-up seeing many of these articles which rewrite history with analyses distorted by the lens of the contemporary Apple or Samsung fanboy.The mobile industry moved to horizontal integration a long time ago. Better indicators than these odd contemporary obsessions with the relatively non-innovative Samsung and Apple where when Motorola spun out its semiconductor division as Freescale, Nokia stopped making it's own custom designs with TI and ST, and Ericsson spun out it's Ericsson Mobile Platforms division and formed ST-Ericsson with ST.The true innovation in the mobile market was done a decade or more ago mainly by Moto/Nokia/Ericsson/TI/Qualcomm, and Samsung and Apple had little to do with it. Under the hood most stuff has been on a simple linear evolutionary path by the SoC providers since then. 
The phone manufacturers have then mainly been simply sticking these off-the-shelf SoCs (and their included low-level software stacks) in a box, a job made all the easier with the SoC manufacturer collaboration providing the bulk of the work for the AOSP.\nLast edited by paul5ra on Wed Feb 13, 2013 11:06 am\nintroiboad wrote:I am happy to see Kanter here at Ars, I like his writing and he maintains Real World Tech, where Linus Torvalds often shows up to comment on CPU arch and other interesting topics.Indeed. Most tech writing in this area is atrocious. This piece is one of the few well informed articles I've read in a long time.\nggeezz wrote:Intel cannot abandon the phone/tablet market. Desktop/laptop sales are stagnating/decreasing and phones/tablets are on the rise. This trend is only going to increase going forward.But you're right, they're going to have use their fabs that are a step or two behind the cutting the edge. But they're going to have to up their game in the tablet space to even be able to do that.The word you're looking for is Haswell, as far as I know.\nMabsark\nggeezz wrote:Intel cannot abandon the phone/tablet market. Desktop/laptop sales are stagnating/decreasing and phones/tablets are on the rise. This trend is only going to increase going forward.But you're right, they're going to have use their fabs that are a step or two behind the cutting the edge. But they're going to have to up their game in the tablet space to even be able to do that.Actually, that trend will not simply keep increasing going forward. The reason desktop/laptop sales are stagnating/decreasing is due to the fact that most people already have one and therefore don't need to buy another one. The exact same thing will happen with tablets as well. Sales are increasing now because people without tablets are buying them. When most people already own a tablet, they won't be buying a new one every year and therefore sales will stagnate/decrease.The PC market is saturated and in a couple of years, the tablet market will be saturated too. Basically, in order to increase sales in a saturated market, you need to increase the population growth or decrease the longevity of the product.\ngypsumfantastic wrote:Why wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?Probably a mix of a lot of things. One big thing was during this recession, Intel was the ONLY fab company that didn't scale back their R&D. That alone gave Intel a large advantage.Intel has almost always been ahead. One of the reasons could be that Intel works with much higher margins than many of the commodity companies like Samsung and TSMC.Outside of the P4 flop and some of the monopolistic abuses, Intel has typically been selling to high end customers that are willing to pay a premium for \"the best\".Intel has a large benefit of having a relatively \"good name\" when it comes to CPUs, so they can effectively charge a brand-name premium.I'm sure there are other reasons, and probably better reasons, but these are the main ones that I think of.\nMabsark wrote:Actually, that trend will not simply keep increasing going forward. The reason desktop/laptop sales are stagnating/decreasing is due to the fact that most people already have one and therefore don't need to buy another one. The exact same thing will happen with tablets as well. Sales are increasing now because people without tablets are buying them. 
When most people already own a tablet, they won't be buying a new one every year and therefore sales will stagnate/decrease.The PC market is saturated and in a couple of years, the tablet market will be saturated too. Basically, in order to increase sales in a saturated market, you need to increase the population growth or decrease the longevity of the product.That's true as long as most people are still buying both a tablet and a laptop when each needs to be replaced. I think the assumption is that, as you say, the tablet market will saturate, with people just replacing existing ones, but the desktop/laptop market could decrease much farther than that, if most people stop replacing them at all. I'm not sure of the likelihood of that, but I think that's where this idea comes from.\nggeezz wrote:Intel cannot abandon the phone/tablet market. Desktop/laptop sales are stagnating/decreasing and phones/tablets are on the rise. This trend is only going to increase going forward.But you're right, they're going to have use their fabs that are a step or two behind the cutting the edge. But they're going to have to up their game in the tablet space to even be able to do that.The upcoming Haswell chip is showing to consume 1/3 the power of IvyBridge at peak, consumes 1/20th the power at idle, all the while maintaining Identical or better performance.This chip should actually compete with ARM CPUs on both power/performance and idle.I am expecting a large war.\nApple once again is dictating the performance in the mobile industry. Nice to see others being able to keep the pace, as well.\npaul5ra wrote:As someone who has worked in the semiconductor industry for longer than contemporary fanboys have existed, I'm getting a bit fed-up seeing many of these articles which rewrite history with analyses distorted by the lens of the contemporary Apple or Samsung fanboy.The mobile industry moved to horizontal integration a long time ago. Better indicators than these odd contemporary obsessions with the relatively non-innovative Samsung and Apple where when Motorola spun out its semiconductor division as Freescale, Nokia stopped making it's own custom designs with TI and ST, and Ericsson spun out it's Ericsson Mobile Platforms division and formed ST-Ericsson with ST.The true innovation in the mobile market was done a decade or more ago mainly by Moto/Nokia/Ericsson/TI/Qualcomm, and Samsung and Apple had little to do with it. Under the hood most stuff has been on a simple evolutionary path by the SoC providers since then.Yeah, and most of the innovation in the automobile industry came about before Henry Ford came into the business. Doesn't change the fact that cars would probably have been an asterisk in the history books under \"toys for rich people\" if it weren't for him.The same applies to to mobile computing for Apple, Samsung, et al.\nSheldonRoss wrote:Lagrange wrote:The question for the future is how the economics will stack up when overall device costs fall significantly and there is a big downward pressure on SoC prices. In that situation, can Intel still justify bringing their A-game to a market where products are essentially commoditised and you have processors selling for a only a few dollars each?The lesson from their previous involvement in the DRAM market is that they probably won't want to be involved because there isn't enough money to be made to justify manufacturing phone SoCs on a cutting edge, or near cutting edge process. 
In that scenario, Intel may not totally abandon the market but they might just stick to manufacturing SoCs on nodes that are a step or two behind the state of the art.I think the processing is a bigger advantage than many realize. If Intel can stay ahead in process design - which this article seems to indicate - they should have a major advantage. All else being equal a 14nm chip should be significantly faster and more efficient than the same chip at 22nm. Add in the fact that yields increase geometrically - you can fit a lot more 14nm chips on a given wafer size vs 22nm (or 32nm for the other manufacturers.) and you have a very appealing proposition. And then add in the fact that Intel actually has a pretty good graphics stack and IP. My point was that Intel might have a one or two process advantage over the rest of the industry at the cutting edge but that doesn't mean that they can afford to manufacture on those processes for very low margin parts. If the SoC market becomes increasingly commoditised, there isn't going to be the money to justify making them in a state of the art fab.Remember that one of the big selling points of Itanium was that it would make use of process advantages that were effectively paid for by the mainstream x86 market. That didn't quite work out in practice and Itanium processors were often well behind Xeons in process technology.\npaul5ra wrote:As someone who has worked in the semiconductor industry for longer than contemporary fanboys have existed, I'm getting a bit fed-up seeing many of these articles which rewrite history with analyses distorted by the lens of the contemporary Apple or Samsung fanboy.The mobile industry moved to horizontal integration a long time ago. Better indicators than these odd contemporary obsessions with the relatively non-innovative Samsung and Apple where when Motorola spun out its semiconductor division as Freescale, Nokia stopped making it's own custom designs with TI and ST, and Ericsson spun out it's Ericsson Mobile Platforms division and formed ST-Ericsson with ST.The true innovation in the mobile market was done a decade or more ago mainly by Moto/Nokia/Ericsson/TI/Qualcomm, and Samsung and Apple had little to do with it. Under the hood most stuff has been on a simple linear evolutionary path by the SoC providers since then. The phone manufacturers have then mainly been simply sticking these off-the-shelf SoCs (and their included low-level software stacks) in a box, a job made all the easier with the SoC manufacturer collaboration providing the bulk of the work for the AOSP.Just goes to show that people who have worked in an industry for a long time don't always understand what that industry is doing.You haven't been working in it long enough to seem to know that it was Acorn and Apple that invented the mobile ARM CPU in the first place. All those companies you've mentioned have just been working off Acorn and Apple's pioneering work. Now, Apple is back at it again, very successfully, and all the companies you mentioned that produce chips with ARM IP in them are licensing them from the company Acorn and Apple formed—ARM.\nLast edited by melgross on Wed Feb 13, 2013 11:13 am\nMark Havel wrote:ggeezz wrote:Intel cannot abandon the phone/tablet market. Desktop/laptop sales are stagnating/decreasing and phones/tablets are on the rise. This trend is only going to increase going forward.But you're right, they're going to have use their fabs that are a step or two behind the cutting the edge. 
But they're going to have to up their game in the tablet space to even be able to do that.The word you're looking for is Haswell, as far as I know.If tablets move into the $100-200 range, is there going to be room for Haswell?So long as there is a higher-end tablet market, then Haswell will be able to shine, but it's going to be a much more powerful and costly part than the sort of ARM based hardware that often runs tablets. If we see a race to the bottom where price is the dominant motivator behind purchases, then a high performance SoC will struggle to make its mark.\nmelgross wrote:paul5ra wrote:As someone who has worked in the semiconductor industry for longer than contemporary fanboys have existed, I'm getting a bit fed-up seeing many of these articles which rewrite history with analyses distorted by the lens of the contemporary Apple or Samsung fanboy.The mobile industry moved to horizontal integration a long time ago. Better indicators than these odd contemporary obsessions with the relatively non-innovative Samsung and Apple where when Motorola spun out its semiconductor division as Freescale, Nokia stopped making it's own custom designs with TI and ST, and Ericsson spun out it's Ericsson Mobile Platforms division and formed ST-Ericsson with ST.The true innovation in the mobile market was done a decade or more ago mainly by Moto/Nokia/Ericsson/TI/Qualcomm, and Samsung and Apple had little to do with it. Under the hood most stuff has been on a simple linear evolutionary path by the SoC providers since then. The phone manufacturers have then mainly been simply sticking these off-the-shelf SoCs (and their included low-level software stacks) in a box, a job made all the easier with the SoC manufacturer collaboration providing the bulk of the work for the AOSP.Just goes to show that people who have worked in an industry for a long time don't always understand what that industry is doing.You haven't been working in it long enough to seem to know that it was Acorn and Apple that invented the mobile ARM CPU in the first place. All those companies you've mentioned have just been working off Acorn and Apple's pioneering work. Now, Apple is back at it again, very successfully, and all the companies you mentioned that produce chips with ARM IP in them are licensing them from the company Acorn and Apple formed—ARM.Of course I realise ARM IP has indeed been a major driving factor too (though only one if several architectures before ARM became dominant), though I see ARM's influence on the mobile industry as having nothing to do with modern day Apple and only one small piece of the puzzle. My point is that the hard electrical engineering, mathematics, DSP, semiconductor physics/chemistry, RF engineering, analogue design, CAD etc. that make modern telecommunications possible has very little to do with the fashion companies who consumers (and unfortunately much of the tech media) associate with it and give the credit (though in this respect Samsung does deserve a bit more credit for their work on NAND flash and displays). The industry simply would not exist TODAY without the overwhelming horizontal integration that already dominates.\nQuote:In the long term, mobile devices are likely to evolve similarly to the PC and favor a horizontal business model. The real advantage is one of flexibility; as costs drop and the market expands, it will be increasingly necessary for vendors like HTC to offer a wide range of phones based on radically different SoCs. 
You don't mention in the article that each SoC necessarily requires a bit of parallel dev work unlike the PC. In the PC world there is a standard BIOS and HW architecture that allows for pluggable designs. On a highly integrated SoC this is untrue. HTC suffers because it has to support radically different SoCs, their drivers and boot loaders, etc. Quote:While a vertically integrated company like Apple can focus and maintain leadership in a specific (and highly lucrative) niche, it would be very difficult to expand in many growing areas of the market. The differences between an iPhone 6 and a $20 feature phone are tremendous and would be very difficult for a single company to bridge.It's only difficult because Apple chooses to ignore that market, not because they can't. If they can release a $99 Apple TV, they can surely cobble together a $20 feature phone if they chose to eschew 8GB of NAND, BT, WiFi, a specialized dock connector, LTE, and their specialized processors. In other words, build the equivalent of an iPod shuffle with a horrible screen and no OS to speak of.\npaul5ra wrote:melgross wrote:paul5ra wrote:As someone who has worked in the semiconductor industry for longer than contemporary fanboys have existed, I'm getting a bit fed-up seeing many of these articles which rewrite history with analyses distorted by the lens of the contemporary Apple or Samsung fanboy.The mobile industry moved to horizontal integration a long time ago. Better indicators than these odd contemporary obsessions with the relatively non-innovative Samsung and Apple where when Motorola spun out its semiconductor division as Freescale, Nokia stopped making it's own custom designs with TI and ST, and Ericsson spun out it's Ericsson Mobile Platforms division and formed ST-Ericsson with ST.The true innovation in the mobile market was done a decade or more ago mainly by Moto/Nokia/Ericsson/TI/Qualcomm, and Samsung and Apple had little to do with it. Under the hood most stuff has been on a simple linear evolutionary path by the SoC providers since then. The phone manufacturers have then mainly been simply sticking these off-the-shelf SoCs (and their included low-level software stacks) in a box, a job made all the easier with the SoC manufacturer collaboration providing the bulk of the work for the AOSP.Just goes to show that people who have worked in an industry for a long time don't always understand what that industry is doing.You haven't been working in it long enough to seem to know that it was Acorn and Apple that invented the mobile ARM CPU in the first place. All those companies you've mentioned have just been working off Acorn and Apple's pioneering work. Now, Apple is back at it again, very successfully, and all the companies you mentioned that produce chips with ARM IP in them are licensing them from the company Acorn and Apple formed—ARM.Of course I realise ARM IP has indeed been a major driving factor too (though only one if several architectures before ARM became dominant), though I see ARM's influence on the mobile industry as having nothing to do with modern day Apple and only one piece of the puzzle. My point is that the hard electrical engineering, mathematics, DSP, semiconductor physics/chemistry, RF engineering, analogue design,etc. 
that make modern telecommunications possible has very little to do with the fashion companies who consumers (and unfortunately much of the tech media) associate with it and give the credit (though in this respect Samsung does deserve a bit more credit for their work on NAND flash and displays). The industry simply would not exist TODAY without the overwhelming horizontal integration that already dominates.Yes the efforts of these companies getting cellular communications standardized were immense. And the technology matured. And then they didn't do much with it. It took some youngin's to look at the problem fresh and add the UI that make today's smartphones work. As we have all seen, the moment your technology has matured is the moment you are screwed because someone else now has the opportunity to look at it as a black box and make something new. Each of those manufacturers knew that smartphones would eventually be awesome, but none of them had the UI and software design to make a truly breakout product. Imagine if Motorola would have been smart enough to buy the Android guys instead of Google. Instead, Google bought a bunch of patents on that cellular black box to try to defend it's platform.And when you think about it, which consumes more man years of engineering effort per year at this point.... iterating that cellular black box or developing the OS, services and apps that power today's smartphones?\nIntel had better decide that they are competing in this space \"for real\", or they are screwed. They've already let the Atom languish for five years, during which ARM has completely caught up in performance.Just like Tim Cook said, if you don't cannibalize your own markets someone else will do it for you.Whether Intel will embrace that concept in time remains to be seen. Personally, I hope they don't; if Intel transforms into a chipless Fab company (like TSMC) everyone benefits.\nI still think Samsung has the advantage long term because they have both the SOC and the memory products. As mentioned in the article, TSV's (Through Silicon Via's) are going to be quite a disruption. Today, people normally stack an LPDDR2 package on top of their SOC package (POP or Package On Package). Within the LPDDR2 package, you could have a stack of DRAM die typically with wire bonding connecting the die within the package.Once you more to TSV's, you can have a LOT more connections between the SOC and its DRAM's. While this is being standardized through JEDEC (http://www.jedec.org/category/technolog ... a/3d-ics-0), Samsung has all the pieces in house to do whatever they want. You could see a 512 bit or higher bus from the SOC to the memory. The trick is that the memory and the SOC need to line up with each other when you stack them. This gives Samsung an inherent advantage.This isn't just going to impact mobile either. Take a look at that JEDEC link. It also lists High Bandwidth Memory (HBM). This is a 1024 bit bus that provides 128GBytes/s to 256GBytes/s of bandwidth to a stack of up to 8 DRAM's. Here is your processor that includes 8-16 cores and 4GBytes of really, really, fast DRAM... No DIMMs required. How many of them do you want in your server rack?If I was Intel or Apple, I would be thinking seriously about making some investments in Micron to guarantee they make some compelling DRAM's to integrate with their SOC's and processors... 
otherwise Samsung is going to blow them out of the water on bandwidth.\nGreat_Scott wrote:Intel had better decide that they are competing in this space \"for real\", or they are screwed. They've already let the Atom languish for five years, during which ARM has completely caught up in performance.Just like Tim Cook said, if you don't cannibalize your own markets someone else will do it for you.Whether Intel will embrace that concept in time remains to be seen. Personally, I hope they don't; if Intel transforms into a chipless Fab company (like TSMC) everyone benefits.It's true that Atom has stood still for too long, but honestly it's pretty amazing how Atom is still competitive with current ARM chips. The Z2760 is even 32nm vs 28nm of the latest Krait and A15 chips.But that's all changing with Atom moving to the tick tock schedule this year. It wouldn't even surprise me to see Apple move to Intel chips for IOS.And I don't see how Intel moving to a chipless Fab company would help everyone. It certainly wouldn't help Intel.\nMabsark wrote:ggeezz wrote:Intel cannot abandon the phone/tablet market. Desktop/laptop sales are stagnating/decreasing and phones/tablets are on the rise. This trend is only going to increase going forward.But you're right, they're going to have use their fabs that are a step or two behind the cutting the edge. But they're going to have to up their game in the tablet space to even be able to do that.Actually, that trend will not simply keep increasing going forward. The reason desktop/laptop sales are stagnating/decreasing is due to the fact that most people already have one and therefore don't need to buy another one. The exact same thing will happen with tablets as well. Sales are increasing now because people without tablets are buying them. When most people already own a tablet, they won't be buying a new one every year and therefore sales will stagnate/decrease.The PC market is saturated and in a couple of years, the tablet market will be saturated too. Basically, in order to increase sales in a saturated market, you need to increase the population growth or decrease the longevity of the product.Yes and no. I'm not sure the tablet market will saturate in a \"couple of years.\" It may be more like 5 years. But that's a quibble.Here's the real issue. Right now Apple wants you to own an iPhone AND iPad AND Macbook AND iWatch AND Apple TV. Microsoft, OTOH, is making the Surface so that you could ditch your laptop and just use a Surface. Not everyone, but some people.If 5 years from now, we're in a world where a significant number of people use a Surface-type device instead of a laptop, then the PC market is going to contract significantly. Maybe some of the tablet-like devices will use moderately expensive Intel chips, but some of them are going to use cheaper chips.\nGravyGraphics wrote:I still think Samsung has the advantage long term because they have both the SOC and the memory products. As mentioned in the article, TSV's (Through Silicon Via's) are going to be quite a disruption. Today, people normally stack an LPDDR2 package on top of their SOC package (POP or Package On Package). Within the LPDDR2 package, you could have a stack of DRAM die typically with wire bonding connecting the die within the package.Once you more to TSV's, you can have a LOT more connections between the SOC and its DRAM's. While this is being standardized through JEDEC (http://www.jedec.org/category/technolog ... a/3d-ics-0), Samsung has all the pieces in house to do whatever they want. 
You could see a 512 bit or higher bus from the SOC to the memory. The trick is that the memory and the SOC need to line up with each other when you stack them. This gives Samsung an inherent advantage.This isn't just going to impact mobile either. Take a look at that JEDEC link. It also lists High Bandwidth Memory (HBM). This is a 1024 bit bus that provides 128GBytes/s to 256GBytes/s of bandwidth to a stack of up to 8 DRAM's. Here is your processor that includes 8-16 cores and 4GBytes of really, really, fast DRAM... No DIMMs required. How many of them do you want in your server rack?If I was Intel or Apple, I would be thinking seriously about making some investments in Micron to guarantee they make some compelling DRAM's to integrate with their SOC's and processors... otherwise Samsung is going to blow them out of the water on bandwidth.Why not AMD? Last I checked they still made memory...and processors/GPUs.", "answers": ["Smartphones are more compact and power constrained."], "length": 7568, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "7d357dddfddb09bf71c6ce3d2c6780b3380d725f534d0b91"} {"input": "What is the security parameter for the AES-256 block cipher?", "context": "\\section{Introduction\\label{sct::intro}}\nSymmetric, public-key (asymmetric) and hash-based cryptography constitute a fundamental pillar of modern cryptography. \nSymmetric cryptography includes symmetric-key encryption, where a shared secret key is used for both encryption and decryption. Cryptographic hash functions map arbitrarily long strings to strings of a fixed finite length. Currently deployed public-key schemes are\nused to establish a common secret key between two remote parties. They are based on factoring large numbers or solving the discrete logarithm problem over a finite group. For more details about modern cryptography the interested reader can consult one of the many excellent references on the topic, e.g.~\\cite{Katz:2007:IMC:1206501}.\n\nIn contrast to asymmetric schemes based on factoring or solving the discrete logarithm problem and which are completely broken by a quantum adversary via Shor's algorithm~\\cite{SJC.26.1484}, symmetric schemes and hash functions are less vulnerable to quantum attacks. The best known quantum attacks against them are based on Grover's quantum search algorithm~\\cite{PhysRevLett.79.325}, which offers a quadratic speedup compared to classical brute force searching. Given a search space of size $N$, Grover's algorithm finds, with high probability, an element $x$ for which a certain property such as $f(x)=1$ holds, for some function $f$ we know how to evaluate (assuming such a solution exists). The algorithm evaluates $f$ a total of $\\mathcal{O}(\\sqrt{N})$ times. It applies a simple operation in between the evaluations of $f$, so the $\\mathcal{O}(\\sqrt{N})$ evaluations of $f$ account for most of the complexity. In contrast, any classical algorithm that evaluates $f$ in a similar ``black-box'' way requires on the order of $N$ evaluations of $f$ to find such an element.\n\nAny quantum algorithm can be mapped to a quantum circuit, which can be implemented on a quantum computer. The quantum circuit represents what we call the ``logical layer\". Such a circuit can always be decomposed in a sequence of ``elementary \ngates\", such as Clifford gates (CNOT, Hadamard etc.~\\cite{NC00}) augmented by a non-Clifford gate such as the T gate.\n\nRunning a logical circuit on a full fault-tolerant quantum computer is highly non-trivial. 
The sequence of logical gates have to be mapped to \nsequences of surface code measurement cycles (see e.g.~\\cite{PhysRevA.86.032324} for extensive details). By far, the most resource-consuming (in \nterms of number of qubits required and time) is the T gate\\footnote{Clifford gates are ``cheap\", i.e. they require relatively small overhead for implementation in the surface code, but are not universals, hence a non-Clifford gate is required. One such gate is the T gate. There are other possible choices, however all of the non-Clifford gates require special techniques such as magic state distillation~\\cite{1367-2630-14-12-123011,PhysRevA.86.052329} and significant overhead (order of magnitudes higher than Clifford gates) to be implemented in the surface code. In fact, to a first order approximation, for the purpose of resource estimation, one can simply ignore the overhead introduced by the Clifford gates and simply focus only on the T gates.}. \nIn comparison with surface code defects and braiding techniques~\\cite{PhysRevA.86.032324}, novel lattice surgery \ntechniques~\\cite{2018arXiv180806709F,1808.02892,1367-2630-14-12-123011} reduce the spatial overhead required for implementing T gates via magic state distillation by approximately a factor of 5, while also modestly improving the running time. \n\nIn this paper we first analyze the security of symmetric schemes and hash functions against large-scale fault-tolerant quantum adversaries, using surface code defects and braiding techniques. We take into account the time-space trade-offs with parallelizing quantum search, down to the fault-tolerant layer. Naively, one might hope that $K$ quantum computers (or quantum ``processors'', as we will call them later in the paper) running in parallel reduce the number the circuit depth down to $\\mathcal{O}(\\sqrt{N})/K$ steps, similar to the classical case of distributing a search space across $K$ classical processors. However quantum searching does not parallelize so well, and the required number of steps\nfor parallel quantum searching is of the order $\\mathcal{O}(\\sqrt{N/K})$~\\cite{quantph.9711070}. This is a factor of $\\sqrt{K}$ larger than $\\mathcal{O}(\\sqrt{N})/K$ . As shown in~\\cite{quantph.9711070}, the optimal way of doing parallel quantum search is to partition the search space into $N/K$ parts, and to perform independent quantum searches on each part.\n\nSecondly, we investigate the security of public-key cryptographic schemes such as RSA and ECC against \nquantum attacks, using the latest developments in theory of fault-tolerant quantum error correction, i.e. novel lattice surgery \ntechniques~\\cite{2018arXiv180806709F,1808.02892,1367-2630-14-12-123011}.\n\nThe remainder of this paper is organized as follows. In Sec.~\\ref{sct::method}, we provide an overview of the methodology used in our analysis. In Sec.~\\ref{sct::ciphers} we investigate the security of the AES family of modern symmetric ciphers. In Sec.~\\ref{sct::hash} we analyze the security of the SHA family of hash functions. In Sec.~\\ref{sct::bitcoin} we investigate the security of Bitcoin's~\\cite{satoshi:bitcoin} proof-of-work consensus mechanism. We conclude our investigation of symmetric and hash-based cryptographic schemes in Sec.~\\ref{sct::intrinsic_parallel_grover}, where we evaluate the intrinsic cost of running the Grover algorithm with a trivial oracle (i.e., an oracle with a unit cost of 1 for each invocation).\n\nIn the subsequent sections we analyze public-key cryptographic schemes. 
In Sec.~\\ref{sct::rsa} and Sec.~\\ref{sct::ecc} we examine the most common public-key establishment schemes, such as RSA and ECC, respectively. Finally, we summarize our findings and conclude in Sec.~\\ref{sct::conclusion}.\n\\section{Methodology\\label{sct::method}}\n\n\\subsection{Symmetric cryptography and hash functions\\label{sct::symmetric}}\nThe methodology, sketched in Fig.~\\ref{fgr:flowchart_lite} and Fig.~\\ref{fgr:full_algorithm}, follows the same lines as the one described in great detail in our earlier paper~\\cite{10.1007/978-3-319-69453-5_18}, which we refer the interested reader to for more details.\n\\begin{figure}[htb]\n\t\\centering\n \\includegraphics[width=0.35\\textwidth]{figures/flowchart_lite.pdf}\n \\caption{Analyzing an attack against a symmetric cryptographic function with a fault-tolerant quantum adversary. Our resource estimation methodology takes into account several of the layers between the high level description of an algorithm and the physical hardware required for its execution. Our approach is modular: should assumptions about any of these layers change, it allows one to calculate the impact of improvements in any particular layer.}\n \\label{fgr:flowchart_lite}\n\\end{figure}\n\\begin{figure}\n\t\\centering\n\t \\includegraphics[width=0.46\\textwidth]{figures/grover_vertical.pdf}\n \\caption{Grover searching with an oracle for $f : \\{0,1\\}^k \\rightarrow \\{0,1\\}^k$. The algorithm makes $\\lfloor \\frac{\\pi}{4} 2^{N/2}\\rfloor$ calls to\n$G$, the \\emph{Grover iteration}, or, if parallelized on $K$ processors, $\\lfloor \\frac{\\pi}{4} 2^{N/(2K)}\\rfloor$ calls to $G$. The Grover iteration has two\nsubroutines. The first, $U_g$, implements the predicate $g : \\{0,1\\}^k\n\\rightarrow \\{0,1\\}$ that maps $x$ to $1$ if and only if $f(x) = y$. Each call to $U_g$ involves two calls to a reversible implementation of $f$ and one call to a comparison circuit that checks whether $f(x) = y$.}\n \\label{fgr:full_algorithm}\n\\end{figure}\n\nWe assume a surface-code based fault-tolerant architecture~\\cite{PhysRevA.86.032324}, using Reed-Muller distillation schemes~\\cite{Fowler:2013aa}. For each scheme we vary the possible physical error rates per gate from $10^{-4}$ to $10^{-7}$. We believe that this range of physical error rates is wide enough to cover both first generation quantum computers as well as more advanced future machines.\nIn comparison to surface code defects and braiding methods~\\cite{PhysRevA.86.032324}, lattice surgery \ntechniques~\\cite{2018arXiv180806709F,1808.02892,1367-2630-14-12-123011} mostly impact the physical footprint of the fault-tolerant layer required to run a specific quantum algorithm, reducing the distillation overhead by approximately a factor of 5. The temporal overhead (i.e. the number of surface code cycles) is reduced less drastically. For this reason, lattice surgery has less significant effects in estimating the security of symmetric schemes or hash functions, reducing the security parameter\\footnote{The security parameter is defined as the logarithm base two of the number of fundamental operations (in our case surface code cycles) required to break the scheme.} by at most 1 and decreasing the spatial overhead by at most a factor of 5. 
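To make the footnote's definition concrete, a one-line illustration (ours, not part of the cited analyses): if breaking a scheme requires $C$ surface code cycles in total, the security parameter is $qs = \log_2 C$, so a temporal saving by a factor $r$ changes it by only
\begin{equation*}
\Delta qs = \log_2 C - \log_2 (C/r) = \log_2 r \le 1 \quad \text{for } r \le 2,
\end{equation*}
while a factor-of-5 reduction in physical qubits does not enter the cycle count, and hence does not directly change $qs$ at all.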
Therefore when estimating the security of symmetric and hash-based cryptographic schemes we use surface code defects and braiding techniques.\n\nFor each cryptographic primitive, we display four plots, in the following order:\n\\begin{enumerate}\n\\item We plot the total number of surface code cycles per CPU (where a CPU is a quantum computer capable of executing a single instance of Grover's quantum search algorithm) as a function of the number of CPUs. We directly tie the quantum security parameter to the total number of surface code cycles (see~\\cite{10.1007/978-3-319-69453-5_18} for more details). We also add to the plot the theoretical lower bound achievable by quantum search in the cases of: a) considering the oracle a black box of unit cost (lower line), and b) considering the oracle as composed of ideal quantum gates, each of unit cost (upper line). Note that the difference between b) and a) represents the intrinsic cost of logical overhead (i.e. the overhead introduced by treating the oracle as a logical circuit and not a blackbox), whereas the difference between the upper lines and b) represents the intrinsic cost introduced by the fault-tolerant layer.\n\n\\item We plot the total wall-time per CPU (i.e. how long will the whole computation take on a parallel quantum architecture) as a function of the number of CPUs. The horizontal dashed line represents the one-year time line, i.e. the $x$ coordinate of the intersection point between the ``Total time per CPU'' line and the one-year time line provides the number of processors required to break the system within one year (in $\\log_2$ units).\n\n\\item We plot the total physical footprint (number of qubits) per CPU, as a function of the number of CPUs.\n\\item Finally we plot the total physical footprint (number of qubits) of all quantum search machines (CPUs) running in parallel.\n\\end{enumerate}\n\nIn the following sections we proceed to analyze symmetric ciphers (AES, Sec.~\\ref{sct::ciphers}), hash functions (SHA-256, SHA3-256, Sec.~\\ref{sct::hash}, Bitcoin's hash function, Sec.~\\ref{sct::bitcoin}), and finally the minimal resources required for running Grover's algorithm with a trivial oracle~\\ref{sct::intrinsic_parallel_grover} (e.g. the identity gate) on search spaces of various sizes.\n\nNote that in some ranges of the plots from sections~\\ref{sct::ciphers},~\\ref{sct::hash},~\\ref{sct::intrinsic_parallel_grover} and~\\ref{sct::bitcoin} the total physical footprint increases slightly with the number of processors, which may seem counter-intuitive. This happens due to the fact that with more processors the required code distances decrease, and in some instances one can pipeline more magic states factories in parallel into the surface code, which in effect causes an increase in the overall physical footprint. Note that the total time per CPU is monotonically decreasing, as parallelizing distilleries does not increase the wall time. For more details see~\\cite{10.1007/978-3-319-69453-5_18}. \n\n\\subsection{Public-key cryptography\\label{sct::pk}}\n\nMost of the recent progress in quantum cryptanalysis is related to the fault-tolerant layer in Fig.~\\ref{fgr:flowchart_lite}. 
New methods and techniques\nbased on surface code lattice surgery~\\cite{2018arXiv180806709F,1808.02892,1367-2630-14-12-123011} allow a significant decrease of the overall \nfootprint (number of qubits, or space) taken by the quantum computation, and also a relatively modest decrease in time, in comparison with methods based on surface code defects and braiding~\\cite{PhysRevA.86.032324,Fowler:2013aa}.\n\nWe consider the best up-to-date optimized logical quantum circuits for attacking RSA and ECC public-key \nschemes~\\cite{1706.06752,PhysRevA.52.3457,cuccaro04,Beauregard:2003:CSA:2011517.2011525} then perform a physical footprint resource estimation\nanalysis using lattice surgery techniques. We remark that the overall time required to run the algorithm depends on the level of parallelization \nfor the magic state factories\\footnote{Every T gate in the circuit must be implemented by a specialized magic state factory, each of which occupies a \nsignificant physical footprint. One can implement more magic states in parallel if one is willing to increase the physical footprint of the computation.}. \n\nFor each public-key cryptogrpric scheme, we analyze the space/time tradeoffs and plot the results on a double logarithmic scale. We fit the data using a third degree \npolynomial\\footnote{A third degree polynomial fits the data very precisely, providing a coefficient of determination $R^2$ greater than 0.997.} and obtain an analytical closed-form formula for the relation between the time and the number of qubits required to attack the scheme, in \nthe form\n\n\\begin{equation}\\label{eqn1}\ny(x) = \\alpha x^3 + \\beta x^2 + \\gamma x + \\delta,\n\\end{equation}\nwhere $y$ represents logarithm base 2 of the number of qubits and $x$ represents the logarithm base 2 of the time (in seconds). For example,\nthe quantity \n\\begin{equation}\\label{eqn2}\ny\\left(\\log_2(24\\times 3600)\\right) \\approx y(16.3987)\n\\end{equation}\nrepresents how many qubits are required to break the scheme in one day (24 hours) for a fixed physical error rate per gate $p_g$, assuming a \nsurface code cycle time of 200ns. Note that the computation time scales linearly with the surface code cycle time, e.g. a 1000ns surface code cycle \ntime will result in a computation that is 5 times longer than a $200ns$ surface code cycle time. Therefore, for a specific cryptographic scheme for \nwhich we plotted the space/time tradeoffs using a surface code cycle time of $200ns$ and a fixed physical error rate per gate $p_g$, the number of \nqubits required to break a specific scheme in a time $t$ using an alternative surface code cycle time $t_c$ is given by\n\n\\begin{equation}\\label{eqn3}\ny\\left(\\log_2\\left(\\frac{200ns}{t_c}t\\right)\\right),\n\\end{equation}\nwhere $t$ is expressed in seconds and $t_c$ is expressed in nanoseconds.\n\nWe assume a surface code cycle time of 200ns, in conformance with~\\cite{PhysRevA.86.032324}. For each scheme we analyze, we compare its security using the more conservative (and realistic in the short term) $p_g=10^{-3}$ and also the more optimistic $p_g=10^{-5}$. 
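To illustrate how Eqs.~(\ref{eqn1})--(\ref{eqn3}) are used in the following sections, a minimal Python sketch is given below. The coefficients are placeholders (the fitted values depend on the scheme and on $p_g$ and are not reproduced here); the sketch only shows the mechanics of evaluating and rescaling a fit.
\begin{verbatim}
import math

# Placeholder coefficients of y(x) = a*x**3 + b*x**2 + c*x + d, where
# y = log2(physical qubits) and x = log2(time in seconds); illustrative only.
a, b, c, d = -0.01, 0.5, -8.0, 60.0

def log2_qubits(time_s, cycle_ns=200.0):
    # Eq. (3): rescale the 200 ns fit to another surface code cycle time.
    x = math.log2((200.0 / cycle_ns) * time_s)
    return a * x**3 + b * x**2 + c * x + d

one_day = 24 * 3600                       # x = log2(86400) ~ 16.3987, as in Eq. (2)
print(2 ** log2_qubits(one_day))          # physical qubits for a one-day attack
print(2 ** log2_qubits(one_day, 1000.0))  # same attack with a 1000 ns cycle time
\end{verbatim}
The point of the closed form is precisely this kind of what-if evaluation: any target wall time or surface code cycle time can be plugged in without redoing the full resource estimation.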
Note that assuming the more optimistic assumption from a quantum computing perspective is the more conservative assumption from a cybersecurity perspective.\n\nFurthermore, in this analysis, we are reporting the full physical footprint, including the memory required for magic state distillation.\nUsing present-day techniques, the memory required for generating these generic input states accounts for a substantial fraction of the total memory cost and thus we are including these in the total cost estimate and will track the impact of improved methods.\n\n\\section{Symmetric ciphers\\label{sct::ciphers}}\nBelow we analyze the security of AES family of symmetric ciphers against large-scale fault-tolerant quantum adversaries. We used the highly optimized logical circuits produced in\n\\cite{10.1007/978-3-319-29360-8_3}. \n\n\\subsection{AES-128}\n\n \\includegraphics[width=0.429\\textwidth]{figures/AES-128_cycles.pdf}\n \t\\captionof{figure}{AES-128 block cipher. Required surface clock cycles per processor, as a function of the number of processors ($\\log_2$ scale). The bottom brown line (theoretical lower bound, black box) represents the minimal number of queries required\n\tby Grover's algorithm, the cost function being the total number of queries to a black-box oracle, each query assumed to have unit cost, and a completely error-free circuit. The purple line (ideal grover, non-black-box) takes into consideration the structure of the oracle, the cost function being the total number of gates in the circuit, each gate having unit cost; the quantum circuit is assumed error-free as well. Both brown and magenta lines are displayed only for comparisons; for both of them, the $y$ axis should be interpreted as number of logical queries (operations, respectively).\t\nThe curves above the purple line show the overhead introduced by fault tolerance (in terms of required surface code cycles, each surface code cycle assumed to have unit cost). More optimization at the logical layer will shift the purple line down, whereas more optimization at the fault-tolerant layer will move the upper curves closer to the purple line. Similar remarks to the above hold for the remaining plots in this manuscript.}\n \t\\label{fgr:aes_128_cycles}\n\t\n\tFor example, the plots in Fig.~\\ref{fgr:aes_128_cycles} tells us that if we have $2^{50}$ quantum computers running Grover's algorithm in parallel, with no physical errors, then it would take about $2^{63}$ gate calls (where the purple line intersects the vertical line at $50$), where we assume each gate to have unit cost. Still with no errors, a trivial cost for implementing the cryptographic function (oracle) would bring the cost down to about $2^{38}$ oracle calls per quantum computer. Keeping the actual function implementation, but adding the fault-tolerant layer with a physical error rate of $10^{-7}$ (with appropriate assumptions and using state-of-the-art quantum error correction) pushes the cost up to around $2^{76}$ surface code cycles per quantum computer (where now each code cycle is assumed to have unit cost). Similar remarks hold for the remaining plots in this manuscript.\n \\includegraphics[width=0.429\\textwidth]{figures/AES-128_time.pdf}\n \t\\captionof{figure}{AES-128 block cipher. Required time per processor, as a function of the number of processors ($\\log_2$ scale). The horizontal dotted line indicates one year. The $x$-axis is deliberately extended to show the necessary number of CPUs for a total time of one year. 
Thus the figure shows that it would take, with the stated assumptions, over $2^{80}$ parallel quantum searches to break AES-128 in a year. Similar remarks to the above hold for the remaining plots in this manuscript.}\n \t\\label{fgr:aes_128_time}\n \\includegraphics[width=0.429\\textwidth]{figures/AES-128_phys.pdf}\n\t\\captionof{figure}{AES-128 block cipher. Physical footprint (physical qubits) per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:aes_128_phys}\n \\includegraphics[width=0.429\\textwidth]{figures/AES-128_phys_total.pdf}\n\t\\captionof{figure}{AES-128 block cipher. Total physical footprint (physical qubits), as a function of the number of processors ($\\log_2$ scale). Note that the qubits are not correlated across processors.}\n \t\\label{fgr:aes_128_phys_total}\n\n\\subsection{AES-192}\n\n\t\n \\includegraphics[width=0.429\\textwidth]{figures/AES-192_cycles.pdf}\n \t\\captionof{figure}{AES-192 block cipher. Required surface clock cycles per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:aes_192_cycles}\n\t\n \\includegraphics[width=0.429\\textwidth]{figures/AES-192_time.pdf}\n \t\\captionof{figure}{AES-192 block cipher. Required time per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:aes_192_time}\n\t\n \\includegraphics[width=0.429\\textwidth]{figures/AES-192_phys.pdf}\n\t\\captionof{figure}{AES-192 block cipher. Physical footprint (physical qubits) per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:aes_192_phys}\n\t\n \\includegraphics[width=0.429\\textwidth]{figures/AES-192_phys_total.pdf}\n\t\\captionof{figure}{AES-192 block cipher. Total physical footprint (physical qubits), as a function of the number of processors ($\\log_2$ scale). Note that the qubits are not correlated across processors.}\n \t\\label{fgr:aes_192_phys_total}\n\n\n\\subsection{AES-256}\n\n\t\n \\includegraphics[width=0.429\\textwidth]{figures/AES-256_cycles.pdf}\n \t\\captionof{figure}{AES-256 block cipher. Required surface clock cycles per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:aes_256_cycles}\n\t\n \\includegraphics[width=0.429\\textwidth]{figures/AES-256_time.pdf}\n \t\\captionof{figure}{AES-256 block cipher. Required time per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:aes_256_time}\n\t\n \\includegraphics[width=0.429\\textwidth]{figures/AES-256_phys.pdf}\n\t\\captionof{figure}{AES-256 block cipher. Physical footprint (physical qubits) per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:aes_256_phys}\n\t\n \\includegraphics[width=0.429\\textwidth]{figures/AES-256_phys_total.pdf}\n\t\\captionof{figure}{AES-256 block cipher. Total physical footprint (physical qubits), as a function of the number of processors ($\\log_2$ scale). Note that the qubits are not correlated across processors.}\n \t\\label{fgr:aes_256_phys_total}\n\n\\section{Hash functions\\label{sct::hash}}\nIn this section we study the effect of parallelized Grover attacks on the SHA-256~\\cite{SHA2} snd SHA3-256~\\cite{SHA3} family of hash functions. We used the highly optimized logical circuits produced in~\\cite{10.1007/978-3-319-69453-5_18}.\n\n\\subsection{SHA-256}\n\n \\includegraphics[width=0.429\\textwidth]{figures/SHA-256_cycles.pdf}\n \t\\captionof{figure}{SHA-256 cryptographic hash function. 
Required surface clock cycles per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:sha_256_cycles}\n \\includegraphics[width=0.429\\textwidth]{figures/SHA-256_time.pdf}\n \t\\captionof{figure}{SHA-256 cryptographic hash function. Required time per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:sha_256_time}\n \\includegraphics[width=0.429\\textwidth]{figures/SHA-256_phys.pdf}\n\t\\captionof{figure}{SHA-256 cryptographic hash function. Physical footprint (physical qubits) per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:sha_256_phys}\n \\includegraphics[width=0.429\\textwidth]{figures/SHA-256_phys_total.pdf}\n\t\\captionof{figure}{SHA-256 cryptographic hash function. Total physical footprint (physical qubits), as a function of the number of processors ($\\log_2$ scale). Note that the qubits are not correlated across processors.}\n \t\\label{fgr:sha_256_phys_total}\n\n\n\\subsection{SHA3-256}\n\n \\includegraphics[width=0.429\\textwidth]{figures/SHA3-256_cycles.pdf}\n \t\\captionof{figure}{SHA3-256 cryptographic hash function. Required surface clock cycles per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:sha3_256_cycles}\n \\includegraphics[width=0.429\\textwidth]{figures/SHA3-256_time.pdf}\n \t\\captionof{figure}{SHA3-256 cryptographic hash function. Required time per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:sha3_256_time}\n \\includegraphics[width=0.429\\textwidth]{figures/SHA3-256_phys.pdf}\n\t\\captionof{figure}{SHA3-256 cryptographic hash function. Physical footprint (physical qubits) per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:sha3_256_phys}\n \\includegraphics[width=0.429\\textwidth]{figures/SHA3-256_phys_total.pdf}\n\t\\captionof{figure}{SHA3-256 cryptographic hash function. Total physical footprint (physical qubits), as a function of the number of processors ($\\log_2$ scale). Note that the qubits are not correlated across processors.}\n \t\\label{fgr:sha3_256_phys_total}\n\\section{Bitcoin~\\label{sct::bitcoin}}\nIn this section we analyze the security of Bitcoin's~\\cite{satoshi:bitcoin} proof-of-work protocol, which is based on finding a hash\\footnote{The hash function being used by the protocol is H($x$) := SHA-256(SHA-256($x$).} pre-image which that starts\nwith a certain number of zeros. The latter is dynamically adjusted by the protocol so that the problem is on average solved by\nthe whole network in 10 minutes. Currently, it takes around $2^{75}$ classical hashing operations~\\cite{btc_difficulty} for finding a desired hash pre-image successfully via brute-force search with specialized hardware.\n \\includegraphics[width=0.429\\textwidth]{figures/SHA-256-Bitcoin_cycles.pdf}\n \t\\captionof{figure}{Bitcoin's cryptographic hash function H($x$) := SHA-256(SHA-256($x$)). Required surface clock cycles per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:sha_256_bitcoin_cycles}\n \\includegraphics[width=0.429\\textwidth]{figures/SHA-256-Bitcoin_time.pdf}\n \t\\captionof{figure}{Bitcoin's cryptographic hash function H($x$) := SHA-256(SHA-256($x$)). 
Required time per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:sha_256_bitcoin_time}\n \\includegraphics[width=0.429\\textwidth]{figures/SHA-256-Bitcoin_phys.pdf}\n\t\\captionof{figure}{Bitcoin's cryptographic hash function H($x$) := SHA-256(SHA-256($x$)). Physical footprint (physical qubits) per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:sha_256_bitcoin_phys}\n \\includegraphics[width=0.429\\textwidth]{figures/SHA-256-Bitcoin_phys_total.pdf}\n\t\\captionof{figure}{Bitcoin's cryptographic hash function H($x$) := SHA-256(SHA-256($x$)). Total physical footprint (physical qubits), as a function of the number of processors ($\\log_2$ scale). Note that the qubits are not correlated across processors.}\n \t\\label{fgr:sha_256_bitcoin_phys_total}\n\n\n\\section{Intrinsic cost of parallelized Grover's algorithm\\label{sct::intrinsic_parallel_grover}}\n\nMore efficient quantum implementations of AES and SHA imply more efficient cryptanalysis. In this section, we aim to bound how much further optimized implementations of these cryptographic functions could help. We do so by assuming a trivial cost of $1$ for each function evaluation.\n\n\\subsection{Searching space of size $2^{56}$}\n\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover56bits_cycles.pdf}\n \t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{56}$. Required surface clock cycles per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:minimal_grover_56_cycles}\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover56bits_time.pdf}\n \t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{56}$. Required time per processor, as a function of the number of processors ($\\log_2$ scale). The dotted horizontal line indicates one year. }\n \t\\label{fgr:minimal_grover_56_time}\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover56bits_phys.pdf}\n\t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{56}$. Physical footprint (physical qubits) per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:minimal_grover_56_phys}\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover56bits_phys_total.pdf}\n\t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{56}$. Total physical footprint (physical qubits), as a function of the number of processors ($\\log_2$ scale). Note that the qubits are not correlated across processors.}\n \t\\label{fgr:minimal_grover_56_phys_total}\n\n\\subsection{Searching space of size $2^{64}$}\n\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover64bits_cycles.pdf}\n \t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{64}$. Required surface clock cycles per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:minimal_grover_64_cycles}\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover64bits_time.pdf}\n \t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{64}$. 
Required time per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:minimal_grover_64_time}\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover64bits_phys.pdf}\n\t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{64}$. Physical footprint (physical qubits) per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:minimal_grover_64_phys}\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover64bits_phys_total.pdf}\n\t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{64}$. Total physical footprint (physical qubits), as a function of the number of processors ($\\log_2$ scale). Note that the qubits are not correlated across processors.}\n \t\\label{fgr:minimal_grover_64_phys_total}\n\n\\subsection{Searching space of size $2^{128}$}\n\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover128bits_cycles.pdf}\n \t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{128}$. Required surface clock cycles per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:minimal_grover_128_cycles}\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover128bits_time.pdf}\n \t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{128}$. Required time per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:minimal_grover_128_time}\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover128bits_phys.pdf}\n\t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{128}$. Physical footprint (physical qubits) per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:minimal_grover_128_phys}\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover128bits_phys_total.pdf}\n\t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{128}$. Total physical footprint (physical qubits), as a function of the number of processors ($\\log_2$ scale). Note that the qubits are not correlated across processors.}\n \t\\label{fgr:minimal_grover_128_phys_total}\n\n\n\\subsection{Searching space of size $2^{256}$}\n\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover256bits_cycles.pdf}\n \t\\captionof{figure}{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{256}$. Required surface clock cycles per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:minimal_grover_256_cycles}\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover256bits_time.pdf}\n \t\\caption{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{256}$. Required time per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:minimal_grover_256_time}\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover256bits_phys.pdf}\n\t\\caption{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{256}$. 
Physical footprint (physical qubits) per processor, as a function of the number of processors ($\\log_2$ scale).}\n \t\\label{fgr:minimal_grover_256_phys}\n \\includegraphics[width=0.429\\textwidth]{figures/MinimalGrover256bits_phys_total.pdf}\n\t\\caption{Running Grover's algorithm with a trivial oracle, for a searching space of size $2^{256}$. Total physical footprint (physical qubits), as a function of the number of processors ($\\log_2$ scale). Note that the qubits are not correlated across processors.}\n \t\\label{fgr:minimal_grover_256_phys_total}\n\n\n\\section{RSA schemes\\label{sct::rsa}}\nIn the following section we compute the space/time tradeoffs for attacking public-key cryptographic schemes based on factoring large numbers, \nnamely RSA-1024, RSA-2048, RSA-3072, RSA-4096, RSA-7680 and RSA-15360.\nFor each scheme, we plot the space/time tradeoff points then fit it with a third degree polynomial, for $p_g=10^{-3}$ and $p_g=10^{-5}$, respectively.\n\n\\subsection{RSA-1024}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus3/RSA1024.png}\n\\captionof{figure}{RSA-1024 space/time tradeoffs with physical error rate per gate $p_g=10^{-3}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 3.01\\times 10^7$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $3.01\\times 10^{11}$, the corresponding number of logical qubits is 2050, and the total number of surface code cycles is $5.86\\times 10^{13}$. The quantity $R^2$ represents the coefficient of determination (closer to 1, better the fitting). The classical security parameter is approximately 80 bits.}\n\\label{fgr:rsa1024a} \n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus5/RSA1024.png}\n\\captionof{figure}{RSA-1024 space/time tradeoffs with physical error rate per gate $p_g=10^{-5}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 2.14\\times 10^6$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $3.01\\times 10^{11}$, the corresponding number of logical qubits is 2050, and the total number of surface code cycles is $2.93\\times 10^{13}$. The classical security parameter is approximately 80 bits.}\n\\label{fgr:rsa1024b}\n\n\n\\subsection{RSA-2048}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus3/RSA2048.png}\n\\captionof{figure}{RSA-2048 space/time tradeoffs with physical error rate per gate $p_g=10^{-3}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 1.72\\times 10^8$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $2.41\\times 10^{12}$, the corresponding number of logical qubits is 4098, and the total number of surface code cycles is $4.69\\times 10^{14}$. The classical security parameter is approximately 112 bits.}\n\\label{fgr:rsa2048a}\n\n\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus5/RSA2048.png}\n\\captionof{figure}{RSA-2048 space/time tradeoffs with physical error rate per gate $p_g=10^{-5}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 9.78\\times 10^6$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $2.41\\times 10^{12}$, the corresponding number of logical qubits is 4098, and the total number of surface code cycles is $2.35\\times 10^{14}$. 
The classical security parameter is approximately 112 bits.}\n\\label{fgr:rsa2048b}\n\n\n\\subsection{RSA-3072}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus3/RSA3072.png}\n\\captionof{figure}{RSA-3072 space/time tradeoffs with physical error rate per gate $p_g=10^{-3}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 6.41\\times 10^8$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $8.12\\times 10^{12}$, the corresponding number of logical qubits is 6146, and the total number of surface code cycles is $1.58\\times 10^{15}$. The classical security parameter is approximately 128 bits.}\n\\label{fgr:rsa3072a}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus5/RSA3072.png}\n\\captionof{figure}{RSA-3072 space/time tradeoffs with physical error rate per gate $p_g=10^{-5}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 2.55\\times 10^7$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $8.12\\times 10^{12}$, the corresponding number of logical qubits is 6146, and the total number of surface code cycles is $7.91\\times 10^{14}$. The classical security parameter is approximately 128 bits.}\n\\label{fgr:rsa3072b}\n\n\n\\subsection{RSA-4096}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus3/RSA4096.png}\n\\captionof{figure}{RSA-4096 space/time tradeoffs with physical error rate per gate $p_g=10^{-3}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 1.18\\times 10^9$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $1.92\\times 10^{13}$, the corresponding number of logical qubits is 8194, and the total number of surface code cycles is $3.75\\times 10^{15}$. The classical security parameter is approximatively approximately 156 bits.}\n\\label{fgr:rsa4096a}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus5/RSA4096.png}\n\\captionof{figure}{RSA-4096 space/time tradeoffs with physical error rate per gate $p_g=10^{-5}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 5.70\\times 10^7$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $1.92\\times 10^{13}$, the corresponding number of logical qubits is 8194, and the total number of surface code cycles is $1.88\\times 10^{15}$. The classical security parameter is approximatively approximately 156 bits.}\n\\label{fgr:rsa4096b}\n\n\n\\subsection{RSA-7680}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus3/RSA7680.png}\n\\captionof{figure}{RSA-7680 space/time tradeoffs with physical error rate per gate $p_g=10^{-3}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 7.70\\times 10^{10}$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $1.27\\times 10^{14}$, the corresponding number of logical qubits is 15362, and the total number of surface code cycles is $2.64\\times 10^{16}$. The classical security parameter is approximately 192 bits.}\n\\label{fgr:rsa7680a}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus5/RSA7680.png}\n\\captionof{figure}{RSA-7680 space/time tradeoffs with physical error rate per gate $p_g=10^{-5}$. The scale is logarithmic (base 2). 
Approximately $y(16.3987) \\approx 7.41\\times 10^{9}$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $1.27\\times 10^{14}$, the corresponding number of logical qubits is 15362, and the total number of surface code cycles is $2.47\\times 10^{16}$. The classical security parameter is approximately 192 bits.}\n\\label{fgr:rsa7680b}\n\n\n\\subsection{RSA-15360}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus3/RSA15360.png}\n\\captionof{figure}{RSA-15360 space/time tradeoffs with physical error rate per gate $p_g=10^{-3}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 4.85\\times 10^{12}$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $1.01\\times 10^{15}$, the corresponding number of logical qubits is 30722, and the total number of surface code cycles is $2.24\\times 10^{17}$. The classical security parameter is approximately 256 bits.}\n\\label{fgr:rsa15360a}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus5/RSA15360.png}\n\\captionof{figure}{RSA-15360 space/time tradeoffs with physical error rate per gate $p_g=10^{-5}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 7.64\\times 10^{10}$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $1.01\\times 10^{15}$, the corresponding number of logical qubits is 30722, and the total number of surface code cycles is $1.98\\times 10^{17}$. The classical security parameter is approximately 256 bits.}\n\\label{fgr:rsa15360b}\n\n\n\\section{Elliptic curve schemes\\label{sct::ecc}}\nIn the following section we compute the space/time tradeoffs for attacking public-key cryptographic schemes based on solving the discrete logarithm \nproblem in finite groups generated over elliptic curves, namely NIST P-160, NIST P-192, NIST P-224, NIST P-256, NIST P-384 and NIST P-521. For \neach scheme, we plot the space/time tradeoff points then fit it with a third degree polynomial, for $p_g=10^{-3}$ and $p_g=10^{-5}$, respectively. We \nused the logical circuits from~\\cite{1706.06752}.\n\n\\subsection{NIST P-160}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus3/P160.png}\n\\captionof{figure}{NIST P-160 elliptic curve space/time tradeoffs with physical error rate per gate $p_g=10^{-3}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 1.81\\times 10^7$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $2.08\\times 10^{11}$, the corresponding number of logical qubits is 1466, and the total number of surface code cycles is $4.05\\times 10^{13}$. The classical security parameter is 80 bits.}\n\\label{fgr:p160a}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus5/P160.png}\n\\captionof{figure}{NIST P-160 elliptic curve space/time tradeoffs with physical error rate per gate $p_g=10^{-5}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 1.38\\times 10^6$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $2.08\\times 10^{11}$, the corresponding number of logical qubits is 1466, and the total number of surface code cycles is $2.03\\times 10^{13}$. 
The classical security parameter is 80 bits.}\n\\label{fgr:p160b}\n\n\n\\subsection{NIST P-192}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus3/P192.png}\n\\captionof{figure}{NIST P-192 space/time tradeoffs with physical error rate per gate $p_g=10^{-3}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 3.37\\times 10^7$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $3.71\\times 10^{11}$, the corresponding number of logical qubits is 1754, and the total number of surface code cycles is $7.23\\times 10^{13}$. The classical security parameter is 96 bits.}\n\\label{fgr:p192a}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus5/P192.png}\n\\captionof{figure}{NIST P-192 space/time tradeoffs with physical error rate per gate $p_g=10^{-5}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 2.18\\times 10^6$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $3.71\\times 10^{11}$, the corresponding number of logical qubits is 1754, and the total number of surface code cycles is $3.62\\times 10^{13}$. The classical security parameter is 96 bits.}\n\\label{fgr:p192b}\n\n\n\\subsection{NIST P-224}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus3/P224.png}\n\\captionof{figure}{NIST P-224 elliptic curve space/time tradeoffs with physical error rate per gate $p_g=10^{-3}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 4.91\\times 10^7$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $5.90\\times 10^{11}$, the corresponding number of logical qubits is 2042, and the total number of surface code cycles is $1.15\\times 10^{14}$. The classical security parameter is 112 bits.}\n\\label{fgr:p224a}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus5/P224.png}\n\\captionof{figure}{NIST P-224 elliptic curve space/time tradeoffs with physical error rate per gate $p_g=10^{-5}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 3.24\\times 10^6$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $5.90\\times 10^{11}$, the corresponding number of logical qubits is 2042, and the total number of surface code cycles is $5.75\\times 10^{13}$. The classical security parameter is 112 bits.}\n\\label{fgr:p224b}\n\n\n\\subsection{NIST P-256}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus3/P256.png}\n\\captionof{figure}{NIST P-256 elliptic curve space/time tradeoffs with physical error rate per gate $p_g=10^{-3}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 6.77\\times 10^7$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $8.82\\times 10^{11}$, the corresponding number of logical qubits is 2330, and the total number of surface code cycles is $1.72\\times 10^{14}$. The classical security parameter is 128 bits.}\n\\label{fgr:p256a}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus5/P256.png}\n\\captionof{figure}{NIST P-256 elliptic curve space/time tradeoffs with physical error rate per gate $p_g=10^{-5}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 4.64\\times 10^6$ physical qubits are required to break the scheme in one day (24 hours). 
The number of T gates in the circuit is $8.82\\times 10^{11}$, the corresponding number of logical qubits is 2330, and the total number of surface code cycles is $8.60\\times 10^{13}$. The classical security parameter is 128 bits.}\n\\label{fgr:p256b}\n\n\n\\subsection{NIST P-384}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus3/P384.png}\n\\captionof{figure}{NIST P-384 elliptic curve space/time tradeoffs with physical error rate per gate $p_g=10^{-3}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 2.27\\times 10^8$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $3.16\\times 10^{12}$, the corresponding number of logical qubits is 3484, and the total number of surface code cycles is $6.17\\times 10^{14}$. The classical security parameter is 192 bits.}\n\\label{fgr:p384a}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus5/P384.png}\n\\captionof{figure}{NIST P-384 elliptic curve space/time tradeoffs with physical error rate per gate $p_g=10^{-5}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 1.28\\times 10^7$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $3.16\\times 10^{12}$, the corresponding number of logical qubits is 3484, and the total number of surface code cycles is $3.08\\times 10^{14}$. The classical security parameter is 192 bits.}\n\\label{fgr:p384b}\n\n\\subsection{NIST P-521}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus3/P521.png}\n\\captionof{figure}{NIST P-521 elliptic curve space/time tradeoffs with physical error rate per gate $p_g=10^{-3}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 6.06\\times 10^8$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $7.98\\times 10^{12}$, the corresponding number of logical qubits is 4719, and the total number of surface code cycles is $1.56\\times 10^{15}$. The classical security parameter is 256 bits.}\n\\label{fgr:p521a}\n\n\\includegraphics[width=0.475\\textwidth]{figures/10minus5/P521.png}\n\\captionof{figure}{NIST P-521 elliptic curve space/time tradeoffs with physical error rate per gate $p_g=10^{-5}$. The scale is logarithmic (base 2). Approximately $y(16.3987) \\approx 2.30\\times 10^7$ physical qubits are required to break the scheme in one day (24 hours). The number of T gates in the circuit is $7.98\\times 10^{12}$, the corresponding number of logical qubits is 4719, and the total number of surface code cycles is $7.78\\times 10^{14}$. The classical security parameter is 256 bits.}\n\\label{fgr:p521b}\n\n\n\n\n\\section{Summary and conclusions}\\label{sct::conclusion}\nWe analyzed the security of several widely used symmetric ciphers and hash functions against parallelized quantum adversaries. We computed the security parameter, wall-time and physical footprint for each cryptographic primitive. 
Our attack model was based on a brute force searching via a parallelized version of Grover's algorithm, assuming a surface-code fault-tolerant architecture based on defects and braiding techniques.\n\nIt is worth noting that throughout we are assuming that brute-force search where we treat the cryptographic function as a black-box is essentially the optimal attack against SHA and AES, which is currently believed to be the case.\n\nSome symmetric key algorithms are susceptible in a model that permits ``superposition attacks''~\\cite{quantph.1602.05973}. In most realistic instances, these attacks are not practical, however they do shed light on the limitations of certain security proof methods in a quantum context, and remind us that we shouldn't take for granted that non-trivial attacks on symmetric key cryptography may be possible.\nFor example, very recently, there have been several cryptanalysis results~\\cite{1712.06239} and~\\cite{1802.03856} that attempt to reduce breaking some symmetric algorithms to solving a system of non-linear equations. Solving these non-linear equations is then attacked using a modified version of the quantum linear equation solver algorithm~\\cite{PhysRevLett.103.150502}. The results are heavily dependent on the condition number of the non-linear system, which turns to be hard to compute (it is not known for most ciphers and hash functions such as AES or SHA). Provided the condition number is relatively small, then one may get an advantage compared to brute-force Grover search. However at this time it is not clear whether this is indeed the case, and we do not have large-scale quantum computers to experiment with.\n\nThe quantum security parameter (based on our assumptions of using state-of-the-art algorithms and fault-tolerance methods) for symmetric and hash-based cryptographic schemes is summarized in Table~\\ref{tbl1}. For more details about space/time tradeoffs achievable via parallelization of Grover's algorithm please see the corresponding Sec.~\\ref{sct::ciphers}, Sec.~\\ref{sct::hash} and Sec.~\\ref{sct::bitcoin}, respectively.\n\\begin{table}[h!]\n\\begin{tabular}{ll}\n\\hline\nName & qs \\\\\n\\hline\nAES-128 & 106 \\\\\nAES-192 & 139 \\\\\nAES-256 & 172 \\\\\n\\hline\nSHA-256 & 166 \\\\\nSHA3-256\t &167 \\\\\nBitcoin's PoW & 75\\\\\n\\hline\n\\end{tabular}\n\\caption{Quantum security parameter ($qs$) for the AES family of ciphers, SHA family of hash functions, and Bitcoin, assuming a conservative physical error rate per gate $p_g=10^{-4}$.}\n\\label{tbl1}\n\\end{table}\n\nWe also analyzed the security of asymmetric (public-key) cryptography, in particular RSA and ECC, in the light of new improvements in fault-tolerant \nquantum error correction based on surface code lattice surgery techniques. We computed the space/time tradeoff required to attack \nevery scheme, using physical error rates of $10^{-3}$ and $10^{-5}$, respectively. We fitted the data with a third degree polynomial, which resulted in an analytical formula of the number of qubits required to break the \nscheme as a function of time.\n\nThe total number of physical qubits required to break the RSA schemes in 24 hours, together with the required number of $T$ gates, corresponding number of surface code cycles and corresponding classical security parameter is summarized in Table~\\ref{tbl2}. 
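The cubic fits of Eq.~(\ref{eqn1}) reported above are straightforward to reproduce once the space/time tradeoff points have been extracted; a minimal sketch follows, with made-up $(x, y) = (\log_2 \mathrm{time}, \log_2 \mathrm{qubits})$ data standing in for the actual points.
\begin{verbatim}
import numpy as np

# Hypothetical tradeoff points: x = log2(time in s), y = log2(physical qubits).
x = np.array([14.0, 15.0, 16.0, 17.0, 18.0])
y = np.array([33.0, 31.5, 30.2, 29.1, 28.2])

coeffs = np.polyfit(x, y, deg=3)      # [alpha, beta, gamma, delta] of Eq. (1)
fit = np.poly1d(coeffs)

ss_res = np.sum((y - fit(x)) ** 2)    # coefficient of determination R^2
ss_tot = np.sum((y - y.mean()) ** 2)
print(coeffs, 1.0 - ss_res / ss_tot)
\end{verbatim}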
For more details about possible space/time tradeoffs please see the corresponding Section~\\ref{sct::rsa} of the manuscript.\n\\begin{table}[]\n\\begin{tabular}{lllll}\n\\hline\nName & nq & Tc & scc & s \\\\\n\\hline\nRSA-1024 & $3.01 \\times 10^7$ & $3.01 \\times 10^{11}$ & $5.86 \\times 10^{13}$ & 80\\\\\nRSA-2048 & $1.72 \\times 10^8$ & $2.41 \\times 10^{12}$ & $4.69 \\times 10^{14}$ & 112\\\\\nRSA-3072 & $6.41 \\times 10^8$ & $8.12 \\times 10^{12}$ & $1.58 \\times 10^{15}$ & 128\\\\\nRSA-4096 & $1.18 \\times 10^9$ & $1.92 \\times 10^{13}$ & $3.75 \\times 10^{15}$ & 156\\\\\nRSA-7680 & $7.70 \\times 10^{10}$ & $1.27 \\times 10^{14}$ & $2.64 \\times 10^{16}$ & 192\\\\\nRSA-15360 & $4.85 \\times 10^{12}$ & $1.01 \\times 10^{15}$ & $2.24 \\times 10^{17}$ & 256\\\\\n\\hline\n\\end{tabular}\n\\caption{The total physical footprint ($nq$) required to break the RSA schemes in 24 hours, together with the required number of $T$ gates ($Tc$), the corresponding number of surface code cycles ($scc$), and the corresponding classical security parameter ($s$).\nWe assume a very conservative physical error rate per gate $p_g=10^{-3}$, more likely to be achievable by the first generations of fault-tolerant quantum computers.}\n\\label{tbl2}\n\\end{table}\n\nThe total number of physical qubits required to break the ECC schemes in 24 hours, together with the required number of $T$ gates, corresponding number of surface code cycles and corresponding classical security parameter is summarized in in Table~\\ref{tbl3}. For more details about possible space/time tradeoffs please see the corresponding Section~\\ref{sct::ecc} of the manuscript. As observed before in~\\cite{1706.06752}, breaking RSA schemes demands more quantum resources in comparison with elliptic curve-based schemes, for the same level of classical security.\n\\begin{table}[]\n\\begin{tabular}{lllll}\n\\hline\nName & nq & Tc & scc & s \\\\\n\\hline\nP-160 & $1.81 \\times 10^7$ & $2.08 \\times 10^{11}$ & $4.05 \\times 10^{13}$ & 80\\\\\nP-192 & $3.37 \\times 10^7$ & $3.71 \\times 10^{11}$ & $7.23 \\times 10^{13}$ & 96\\\\\nP-224 & $4.91 \\times 10^7$ & $5.90 \\times 10^{11}$ & $1.15 \\times 10^{14}$ & 112\\\\\nP-256 & $6.77 \\times 10^7$ & $8.82 \\times 10^{11}$ & $1.72 \\times 10^{14}$ & 128\\\\\nP-384 & $2.27 \\times 10^8$ & $3.16 \\times 10^{12}$ & $6.17 \\times 10^{14}$ & 192\\\\\nP-521 & $6.06 \\times 10^8$ & $7.92 \\times 10^{12}$ & $1.56 \\times 10^{15}$ & 260\\\\\n\\hline\n\\end{tabular}\n\\caption{The total physical footprint ($nq$) required to break the ECC schemes in 24 hours, together with the required number of $T$ gates ($Tc$), the corresponding number of surface code cycles ($scc$), and the corresponding classical security parameter ($s$). We assume a very conservative physical error rate per gate $p_g=10^{-3}$, more likely to be achievable by the first generations of fault-tolerant quantum computers.}\n\\label{tbl3}\n\\end{table}\n\nRecent developments in the theory of fault-tolerant quantum error correction have great impact on evaluating the effective strength of cryptographic\nschemes against quantum attacks, as the fault-tolerant layer of a quantum computation is the most resource-intensive part of running a quantum \nalgorithm. Therefore, monitoring the advances in the theory of quantum error correction is of crucial importance when estimating the strength (or \nweakness) of a cryptographic scheme against a quantum adversary. 
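As a quick sanity check of the RSA-versus-ECC comparison above, the 24-hour footprints of Tables~\ref{tbl2} and~\ref{tbl3} can be compared directly at matching classical security levels; the snippet below simply copies values from the tables, and the ratio interpretation is ours.
\begin{verbatim}
# Physical qubits needed for a 24-hour attack at p_g = 1e-3
# (Tables 2 and 3), keyed by classical security level in bits.
rsa = {80: 3.01e7, 112: 1.72e8, 128: 6.41e8}   # RSA-1024 / RSA-2048 / RSA-3072
ecc = {80: 1.81e7, 112: 4.91e7, 128: 6.77e7}   # P-160   / P-224    / P-256

for bits in rsa:
    print(bits, rsa[bits] / ecc[bits])         # RSA/ECC footprint ratio
\end{verbatim}
With these numbers the gap grows from roughly 1.7x at 80-bit classical security to roughly 9.5x at 128-bit classical security, and such simple ratios are exactly the kind of quantity that can be re-checked as error-correction techniques improve.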
This work serves as a benchmark against which the impact of future advances can be compared.\n\n\\begin{acknowledgments} \nMost of this work is based on research supported by the Global Risk Institute for its members.\nWe also acknowledge support from NSERC and CIFAR. IQC and the Perimeter Institute are supported in part by the \nGovernment of Canada and the Province of Ontario. Vlad Gheorghiu thanks Austin Fowler for helpful discussions \nand clarifications regarding lattice surgery methods.\n\\end{acknowledgments}\n\n\\bibliographystyle{aipnum4-1}\n\n", "answers": ["172."], "length": 6956, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "2646315b4135b08675c4ab2110dc544058659b9b6aab4752"} {"input": "What is the main methodology used in the research?", "context": "Paper Info\n\nTitle: On the Role of Emergent Communication for Social Learning in Multi-Agent Reinforcement Learning\nPublish Date: Unkown\nAuthor List: Seth Karten, Siva Kailas, Huao Li, Katia Sycara\n\nFigure\n\nFigure1.By using contrastive learning, our method seeks similar representations between the state-message pair and future states while creating dissimilar representations with random states.Thus satisfying the utility objective of the information bottleneck.The depicted agents are blind and cannot see other cars.\nFigure 2.An example of two possible classes, person and horse, from a single observation in the Pascal VOC game.\nFigure 3. Blind Traffic Junction Left: Our method uses compositional complexity and contrastive utility to outperform other baselines in terms of performance and sample complexity.The legend provides the mean ± variance of the best performance.Right: Top: success, contrastive, and complexity losses for our method.Right, Bottom: success, autoencoder loss for ae-comm with supervised pretraining.\nFigure 4. Pascal VOC Game Representing compositional concepts from raw pixel data in images to communicate multiple concepts within a single image.Our method significantly outperforms ae-comm and no-comm due to our framework being able to learn composable, independent concepts.\nFigure 5. Blind Traffic Junction Social shadowing enables significantly lower sample complexity when compared to traditional online MARL.\nBeta ablation: Messages are naturally sparse in bits due to the complexity loss.Redundancy measures the capacity for a bijection between the size of the set of unique tokens and the enumerated observations and intents.Min redundancy is 1.0 (a bijection).Lower is better.\n\nabstract\n\nExplicit communication among humans is key to coordinating and learning. Social learning, which uses cues from experts, can greatly benefit from the usage of explicit communication to align heterogeneous policies, reduce sample complexity, and solve partially observable tasks. Emergent communication, a type of explicit communication, studies the creation of an artificial language to encode a high task-utility message directly from data.\nHowever, in most cases, emergent communication sends insufficiently compressed messages with little or null information, which also may not be understandable to a third-party listener. 
This paper proposes an unsupervised method based on the information bottleneck to capture both referential complexity and task-specific utility to adequately explore sparse social communication scenarios in multi-agent reinforcement learning (MARL).\nWe show that our model is able to i) develop a natural-language-inspired lexicon of messages that is independently composed of a set of emergent concepts, which span the observations and intents with minimal bits, ii) develop communication to align the action policies of heterogeneous agents with dissimilar feature models, and iii) learn a communication policy from watching an expert's action policy, which we term 'social shadowing'.\n\nINTRODUCTION\n\nSocial learning agents analyze cues from direct observation of other agents (novice or expert) in the same environment to learn an action policy from others. However, observing expert actions may not be sufficient to coordinate with other agents. Rather, by learning to communicate, agents can better model the intent of other agents, leading to better coordination.\nIn humans, explicit communication for coordination assumes a common communication substrate to convey abstract concepts and beliefs directly , which may not be available for new partners. To align complex beliefs, heterogeneous agents must learn a message policy that translates from one theory of mind to another to synchronize coordination.\nEspecially when there is complex information to process and share, new agent partners need to learn to communicate to work with other agents. Emergent communication studies the creation of artificial language. Often phrased as a Lewis game, speakers and listeners learn a set of tokens to communicate complex observations .\nHowever, in multi-agent reinforcement learning (MARL), agents suffer from partial observability and non-stationarity (due to unaligned value functions) , which aims to be solved with decentralized learning through communication. In the MARL setup, agents, as speakers and listeners, learn a set of tokens to communicate observations, intentions, coordination, or other experiences which help facilitate solving tasks .\nAgents learn to communicate effectively through a backpropagation signal from their task performance . This has been found useful for applications in human-agent teaming , multirobot navigation , and coordination in complex games such as StarCraft II . Communication quality has been shown to have a strong relationship with task performance , leading to a multitude of work attempting to increase the representational capacity by decreasing the convergence rates .\nYet these methods still create degenerate communication protocols , which are uninterpretable due to joined concepts or null (lack of) information, which causes performance degradation. In this work, we investigate the challenges of learning a arXiv:2302.14276v1 LG] 28 Feb 2023 messaging lexicon to prepare emergent communication for social learning (EC4SL) scenarios.\nWe study the following hypotheses: H1) EC4SL will learn faster through structured concepts in messages leading to higher-quality solutions, H2) EC4SL aligns the policies of expert heterogeneous agents, and H3) EC4SL enables social shadowing, where an agent learns a communication policy while only observing an expert agent's action policy.\nBy learning a communication policy, the agent is encouraged to develop a more structured understanding of intent, leading to better coordination. 
The setting is very realistic among humans and many computer vision and RL frameworks may develop rich feature spaces for a specific solo task, but have not yet interacted with other agents, which may lead to failure without alignment.\nWe enable a compositional emergent communication paradigm, which exhibits clustering and informativeness properties. We show theoretically and through empirical results that compositional language enables independence properties among tokens with respect to referential information. Additionally, when combined with contrastive learning, our method outperforms competing methods that only ground communication on referential information.\nWe show that contrastive learning is an optimal critic for communication, reducing sample complexity for the unsupervised emergent communication objective. In addition to the more human-like format, compositional communication is able to create variable-length messages, meaning that we are not limited to sending insufficiently compressed messages with little information, increasing the quality of each communication.\nIn order to test our hypotheses, we show the utility of our method in multi-agent settings with a focus on teams of agents, high-dimensional pixel data, and expansions to heterogeneous teams of agents of varying skill levels. Social learning requires agents to explore to observe and learn from expert cues.\nWe interpolate between this form of social learning and imitation learning, which learns action policies directly from examples. We introduce a 'social shadowing' learning approach where we use first-person observations, rather than third-person observations, to encourage the novice to learn latently or conceptually how to communicate and develop an understanding of intent for better coordination.\nThe social shadowing episodes are alternated with traditional MARL during training. Contrastive learning, which works best with positive examples, is apt for social shadowing. Originally derived to enable lower complexity emergent lexicons, we find that the contrastive learning objective is apt for agents to develop internal models and relationships of the task through social shadowing.\nThe idea is to enable a shared emergent communication substrate (with minimal bandwidth) to enable future coordi-nation with novel partners. Our contributions are deriving an optimal critic for a communication policy and showing that the information bottleneck helps extend communication to social learning scenarios.\nIn real-world tasks such as autonomous driving or robotics, humans do not necessarily learn from scratch. Rather they explore with conceptually guided information from expert mentors. In particular, having structured emergent messages reduces sample complexity, and contrastive learning can help novice agents learn from experts.\nEmergent communication can also align heterogeneous agents, a social task that has not been previously studied.\n\nMulti-Agent Signaling\n\nImplicit communication conveys information to other agents that is not intentionally communicated . Implicit signaling conveys information to other agents based on one's observable physical position . 
Implicit signaling may be a form of implicit communication such as through social cues or explicit communication such as encoded into the MDP through \"cheap talk\" .\nUnlike implicit signaling, explicit signaling is a form of positive signaling that seeks to directly influence the behavior of other agents in the hopes that the new information will lead to active listening. Multi-agent emergent communication is a type of explicit signaling which deliberately shares information.\nSymbolic communication, a subset of explicit communication, seeks to send a subset of pre-defined messages. However, these symbols must be defined by an expert and do not scale to particularly complex observations and a large number of agents. Emergent communication aims to directly influence other agents with a learned subset of information, which allows for scalability and interpretability by new agents.\n\nEmergent Communication\n\nSeveral methodologies currently exist to increase the informativeness of emergent communication. With discrete and clustered continuous communication, the number of observed distinct communication tokens is far below the number permissible . As an attempt to increase the emergent \"vocabulary\" and decrease the data required to converge to an informative communication \"language\", work has added a bias loss to emit distinct tokens in different situations .\nMore recent work has found that the sample efficiency can be further improved by grounding communication in observation space with a supervised reconstruction loss . Information-maximizing autoencoders aim to maximize the state reconstruction accuracy for each agent. How-ever, grounding communication in observations has been found to easily satisfy these input-based objectives while still requiring a myriad more samples to explore to find a task-specific communication space .\nThus, it is necessary to use task-specific information to communicate informatively. This will enable learned compression for task completion rather than pure compression for input recovery. Other work aims to use the information bottleneck to decrease the entropy of messages . In our work, we use contrastive learning to increase representation similarity with future goals, which we show optimally optimizes the Q-function for messages.\n\nNatural Language Inspiration\n\nThe properties of the tokens in emergent communication directly affect their informative ability. As a baseline, continuous communication tokens can represent maximum information but lack human-interpretable properties. Discrete 1-hot (binary vector) tokens allow for a finite vocabulary, but each token contains the same magnitude of information, with equal orthogonal distance to each other token.\nSimilar to word embeddings in natural language, discrete prototypes are an effort to cluster similar information together from continuous vectors . Building on the continuous word embedding properties, VQ-VIB , an information-theoretic observation grounding based on VQ-VAE properties , uses variational properties to provide word embedding properties for continuous emergent tokens.\nLike discrete prototypes, they exhibit a clustering property based on similar information but are more informative. However, each of these message types determines a single token for communication. Tokens are stringed together to create emergent \"sentences\".\n\nPreliminaries\n\nWe formulate our setup as a decentralized, partially observable Markov Decision Process with communication (Dec-POMDP-Comm). 
Formally, our problem is defined by the tuple, S, A, M, T , R, O, Ω, γ . We define S as the set of states, A i , i ∈ [1, N ] as the set of actions, which includes task-specific actions, and M i as the set of communications for N agents.\nT is the transition between states due to the multi-agent joint action space T : S × A 1 , ..., A N → S. Ω defines the set of observations in our partially observable setting. Partial observability requires communication to complete the tasks successfully. O i : M 1 , ..., M N × Ŝ → Ω maps the communications and local state, Ŝ, to a distribution of observations for each agent.\nR defines the reward function and γ defines the discount factor.\n\nArchitecture\n\nThe policy network is defined by three stages: Observation Encoding, Communication, and Action Decoding. The best observation encoding and action decoding architecture is task-dependent, i.e., using multi-layer perceptrons (MLPs), CNNs , GRUs , or transformer layers are best suited to different inputs.\nThe encoder transforms observation and any sequence or memory information into an encoding H. The on-policy reinforcement learning training uses RE-INFORCE or a decentralized version of MAPPO as specified by our experiments. Our work focuses on the communication stage, which can be divided into three substages: message encoding, message passing (often considered sparse communication), and message decoding.\nWe use the message passing from . For message decoding, we build on a multiheaded attention framework, which allows an agent to learn which messages are most important . Our compositional communication framework defines the message encoding, as described in section 4.\n\nObjective\n\nMutual information, denoted as I(X; Y ), looks to measure the relationship between random variables, which is often measured through Kullback-Leibler divergence , I(X; Y ) = D KL (p(x, y)||p(x) ⊗ p(y)). The message encoding substage can be defined as an information bottleneck problem, which defines a tradeoff between the complexity of information (compression, I(X, X)) and the preserved relevant information (utility, I( X, Y )).\nThe deep variational information bottleneck defines a trade-off between preserving useful information and compression . We assume that our observation and memory/sequence encoder provides an optimal representation H i suitable for sharing relevant observation and intent/coordination information. We hope to recover a representation Y i , which contains the sufficient desired outputs.\nIn our scenario, the information bottleneck is a trade-off between the complexity of information I(H i ; M i ) (representing the encoded information exactly) and representing the relevant information I(M j =i ; Y i ), which is signaled from our contrastive objective. In our setup, the relevant information flows from other agents through communication, signaling a combination of the information bottleneck and a Lewis game.\nWe additionally promote complexity through our compositional independence objective, This is formulated by the following Lagrangian, where the bounds on mutual information Î are defined in equations 1, 2, and 10. Overall, our objective is,\n\nComplexity through Compositional Communication\n\nWe aim to satisfy the complexity objective, I(H i , M i ), through compositional communication. In order to induce complexity in our communication, we want the messages to be as non-random as possible. That is, informative with respect to the input hidden state h. 
In addition, we want each token within the message to share as little information as possible with the preceding tokens.\nThus, each additional token adds only informative content. Each token has a fixed length in bits W . The total sequence is limited by a fixed limit, L l W l ≤ S, of S bits and a total of L tokens. We use a variational message generation setup, which maps the encoded hidden state h to a message m; that is, we are modeling the posterior, π i m (m l |h).\nWe limit the vocabulary size to K tokens, e j ∈ R D , j ∈ [1, K] ⊂ N, where each token has dimensionality D and l ∈ [1, L] ⊂ N. Each token m l is sampled from a categorical posterior distribution, 0 otherwise such that the message m l is mapped to the nearest neighbor e j . A set of these tokens makes a message m.\nTo satisfy the complexity objective, we want to use m i to well-represent h i and consist of independently informative m i l .\n\nIndependent Information\n\nWe derive an upper bound for the interaction information between all tokens. Proposition 4.1. For the interaction information between all tokens, the following upper bound holds: The proof is in Appendix A.1. Since we want the mutual information to be minimized in our objective, we minimize,\n\nInput-Oriented Information\n\nIn order to induce complexity in the compositional messages, we additionally want to minimize the mutual information I(H; M ) between the composed message m and the encoded information h. We derive an upper bound on the mutual information that we use as a Lagrangian term to minimize. Proposition 4.2. For the mutual information between the composed message and encoded information, the following upper bound holds:\nThe proof is in Appendix A.1. Thus, we have our Lagrangian term, Conditioning on the input or observation data is a decentralized training objective.\n\nSequence Length\n\nCompositional communication necessitates an adaptive limit on the total length of the sequence. Corollary 4.3. Repeat tokens, w, are redundant and can be removed. Suppose one predicts two arbitrary tokens, w k and w l . Given equation 1, it follows that there is low or near-zero mutual information between w k and w l .\nA trivial issue is that the message generator will predict every available token as to follow the unique token objective. Since the tokens are imbued with input-oriented information (equation 2), the predicted tokens will be based on relevant referential details. Thus, it follows that tokens containing irrelevant information will not be chosen.\nA nice optimization objective that follows from corollary 4.3 is that one can use self-supervised learning with an end-ofsequence (EOS) token to limit the variable total length of compositional message sequences. (3) Algorithm 1 Compositional Message Gen.(h t ) m i ∼ N ( ĥ; µ, σ) 9: end for 10: return m\n\nMessage Generation Architecture\n\nNow, we can define the pipeline for message generation. The idea is to create an architecture that can generate features to enable independent message tokens. We expand each compressed token into the space of the hidden state h (1-layer linear expansion) since each token has a natural embedding in R |h| .\nThen, we perform attention using a softmin to help minimize similarity with previous tokens and sample the new token from a variational distribution. See algorithm 1 for complete details. 
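Since the paper does not include reference code, the following PyTorch-style sketch shows one way Algorithm 1 could be realized. It is a minimal illustration rather than the authors' implementation: the module and variable names, the subtractive softmin-attention step, and the diagonal-Gaussian head are our assumptions, and the nearest-neighbor quantization would need a straight-through estimator (as in VQ-VAE) to train end to end.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositionalMessageGenerator(nn.Module):
    # Hypothetical rendering of Algorithm 1: emit up to L tokens, each expanded back into
    # the hidden-state space, attended with a softmin against what was already said, and
    # quantized to the nearest vocabulary embedding e_j.
    def __init__(self, hidden_dim, token_dim, vocab_size, max_len):
        super().__init__()
        self.max_len = max_len
        self.vocab = nn.Parameter(torch.randn(vocab_size, token_dim))   # e_1 ... e_K
        self.expand = nn.Linear(token_dim, hidden_dim)                   # token -> hidden space
        self.mu = nn.Linear(hidden_dim, token_dim)
        self.log_sigma = nn.Linear(hidden_dim, token_dim)

    def forward(self, h):                        # h: (batch, hidden_dim)
        tokens, context = [], h
        for _ in range(self.max_len):            # an EOS token (Corollary 4.3) could cut this short
            if tokens:
                prev = torch.stack([self.expand(t) for t in tokens], dim=1)    # (batch, l, hidden)
                attn = F.softmin(torch.einsum('bh,blh->bl', h, prev), dim=-1)  # favor unseen content
                context = h - torch.einsum('bl,blh->bh', attn, prev)
            mu, sigma = self.mu(context), self.log_sigma(context).exp()
            m_tilde = mu + sigma * torch.randn_like(sigma)                     # sample from N(mu, sigma)
            dists = ((m_tilde.unsqueeze(1) - self.vocab.unsqueeze(0)) ** 2).sum(-1)
            tokens.append(self.vocab[dists.argmin(dim=-1)])                    # nearest-neighbor token
        return torch.stack(tokens, dim=1)        # (batch, L, token_dim)

The nearest-neighbor step is what produces the discrete, clustered vocabulary described above; the per-token loop is only needed during training.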
During execution, we can generate messages directly due to equation 1, resolving any computation time lost from sequential compositional message generation.\n\nUtility through Contrastive Learning\n\nFirst, note that our Markov Network is as follows: H j → M j → Y i ← H i . Continue to denote i as the agent identification and j as the agent ID such that j = i. We aim to satisfy the utility objective of the information bottleneck, I(M j ; Y i ), through contrastive learning as shown in figure 1. Proposition 5.1.\nUtility mutual information is lower bounded by the contrastive NCE-binary objective, The proof is in Appendix A.1. This result shows a need for gradient information to flow backward across agents along communication edge connections.\n\nExperiments and Results\n\nWe condition on inputs, especially rich information (such as pixel data), and task-specific information. When evaluating an artificial language in MARL, we are interested in referential tasks, in which communication is required to complete the task. With regard to intent-grounded communication, we study ordinal tasks, which require coordination information between agents to complete successfully.\nThus, we consider tasks with a team of agents to foster messaging that communicates coordination information that also includes their observations. To test H1, structuring emergent messages enables lower complexity, we test our methodology and analyze the input-oriented information and utility capabilities.\nNext, we analyze the ability of heterogeneous agents to understand differing communication policies (H2)). Finally, we consider the effect of social shadowing (H3), in which agents solely learn a communication policy from an expert agent's action policy. We additionally analyze the role of offline reinforcement learning for emergent communication in combination with online reinforcement learning to further learn emergent communication alongside an action policy.\nWe evaluate each scenario over 10 seeds.\n\nEnvironments\n\nBlind Traffic Junction We consider a benchmark that requires both referential and ordinal capabilities within a team of agents. The blind traffic junction environment requires multiple agents to navigate a junction without any observation of other agents. Rather, they only observe their own state location.\nTen agents must coordinate to traverse through the lanes without colliding into agents within their lane or in the junction. Our training uses REINFORCE . Pascal VOC Game We further evaluate the complexity of compositional communication with a Pascal VOC . This is a two-agent referential game similar to the Cifar game but requires the prediction of multiple classes.\nDuring each episode, each agent observes a random image from the Pascal VOC dataset containing exactly two unique labels. Each agent must encode information given only the raw pixels from the original image such that the other agent can recognize the two class labels in the original image. An agent receives a reward of 0.25 per correctly chosen class label and will receive a total reward of 1 if both agents guess all labels correctly.\nSee figure 2. Our training uses heterogeneous agents trained with PPO (modified from MAPPO repository). For simplicity of setup, we consider images with exactly two unique labels from a closed subset of size five labels of the original set of labels from the Pascal VOC data. 
Furthermore, these images must be of size 375 × 500 pixels.\nThus, the resultant dataset comprised 534 unique images from the Pascal VOC dataset.\n\nBaselines\n\nTo evaluate our methodology, we compare our method to the following baselines: (1) no-comm, where agents do not communicate; (2) rl-comm, which uses a baseline communication method learned solely through policy loss ; (3) ae-comm, which uses an autoencoder to ground communication in input observations ; (4) VQ-VIB, which uses a variational autoencoder to ground discrete communication in input observations and a mutual information objective to ensure low entropy communication .\nWe provide an ablation of the loss parameter β in table 1 in the blind traffic junction scenario. When β = 0, we use our compositional message paradigm without our derived loss terms. We find that higher complexity and independence losses increase sample complexity. When β = 1, the model was unable to converge.\nHowever, when there is no regularization loss, the model performs worse (with no guarantees about referential representation). We attribute this to the fact that our independence criteria learns a stronger causal relationship. There are fewer spurious features that may cause an agent to take an incorrect action.\nIn order to understand the effect of the independent concept representation, we analyze the emergent language's capacity for redundancy. A message token m l is redundant if there exists another token m k that represents the same information. With our methodology, the emergent 'language' converges to the exact number of observations and intents required to solve the task.\nWith a soft discrete threshold, the independent information loss naturally converges to a discrete number of tokens in the vocabulary. Our β ablation in table 1 yields a bijection between each token in the vocabulary and the possible emergent concepts, i.e., the enumerated observations and intents. Thus for β = 0.1, there is no redundancy.\nSparse Communication In corollary 4.3, we assume that there is no mutual information between tokens. In practice, the loss may only be near-zero. Our empirical results yield independence loss around 1e − 4. In table 1, the size of the messages is automatically compressed to the smallest size to represent the information.\nDespite a trivially small amount of mutual information between tokens, our compositional method is able to reduce the message size in bits by 2.3x using our derived regularization, for a total of an 8x reduction in message size over non-compositional methods such as ae-comm. Since the base unit for the token is a 32-bit float, we note that each token in the message may be further compressed.\nWe observe that each token uses three significant digits, which may further compress tokens to 10 bits each for a total message length of 20 bits.\n\nCommunication Utility Results\n\nDue to coordination in MARL, grounding communication in referential features is not enough. Finding the communication utility requires grounding messages in ordinal information. Overall, figure shows that our compositional, contrastive method outperforms all methods focused on solely input-oriented communication grounding.\nIn the blind traffic junction, our method yields a higher average task success rate and is able to achieve it with a lower sample complexity. 
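For concreteness, the utility term driving these runs, the NCE-binary bound of Proposition 5.1, can be written in a few lines. This is a hedged sketch rather than the released training code: the encoder callables, batching, and names are assumptions, and only the loss arithmetic follows the proposition.

import torch.nn.functional as F

def nce_binary_loss(encode_sm, encode_state, state, message, future_pos, future_neg):
    # Critic f(s, m, s_f) = <enc(s, m), enc(s_f)>; maximize
    # log sigmoid(f(s, m, s_f+)) + log(1 - sigmoid(f(s, m, s_f-))).
    y = encode_sm(state, message)           # (batch, dim) joint state-message encoding
    pos = encode_state(future_pos)          # future state from the same rollout (R+)
    neg = encode_state(future_neg)          # future state from a random rollout (R-)
    f_pos = (y * pos).sum(dim=-1)
    f_neg = (y * neg).sum(dim=-1)
    # log(1 - sigmoid(x)) == logsigmoid(-x); return the negative bound so it can be minimized
    return -(F.logsigmoid(f_pos) + F.logsigmoid(-f_neg)).mean()

Gradients from this term flow backward through the received message to the sending agent, which is the cross-agent signal discussed above.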
Training with the contrastive update tends to spike to high success but not converge, often many episodes before convergence, which leaves area for training improvement.\nThat is, the contrastive update begins to find aligned latent spaces early in training, but it cannot adapt the methodology quickly enough to converge. The exploratory randomness of most of the early online data prevents exploitation of the high utility f + examples. This leaves further room for improvement for an adaptive contrastive loss term.\nRegularization loss convergence After convergence to high task performance, the autoencoder loss increases in order to represent the coordination information. This follows directly from the information bottleneck, where there exists a tradeoff between utility and complexity. However, communication, especially referential communication, should have an overlap between utility and complexity.\nThus, we should seek to make the complexity loss more convex. Our compositional communication complexity loss does not converge before task performance convergence. While the complexity loss tends to spike in the exploratory phase, the normalized value is very small. Interestingly, the method eventually converges as the complexity loss converges below a normal- ized 0.3.\nAdditionally, the contrastive loss tends to decrease monotonically and converges after the task performance converges, showing a very smooth decrease. The contrastive f − loss decreases during training, which may account for success spikes prior to convergence. The method is able to converge after only a moderate decrease in the f + loss.\nThis implies empirical evidence that the contrastive loss is an optimal critic for messaging. See figure 3.\n\nHeterogeneous Alignment Through Communication\n\nIn order to test the heterogeneous alignment ability of our methodology to learn higher-order concepts from highdimensional data, we analyze the performance on the Pascal VOC game. We compare our methodology against ae-comm to show that concepts should consist of independent information directly from task signal rather than compression to reconstruct inputs.\nThat is, we show an empirical result on pixel data to verify the premise of the information bottleneck. Our methodology significantly outperforms the observation-grounded ae-comm baseline, as demonstrated by figure 4. The ae-comm methodology, despite using autoencoders to learn observation-grounded communication, performs only slightly better than no-comm.\nOn the other hand, our methodology is able to outperform both baselines significantly. It is important to note that based on figure 4, our methodology is able to guess more than two of the four labels correctly across the two agents involved, while the baseline methodologies struggle to guess exactly two of thew four labels consistently.\nThis can be attributed to our framework being able to learn compositional concepts that are much more easily discriminated due to mutual independence.\n\nSocial Shadowing\n\nCritics of emergent communication may point to the increased sample complexity due to the dual communication and action policy learning. In the social shadowing scenario, heterogeneous agents can learn to generate a communication policy without learning the action policy of the watched expert agents. 
To enable social shadowing, the agent will alternate between a batch of traditional MARL (no expert) and (1st-person) shadowing an expert agent performing the task in its trajectory.\nThe agent only uses the contrastive objective to update its communication policy during shadowing. In figure , the agent that performs social shadowing is able to learn the action policy with almost half the sample complexity required by the online reinforcement learning agent. Our results show that the structured latent space of the emergent communication learns socially benevolent coordination.\nThis tests our hypothesis that by learning communication to understand the actions of other agents, one can enable lower sample complexity coordination. Thus, it mitigates the issues of solely observing actions.\n\nDiscussion\n\nBy using our framework to better understand the intent of others, agents can learn to communicate to align policies and coordinate. Any referential-based setup can be performed with a supervised loss, as indicated by the instant satisfaction of referential objectives. Even in the Pascal VOC game, which appears to be a purely referential objective, our results show that intelligent compression is not the only objective of referential communication.\nThe emergent communication paradigm must enable an easy-to-discriminate space for the game. In multi-agent settings, the harder challenge is to enable coordination through communication. Using contrastive communication as an optimal critic aims to satisfy this, and has shown solid improvements. Since contrastive learning benefits from good examples, this method is even more powerful when there is access to examples from expert agents.\nIn this setting, the communication may be bootstrapped, since our optimal critic has examples with strong signals from the 'social shadowing' episodes. Additionally, we show that the minimization of our independence objective enables tokens that contain minimal overlapping information with other tokens.\nPreventing trivial communication paradigms enables higher performance. Each of these objectives is complementary, so they are not trivially minimized during training, which is a substantial advantage over comparative baselines. Unlike prior work, this enables the benefits of training with reinforcement learning in multi-agent settings.\nIn addition to lower sample complexity, the mutual information regularization yields additional benefits, such as small messages, which enables the compression aspect of sparse communication. From a qualitative point of view, the independent information also yields discrete emergent concepts, which can be further made human-interpretable by a post-hoc analysis .\nThis is a step towards white-box machine learning in multi-agent settings. The interpretability of this learned white-box method could be useful in human-agent teaming as indicated by prior work . The work here will enable further results in decision-making from high-dimensional data with emergent concepts.\nThe social scenarios described are a step towards enabling a zero-shot communication policy. This work will serve as future inspiration for using emergent communication to enable ad-hoc teaming with both agents and humans.\n\nAppendix\n\nA.1. Proofs Proposition 4.1 For the interaction information between all tokens, the following upper bound holds: Proof. 
Starting with the independent information objective, we want to minimize the interaction information, which defines the conditional mutual information between each token and, Let π i m (m l |h) be a variational approximation of p(m l |h), which is defined by our message encoder network.\nGiven that each token should provide unique information, we assume independence between m l . Thus, it follows that our compositional message is a vector, m = [m 1 , . . . , m L ], and is jointly Gaussian. Moreover, we can define q( m|h) as a variational approximation to p(m|h) = p(m 1 ; . . . , m L |h).\nWe can model q with a network layer and define its loss as || m − m|| 2 . Thus, transforming equation 4 into variational form, we have, it follows that q( m|h) log q( m|h)d m ≥ q( m|h) log Thus, we can bound our interaction information, Proposition 4.2 For the mutual information between the composed message and encoded information, the following upper bound holds:\nProof. By definition of mutual information between the composed messages M and the encoded observations H, we have, Substituting q( m|h) for p( m|h), the same KL Divergence identity, and defining a Gaussian approximation z( m) of the marginal distribution p( m), it follows that, In expectation of equation 1, we have,\nThis implies that, for m = [m 1 , . . . , m L ], there is probabilistic independence between m j , m k , j = k. Thus, expanding, it follows that, where z(m l ) is a standard Gaussian. Proposition 5.1. Utility mutual information is lower bounded by the contrastive NCE-binary objective, Proof. We suppress the reliance on h since this is directly passed through.\nBy definition of mutual information, we have, Our network model learns π R + (y|m) from rolled-out trajectories, R + , using our policy. The prior of our network state, π R − (y), can be modeled from rolling out a random trajectory, R−. Unfortunately, it is intractable to model π R + (y|m) and π R − (y) directly during iterative learning, but we can sample y + ∼ π R + (y|m) and y − ∼ π R − (y) directly from our network during training.\nIt has been shown that log p(y|m) provides a bound on mutual information , with the expectation over l p(m l , y l ). However, we need a tractable understanding of the information Y . In the information bottleneck, Y represents the desired outcome. In our setup, y is coordination information that helps create the desired output, such as any action a − .\nThis implies, y =⇒ a − . Since the transition is known, it follows that a − =⇒ s − f , a random future state. Thus, we have, π This is similar to the proof for lemma A.5, but requires assumptions on messages m from the emergent language. We note that when m is random, the case defaults to lemma A.5. Thus, we assume we have at least input-oriented information in m given sufficiently satisfying equation 2. Given a sufficient emergent language, it follows that y =⇒ a + , where a + is an intention action based on m.\nSimilarly, since the transition is known, a + =⇒ s + f , a desired goal state along the trajectory. Thus, we have, π R + (y|m) = p(s = s + f |y, m). Recall the following (as shown in ), which we have adapted to our communication objective, Proposition A.3 (rewards → probabilities). The Q-function for the goal-conditioned reward function r g (s t , m t ) = (1 − γ)p(s = s g |y t ) is equivalent to the probability of state s g under the discounted state occupancy measure:\nand Lemma A.4. 
The critic function that optimizes equation 8 is a Q-function for the goal-conditioned reward function up to a multiplicative constant 1 The critic function f (s, m, s f ) = y enc(s f ) represents the similarity between the encoding y = enc(s, m) and the encoding of the future rollout s f .\nGiven lemmas A.5 A.6 A.8 and proposition A.7, it follows that equation 8 is the NCE-binary (InfoMAX ) objective, Î(M j , Y i ) = log σ(f (s, m, s + f )) + log 1 − σ(f (s, m, s − f )) which lower bounds the mutual information, I(M j , Y i ) ≥ Î(M j , Y i ). The critic function is unbounded, so we constrain it to [0, 1] with the sigmoid function, σ( * ).\nWe suppress the reliance on h since this is directly passed through. By definition of mutual information, we have, Our network model learns π R + (y|m) from rolled-out trajectories, R + , using our policy. The prior of our network state, π R − (y), can be modeled from rolling out a random trajectory, R−.\nUnfortunately, it is intractable to model π R + (y|m) and π R − (y) directly during iterative learning, but we can sample y + ∼ π R + (y|m) and y − ∼ π R − (y) directly from our network during training. It has been shown that log p(y|m) provides a bound on mutual information , with the expectation over l p(m l , y l ).\nHowever, we need a tractable understanding of the information Y . Lemma A.5. π R − (y) = p(s = s − f |y). In the information bottleneck, Y represents the desired outcome. In our setup, y is coordination information that helps create the desired output, such as any action a − . This implies, y =⇒ a − . Since the transition is known, it follows that a − =⇒ s − f , a random future state.\nThus, we have, π R − (y) = p(s = s − f |y). Lemma A.6. π R + (y|m) = p(s = s + f |y, m). This is similar to the proof for lemma A.5, but requires assumptions on messages m from the emergent language. We note that when m is random, the case defaults to lemma A.5. Thus, we assume we have at least input-oriented information in m given sufficiently satisfying equation 2. Given a sufficient emergent language, it follows that y =⇒ a + , where a + is an intention action based on m.\nSimilarly, since the transition is known, a + =⇒ s + f , a desired goal state along the trajectory. Thus, we have, π R + (y|m) = p(s = s + f |y, m). Recall the following (as shown in ), which we have adapted to our communication objective, Proposition A.7 (rewards → probabilities). The Q-function for the goal-conditioned reward function r g (s t , m t ) = (1 − γ)p(s = s g |y t ) is equivalent to the probability of state s g under the discounted state occupancy measure:\nand Lemma A.8. The critic function that optimizes equation 8 is a Q-function for the goal-conditioned reward function up to a multiplicative constant 1 p(s f ) : exp(f * (s, m, s f ) = 1 p(s f ) Q π s f (s, m). The critic function f (s, m, s f ) = y enc(s f ) represents the similarity between the encoding y = enc(s, m) and the encoding of the future rollout s f .\nGiven lemmas A.5 A.6 A.8 and proposition A.7, it follows that equation 8 is the NCE-binary (InfoMAX ) objective, which lower bounds the mutual information, I(M j , Y i ) ≥ Î(M j , Y i ). 
The critic function is unbounded, so we constrain it to [0, 1] with the sigmoid function, σ( * ).", "answers": ["An unsupervised method based on the information bottleneck and contrastive learning."], "length": 6235, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "120cb783c796fbedbc76f04cf9be3318e54a63cd642c4401"} {"input": "What is the problem encountered when building the fuselage sides?", "context": "Probably one of the most frustrating things about building experimental aircraft, especially when starting with a minimum of pre-fabricated parts, is to start building and ending up with an unexpected result. Every builder starts a new project by wanting it to go \"perfectly.\" So when things aren't going well, especially at the beginning, the frustration can lead to an unfinished airplane.\nThis is the first article in a series dedicated to helping builders of the Rand Robinson KR series planes build a straight and true fuselage -- the first part of the construction process. Borrowing from modern boatbuliding techniques, focus will be on the KR-2S, but the principles apply to the entire lineup of KR-1 & KR-2 series planes.\nWhile building the KR-2(s) a common surprise is encountered by builders when the completed fuselage sides are laid into position to form the fuselage box section. With many hours spent building the sides flat, finding the once straight longerons that now bow up from the building surface, form a most dissatisfying \"banana\" shape. Especially when using the preformed fiberglass parts, this curve in the top longeron is not acceptable. The builder is left wondering what went wrong and no amount of clamping or brute force forming will solve the problem to any degree of satisfaction. The problem is not the builder's fault. The solution starts by understanding the three dimensional relationship of the assembled parts being built.\nFirst understand that the plans show the finished form of the plane. They show the \"projected\" form as you would expect to see it if viewing an actual plane from the top, ends and from the side. Since the sides are sloped (flared) outward, looking from the side, the distances given by measuring the profile drawing are \"foreshortened\" and don't give the proper shape for building the fuselage with a flat top longeron. What needs to be done is to \"develop\" the \"true\" distances and shape of the flat panel so that when it is curved into position, the longerons lay flat.\nSecond, understand that the dimensions called for in the plans put a twist in the sides that tends to work the panel in two directions of curvature. This twist makes the panel \"undevelopable\" meaning that that shape cannot be unrolled into an equivalent flat shape. This is important when laying out the side and bottom panels onto flat plywood. To illustrate this, try forming a piece of paper around a soda can. The paper can be formed flat around the can either straight or at a diagonal to it's length. It has only one direction of curvature and is by definition \"developable\". Now try to form the same piece of paper around a baseball. It won't lie flat on the surface without some deformation (folding, wrinkling or tearing) of the paper. The ball has curvature in more that one direction and is a \"compounded\" shape. Paper (or plywood) can only be readily formed in developable shapes as opposed to aluminum or other metal which can accept in plane deformation. 
A developable surface is needed to lay out a curved surface when the materials used can't be deformed with any degree of in-plane strain.\nInitially, the fuselage sides are laid out flat with reference to the top longeron measured to a straight chalk line. The bowing problem starts when the side panels are bent and sloped to form the fuselage box section. If the sides were not sloped (tumbled home), the section formed would be cylindrical and the longerons would lie flat. Since the sides are tumbled home, the section formed is now conical. When a conical shape is cut with a plane (building surface) not perpendicular to it's axis, the shape formed is elliptical -- exactly what happens with the top longeron. When it's built flat, bent to form a cylindrical section, and sloped to form a conical section, it takes on an elliptical shape firewall to tailstock.\nThis method borrows heavily from proven techniques used in the marine trades. It should be stressed at this point that although the layout procedure is not complicated, it is important to take your time. If the layout is not going well initially, start over! Better to erase layout errors now than to have them built it and cause surprises later.\nLayout to ensure a fair and true fuselage starts by drawing a reference line (baseline) on the building surface. Refer to figures 2 & 3 and use a wire guide to draw a very straight baseline. About 500 lbs. Of tension should be adequate. One could use a chalk line, but we're talking airplanes here, not house framing.\nThe main layout difference is that the baseline isn't used as a reference for the top longeron. The baseline references the mid point of the firewall for the developed (and true dimensioned) side panel. Although the baseline will still be the reference, the top and bottom longerons will be laid separately.\nLayout differences don't end there. Each of the stations (vertical members) will be laid out with a calculated separation so that when the panels are formed into position, they land on the spacing called for in the plans. Another major difference is that the bottom & side panels are applied after forming the fuselage box section. This is mainly to obtain the ability to \"fair\" the side and bottom surfaces and insure a straight and true shape.\nRefer to figure 1 for the layout of the new developed side panel. The firewall (station a) is layed out perpendicular to the baseline. Longitudinal (station) measurements are given along the length of the baseline from the firewall. Vertical dimensions are given to reference the angle and breadths of the station at the baseline.\nNotice that the top longeron is bowed outward and that the stations are spaced slightly greater than called out in the plans. When the panels are formed into the box frame section ,they will work into the dimensions specified in the plans.\nStrike a centerline, longer than is needed on the building surface using a wire guide. Draw off the firewall line perpendicular to the centerline at one end.\nUsing the distances listed in the balloons, mark them off on the centerline. Distances are measured to the nearest sixteenth of an inch. Take time to mark them off carefully. Don't mark off the distances in a cumulative fashion. Use the firewall as a common reference.\nUsing the angles listed at each station, mark off a station line longer than is needed. The angles are measured to the nearest hundredth of a degree. 
Take time to mark them off carefully.\nAt each station, start by marking off each short (bottom longeron) line distance from the centerline. Use your set of trammels or beam compass for doing this. Mark the intersection of the short line with the station line.\nAt each station, mark off each long (top longeron) line distance from the intersection of the short line distance and the station line. Again the trammels or beam compass is best for completing this step. Mark the intersection of the long line distance with the station line.\nUsing the longeron as a batten, trace out the inside and outside curves of the longeron. After the batten is secure, in between each station, fasten a keeper block inside and outside to preserve the shape of the longeron taking care to avoid potential future interference with the diagonal members to be installed later. The fairing blocks can be removed or left in place if they won't interfere with building. The vertical station members and their diagonals can now be measured and positioned. Remember to refer to the plans for the material thickness direction.\nAfter vertical and diagonal members are cut and fitted, take time to draw their outlines on the building surface to cut down on time and confusion when laying out the opposite side.\nFinishing the side panel is accomplished in a manner similar to that called for in the handbook with the exception that the side and bottom skin panels will be attached later.\nThe next article in the series will discuss jigging and building techniques to ensure alignment and straightness of the flat built side panels. Also covered will be building a \"strongback\" jig to assure alignment of the side panels when they are formed into their final shape.\nPart 3 in the series will cover assembly of the side panels using the jigs. Some joint details will be discussed that will ensure a stronger and more fair fuselage assembly. Also covered will be the layout & attachment of the side and bottom ply skins.\nU.S. Mail: Densmore Associates, inc.\nANSI \"D\" size, computer generated plots of all the layout drawings in this series are available from the author for $30 plus postage & handling. Full (true size) scale plots may be made available depending on demand.\n\"Scarfing\" is the practice of splicing plywood so that short pieces of plywood can be used to span long distances. On the KR, it is required on both the fuselage skins and spar webs. The angle of the splice should be 10 to 12 degrees to maintain strength across the joint. Also, joints should coincide with structural members, such as spar webs or fuselage truss members.\nThis scarfer is made by mating a regular plunge router (this one costs about $50) to a table saw. Obviously, you really only need a table saw to cut the chamfer, but it does make a nice heavy table for scarfing. You could just as easily use a large work table as the base.First, set the table saw for a 5.5 degree cut (for a 1:12 joint, or 6.5 degree cut for a 10:1 joint), and run a 1 x 6 through on edge to chamfer a corner on the board. Then drill the board for three router mounting holes (two are countersunk) and connect the assembly to the table saw with two 1/4 inch bolts. Use a long (2-3 inch) straight cutting bit to do the cutting. Adjust the bit so it doesn't interfere with your table top, and go to town. Keep pressure on the plywood to ensure contact with the table while you're scarfing. 
Make sure you feed your material from the same end as you would if you were sawing, or the router will take your plywood away from you and put a big dent in your garage door.\nIn the late 60's Ken Rand and Stuart Robinson were working as flight system engineers for Douglas Avionics. Ken was working as an electrical engineer, having previously worked for Sperry as an autopilots project engineer, while Stu's degree was in aeronautical engineering from Northrop University. They were two of the guys at the end of the DC-8, 9, and 10 assembly lines responsible for correcting some of the nits and picks in various systems before delivery to the customer.\nThey both wanted to build a fast, inexpensive airplane which was also economical to maintain. Several designs were considered, and plans were bought first for the Jeanie's Teenie and then the Taylor Monoplane. The Monoplane was more to their liking, but would require some modification to fit their needs. A cooperative redesign effort ensued, with virtually no dimensions left untouched. Only the basic fuselage structure, airfoil, and powerplant were retained. The tail shape was Stu's, and came directly from the big DC-8s parked on the ramp outside his office window. The landing gear was designed by Ken, after seeing the gear on a Dewey Bird at Santa Paula airport.\nKen was killed in his KR2 a short time later while flying over Cajon Pass in what was apparently a bad weather / low fuel accident. Ken's wife Jeanette became owner of RR overnight, and stepped up to keep the plans and parts coming. Many of the engineering needs are handled by Bill Marcy of Denver, who's been helping out since early '79.\nTo date, almost 6000 KR1, 9200 KR2, and 760 KR2S plan sets have been sold. 1200 KR2s are estimated to be flying, with 5 KR2Ss now in the air. Much of the development work on KRs is now done by the builders themselves. KR builders tend to be innovative, which leads to some interesting modifications. Some of the mods that work eventually creep into the plans. The KR2S is a case in point. Many builders who'd heard of the pitch sensitivity and tight cabin of the KR2 began to build an enlarged version, with the length determined by the most commonly available longeron material. The result is a KR2 that is stretched 2\" between firewall and main spar, and 14\" behind the main spar. Higher gross weights dictated more wing area, with the new standard becoming the Diehl wing skin. Those who plan to carry passengers commonly stretch the cabin width a few inches, although 1.5 inches is the limit if you still want to use RR's premolded parts.\nMike Stearns addresses the KR Forum crowd.\nThis year's KR Forum featured guest speakers Mike Stearns, Steve Trentman, and Bill Marcy. Mike Stearns spoke on several topics, including the many sources for KR and homebuilding information available on the Internet. He also mentioned KRNet, the list server devoted entirely to KR aircraft, as well as several notable World Wide Web home pages. He also brought a sample of the new Rand Robinson wing skins with him, and discussed their high-temperature core prepreg construction. His KR2S will receive the first set, which is currently being installed at Hinson Composites.\nSteve Trentman spoke on his turbine installation. It uses a turbine engine which saw duty as an A7 attack jet starter engine. Total weight is about 85 pounds, while putting out around 90 horsepower. There is a small stockpile of these engines available from government surplus sources. 
This engine can only be throttled back to 52% power, which leads to some pretty interesting landings. One inflight failure has been logged so far, with very little damage to the aircraft. More on this exciting development in next month's issue of KROnline.\nLes Palmer's KR2 N202LP won Best KR2, Best Engine Installation, and People's Choice awards at the 1995 KR Gathering at Columbia, TN. After researching the KR series, and reading Neil Bingham's \"A Critical Analysis of the KR2\" (Jan 88 Sport Aviation), Les decided to build his as a single seater, stretched 24\" in the tail, while maintaining a stock width firewall. His fuselage is made from Douglas fir, which weighs in at 4 lbs heavier than if constructed from spruce. It is skinned with 1/8\" birch plywood. Spars are covered with plywood on both fore and aft sides, a la KR2S. Diehl wing skins provide the lift. Horizontal stabilizer and elevator were stretched 7\" longer on each side, while the vertical stabilizer and rudder were stretched 8\" taller. The fuselage-to-cowling junction was made more graceful by adding 1.5 inches to the height of the firewall end of the fuselage sides.\nLes's canopy is a Dragonfly, using a four-linkage system to swing forward when opening. The canopy frame fits snugly into a recess in the forward deck, providing an excellent wind and water seal. The fiberglass work is exemplary.\nSeating is luxurious for one.\nThe cowling is also a work of art, and uses NACA ducts for efficiency. Female molds were made for all the fiberglass parts on Les's plane, so he could probably be persuaded to make more, if demand dictates. Les also machines a multitude of KR aluminum and steel parts which he now offers for sale.\nThe firewall was reinforced with aluminum brackets and angles bolted between the longerons in anticipation of the 200 lb Subaru EA-81 engine installation. His 100 HP Asian version is outfitted with an American Holley 5200 carburetor and manifold. It uses a PSRU of Les's own design, featuring two spur gears with a 1.69:1 reduction ratio and a toothed belt. Other than tapping the crank for larger bolts to mount the redrive, no other engine modifications were required. Also, this is probably the only air-conditioned KR2 on the planet. The prop is a 60/63 Hegy.\nOriginally built as a taildragger, the fixed gear is made from 4130 steel tubing. Custom-cast 6.00x6 aluminum wheels and steel rotors are mated with 6\" Cleveland calipers for braking. An early taxi test accident damaged the main gear, and prompted Les to change to tricycle gear. Again, he designed his own fiberglass main gear, and uses a Diehl nose wheel fork with a 4130 strut and 6\" wheel up front.\nEarly tests revealed cooling problems, which prompted a radiator move from the firewall to a lower cowling location.\nThe first flight was almost a disaster, as test pilot Randy Smith lost power right after takeoff. He managed a 180 with a safe downwind landing with only minor nosewheel pant damage. The culprit proved to be a spark plug with too much reach, which was quickly remedied. Subsequent flights have shown water temp to be about 210 degrees, oil temp 220-230, and airspeed about 180 mph.\nShopping for the Partially Built KR.\nThis story starts about twenty years ago when I first started looking at the KR-2 as the plane I'd like to build. The only problem at that time was a lack of money, lack of knowledge, and a lack of job stability. 
I liked the design, except for the low ground clearance of the retractable gear and that a KR was going to be a tight fit for me to fly.\nOver the past twenty years I've owned a number of planes, but still always wanted to build my own. I needed one that would fit me, my budget requirements, and have the speed and performance that I wanted. When \"KITPLANES\" published the article featuring Roy Marsh's new KR-2S, it was the first I had heard of any major modifications or improvements to the same old KR design. I believe that article and Roy Marsh's workmanship have probably been the greatest boon to Rand Robinson (RR) in the last twenty years. It certainly caught my eye! Here was the same design I had decided I wanted to build twenty years ago, with all of the improvements I wanted. It was sitting on fixed gear with some reasonable ground clearance. It had the capability to be built large enough to accommodate me. It has enough prefab parts available that it didn't have to be 100% scratch built if I decided to hurry the project along. And it had the speed I wanted. I knew that Roy's published speeds were probably not realistic expectations for the average KR, but after knocking around for the last three years in my Champ, anything over 90 mph seems pretty fast to me.\nAfter purchasing the info kit and the sales video from Rand Robinson, the next step after deciding for sure to build this plane was to order the KR-2 plans and the KR-2S addendum. I finally got my plans and was putting together my first order to start the plane, when my partner in the Champ pointed out that there was a partially completed KR-2S for sale in Trade-a-plane. My initial answer was \"No, I don't even want to look at it. I want to build my own from scratch.\" My partner insisted that for the advertised price and the fact that it wasn't too far away, I ought to at least give the guy a call and investigate it. \"No, I don't think I want to buy someone else's problems,\" I persisted. That night I went home and crunched up some numbers on the calculator and finally came to the conclusion that for the sake of my budget for the next several years, I really should give this guy a call.\nThree days later, I flew to his place about 400 miles away to take a look at his project. At this point I should probably mention that I consider myself to be fairly knowledgeable about airplane construction, although the vast majority of my experience is with tube and fabric. The rest of this article deals with what I looked for and more importantly what I missed and have had to repair in the last year since I purchased the project.\nWhen we went to the seller's house, I found that the left wing was built using the Dan Diehl wing skins and the right wing skins were leaning against the wall inside the house. Also the canopy was in the house with the canopy covered with paper and tape. I wanted to inspect the fuselage first, so off we went to the shop.\nThere I found a fuselage sitting on it's gear painted in primer gray. The first step was to inspect the quality of workmanship of what could be seen as it sat. The interior of the fuselage looked as if it had been built with a great deal of care. The fit and finish of all of the interior wood was very nice. Even the gussets looked like they had been painstakingly perfectly fitted. The glass work on the turtle back also looked very precise and clean. It was evenly faired into the vertical and horizontal stabs. 
The tail also appeared to be well built with the exception of a depression directly over the front and rear spars in the horizontal stabs. He explained that when he moved recently, that he had shot the plane with gray primer to protect it from the weather since he wouldn't have ready access to a shop to put it in right away. It ended up sitting out in the hot south Texas summer sun for a few weeks before he got a shop rented to work in. That caused the glass (or possibly the foam inside the horizontal stab) to swell, except that it held onto the spar, so it was slightly ballooned in front of and behind the spars. His recommendation was to fill it back smooth with micro.\nI also found a small linear crack in the lower left wing spar cap on the left wing stub. It appeared to be from over tightening the rear spar wing attach fitting bolts. His explanation was that the crack wasn't important because the rear spars only job is to keep the wings from folding back. I also noticed that the holes for attaching the outer wing to the wing stub were badly rounded out on the rear spar. He explained that the Diehl wing skins require the rear spar to be swept slightly more forward than the stock wings. This won't allow you to use the rear spar attach fittings from RR and that I would need to fabricate a new set of rear spar attach fittings.\nI also found that the aileron bellcranks were not built or installed as per plans, but found that they looked professional. I couldn't check for function since the right bellcrank and sheeve wasn't installed, the left wing also wasn't installed, and the right wing didn't exist yet.\nNext we pulled the inspection panels off of the fuselage and tail and looked at everything I could see with a good flashlight. I didn't find anything else that might be questionable about the fuselage except for a cracked elevator trim tab that was damaged when it fell off it's hanging place on the wall.\nNext we spent some time going over his builders log and builders photo album. I still hadn't seen anything that would dissuade me from buying this project.\nAt this point it was starting to get late and my ride down needed to get airborne for the flight home. I needed to make a decision about whether I wanted this project or not, but I hadn't inspected the wings and canopy yet. I took a cursory look at the left wing and saw lots on micro built up on it and some bubbles in the leading edge, but nothing that looked seriously wrong to my amateur eye. The right wing was only a set of spars in the shop and the Diehl wing skins in the house, so there wasn't much to look at there. The canopy was wrapped in paper and tape, so there wasn't much to look at there either. I decided that even if there were serious problems in the wing that was built, I would be money ahead to go ahead and buy the project. For the advertised price, I could build a new set of wings and still be way ahead financially. We negotiated a final price, shook hands, took my ride to the airport, and started off in search of a U-haul to haul the project home.\nNow, at this point, some of you are thinking about what I surely must have forgotten to inspect and why didn't I take a local A & P or EAA member along for the ride. First of all, I don't know any mechanics locally that have any experience with glass and our EAA chapter of which I am VP is woefully lacking in fiberglass knowledge. Secondly, as you will see, I missed plenty. 
Some by ignorance, some by just not looking close enough.\nNow for a list of the problems that I found over the last year and a few of the fixes that I came up with.\nI found that the lower set of rear spar attach fittings on the left rear spar were installed backwards with the longer spaced hole towards the fuselage. Since this is the same place that also had the cracked spar cap, it required a major change. Also in the same area he had drilled through the rear spar with a hole saw to create a place for the aileron cable to pass through and managed to cut out the second from the outside vertical brace in the spar. Then he chose to install the aileron bellcranks in front of the rear spar, and cut another hole through the rear spar for the aileron push rod. He also managed to cut out the outside vertical brace in the spar. Since the holes were already drilled through the spar, the choices were to either cut out that section of spar cap and scarf a new piece in, cut the whole rear spar carrythrough out of the fuselage including ruining the left lower wing skin, or do something else creative to reinforce the spar cap and install a custom built set of attach fittings.\nI also found that after I built and installed the right side wing stub ribs and skin that the aileron bellcrank setup would not work as installed. The cable that crosses between the two bellcranks had a sharp uphill from the sheeve to the bellcrank in the last 12 inches on either side. This combined with the radius that the bellcranks turn caused the cross cable to pull up tight when the ailerons were pushed to either end of their travel, but allowed the cables to go very slack when the ailerons were centered. Also the Aileron pushrods needed to pass directly through the lower set of rear wing attach fittings to attach to the aileron. This whole rear spar and aileron bellcrank setup was going to either have to be redesigned or cut out and built to plans. The bottom line is that the problems I observed when I inspected this part were much more serious than expected when I had to fix it.\nI decided that I had to remove the rear fittings from the left wing to be replaced with the new set that my neighborhood machinist was cutting out for me. When I put the wing on the work bench to start removing the rear fittings, I thought I had better take a closer look at the bubbles in the leading edge. I found that as I pushed on the leading edge, it delaminated between the glass lay-up on top and the upper and lower wing skin edges that were floxed together underneath. I concluded that that area had to come apart and took a belt sander to the leading edge. What I found was that the leading edge had been floxed together and glassed over, but the mold release had never been scrubbed off the leading edge of the wing. It peeled apart for rebuild quite easily.\nWhen I got back to removing the rear spar attach fittings, I noticed that the woodwork inside the wing looked awfully dull. The reason was that the wing had been closed up without varnishing any of the woodwork. This was rectified with a small hole saw, a number of extensions and a modified undercoating sprayer.\nI also found that the aluminum drain fitting in the bottom of the left wing tank had been glassed into place upside down. The tapered pipe threads were tapered the wrong way to install the draincock into the tank. 
Retapping the fitting the right direction seemed to be a good fix for that problem.\nWhen I finally got around to attaching the wing to the fuselage, I found that the front spar attach fittings were badly misaligned. Although they could be forced into alignment, I didn't think I needed that kind of preload on the main spar fittings. This problem was fixed by calling on my local neighborhood machinist to build me an aligning fixture and reaming the attach holes to the next larger size and ordering the new sized bolts.\nOn the fuselage I found that although it had new Cleveland wheels and brakes on it, one of the brakes had a severe wobble to it. I must complement the manufacturers for taking care of that problem. One call to the Cleveland factory and they shipped me a new set of wheels and brakes even though the receipt for this set was over four years old and in the original builders name. Their only concern was that this set had never been placed in service yet.\nI chose to sand the load of micro off the left wing to see what it was covering. When I got down to the glass, I found that there was no glass for the aft inch and a half of the underside of the wing in front of the aileron hinge. With the Diehl wing skins, you build the wings, then cut the ailerons out of trailing edge of the wing. He had mismeasured and cut too much material off the bottom side of the trailing edge in front of the aileron. It was filled by floxing a piece of spruce into the gap to fill the space between the back edge of the fiberglass and the aileron mount. I chose to wrap the trailing edge of that wing, and the other wing to match with a couple of lay-ups of glass.\nWhen I sanded the primer off the aforementioned damaged trim tab, I found that the hinge was floxed to the leading edge of the foam insides of the tab, but not the glass. I also chose to wrap the front of the trim tab with a lay-up of glass.\nI decided to pull the paper off the canopy and take a look at it before I'm ready to bolt it on and fly. The original builder had blown his own canopy and after some of the previous problems, I was beginning to have some concerns about not having looked it over closely enough. The canopy turned out to have been blow a little too large. It ended up with a little larger bubble for headroom, which I didn't object to. However, it had more headroom on the right side than the left. Yes, it was just a little bit lopsided. The main problem was that the canopy is stretched thin enough that it can be easily pushed in with one hand when the weather is warm.. My fear was that this is just thin enough that it may decide to lay on my head or in my lap when flying on a warm day. It will have to be replaced.\nI'm sure that many that are reading this could see several of the potential problems before I mentioned them, but some others may not have and I'm sure that there could have been many other problems that didn't but could have existed on this project. This is also not intended to be critical of the gentleman that started this project as many parts of it, especially the wood work are better than I could have done and much of his work is outstanding. I prefer to think that I'll end up with a better plane with his woodwork combined with my glasswork. This article is intended to feature some of the problems that you may run into in buying someone else's project.\nThe final question is, knowing what I have found over the past year, would I have still purchased this project. 
The answer is yes, but primarily because the price was right in that I am still money and work ahead of where I would be if I had started the project from scratch. There are a few things that I would have done differently, but nothing that I can't live with. Although I won't be able to say that I built it all from scratch, I have built and rebuild enough of the plane that I should have no problem qualifying under the 51% rule.\nYou can send comments directly to the author via e-mail at \"jscott@LANL.GOV\".\nHere is an brief explanation of how I built my turtledecks. The jig was constructed from scrap plywood and a few 1x4s that I ripped into stringers. I made two temporary bulkheads from the plywood, one for each end. Remember the forward bulkhead needs to be shaped in a way that will closely match the aft end of your canopy frame. Make an aft bulkhead by placing a straight edge at the top of your forward bulkhead and the trailing edge of your horizontal stabilizer. This will give you an idea of how tall your aft bulkhead needs to be. As far as location, I placed my aft bulkhead just forward of the lower/front of my vertical fin. I constructed the jig on the fuselage, it is glued together with automotive bondo.\nAfter the bulkheads were bondoed to the fuselage I used the stringers that I ripped from the 1x4s and bondoed them to the bulkheads. This gave me a male form to cover with thin plastic or posterboard. I stapled two layers of posterboard to the jig(thin plastic would work better). The posterboard wraps down two inches onto the fuselage. After I was satisfied with the way it looked, I then covered the entire thing with duct tape (fiberglass will not stick to duct tape) On top of this I wetout one layer of tri-ply cloth (22oz) that I had left over from an earlier project, and one layer of 8oz. bid. Remember to mask off your fuselage so you don't get epoxy on it. If you are not familiar with composite lay-ups, you should plan on razor cutting your lay-ups 4 to 6 hours after wetout while the lay-up is still soft enough to cut with a razorblade.\nAfter the lay-up cured (2 or 3 days) it was removed from the jig, and the jig was removed from the fuselage and discarded. (be careful, the bondo sticks very well to the spruce, you could splinter your wood during removal) I now have a fiberglass skin that tends to hold the shape of the jig but is still flexible enough to work with. I made two bulkheads out of 1/4 last-a-foam (AS&S) using the plywood formers from the jig as a guide. I covered these foam bulkheads with one 8oz layer of glass on each side, with a glass to glass edge on the bottom. After cure these bulkheads were bondoed into place (to the fuselage)and the fiberglass skin was pulled down tight and floxed to the bulkheads. When the flox cured the bondo joints were broken, again being careful not to harm the wood. The turtledeck was removed from the fuselage and 2 inch tapes added to the bulkheads inside and out.\nAt this point the turtledeck looked great and only weighed about 5lbs. but I noticed you could deform the skin by pushing hard on the outside. So I flipped the turtledeck over and from 1/4 inch last-a-foam, I cut two inch wide strips that would run the entire length, forward and aft inside the turtledeck. In effect these would act as composite stringers, I made enough of these two inch wide strips to make up three stringers. One down the center (sort of a backbone) and one on each side of the \"backbone\" half the distance to the edge of the turtledeck. 
I sanded the edge of the foam so that when covered with a layer of bid @ 45degrees there would be a nice transition from the turtledeck skin up onto the foam and then back onto the turtledeck I scuff sanded and glued the foam stringers in with micro. I covered the foam stringers with one layer of 8oz bid @ 45degrees.", "answers": ["The longerons bow up from the building surface, forming a \"banana\" shape."], "length": 6240, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "9ac60cf60d4bae1dc93b09f374b74b3dec8b1b333397d7cd"} {"input": "What is the effect of accounting for path preference on the robot's belief update?", "context": "Paper Info\n\nTitle: Incorporating Human Path Preferences in Robot Navigation with Minimal Interventions\nPublish Date: 16 Mar 2023\nAuthor List: Oriana Peltzer, Dylan Asmar, Mac Schwager, Mykel Kochenderfer\n\nFigure\n\nHyperplane arrangement of a twodimensional space containing two obstacles (colored in gray).The robot is located inside the pink polytope, surrounded by three adjacent obstacle-free polytopes.Each hyperplane on the boundary of the robot's polytope corresponds to one of the nonredundant constraints in eq.(4).(b)Graph derived from the hyperplane arrangement.The nodes on the graph designate polytopes, and edges designate transitions to adjacent polytopes.To estimate the human's preference, the robot updates a posterior over the goal and over which of the graph transitions φ 1 , φ 2 and φ 3 is preferred by the human.(c)Example preference defined over the graph.The location of the goal is indicated in yellow in the lower right polytope.For each node, the outgoing pink arrow designates the edge on the graph corresponding to the preferred transition between polytopes.\nSimple, 10 × 10, 8 polytopes.(b) Map 2: Office, 10 × 10, 56 polytopes.(c) Map 3: Classroom, 20 × 20, 73 polytopes.(d) Sampled observations and robot's executed trajectories.\nFig.5: Maps used for simulating the robot navigation problem with path preferences.In (d), the heading angles observed are indicated with arrows.The goal is indicated with a pink circle, and the orange robot corresponds to the starting location.The blue robot follows a policy that accounts for path preference, while the green robot does not.The opacity of the robots increases with time.\nMap 1 problem setup and example realizations for goal-only (green) and path preference (blue) solution methods.The robot starts at the lower left corner of the environment, and the goal of the task (pink circle) is in the upper left area.The robot does not know which goal, among 10 options (shown in light blue squares), is the correct goal.The human provides noisy observations, indicated by arrows, at each iteration.The green robot selects actions according to the goal-only baseline, and the blue robot uses our proposed method to infer path preferences.The polytopes composing G are drawn in blue.Probability of correct goal.WLPHVWHS +J (c) Entropy of goal distribution g.\nFig. 
6: Probability of the correct goal, fig.6b, and entropy of the goal belief distribution P (g), fig.6c, for the same problem setup, fig.6a.In this problem instance, the human's preference is to go to the goal by passing on the right side of the obstacle.Results are averaged over 50 runs and the area filled represents one standard deviation above and below the mean value.The goal-only baseline shows an over-confident prediction (shown by the strong reduction in belief entropy) that the correct goal is less likely, making it more difficult to reach the correct goal compared to a method that accounts for path preference.\nSuccess rates in the simple environment (Map 1).The results are averaged over 6 randomly sampled problem instances (start location, goal location, and goal possibilities), and over 50 runs per problem instance.∆T is the number of time steps separating two consecutive human inputs.The robot's mission time is Tmax = 30 time steps.We selected γ h = 1.5, corresponding to relatively noisy human inputs and making the problem more difficult to solve for the robot.\nComputation times for Goal Only and Path Preference methods on Map 1 (fig.5a),Map 2 (fig.5b), and Map 3 (fig.5c),averaged over 100 runs with randomly sampled problem instances.The 95 % confidence interval is provided with the mean.We evaluate computation time at the first iteration of each run (where the search depth takes on its highest value Tmax).\n\nabstract\n\nRobots that can effectively understand human intentions from actions are crucial for successful human-robot collaboration. In this work, we address the challenge of a robot navigating towards an unknown goal while also accounting for a human's preference for a particular path in the presence of obstacles.\nThis problem is particularly challenging when both the goal and path preference are unknown a priori. To overcome this challenge, we propose a method for encoding and inferring path preference online using a partitioning of the space into polytopes. Our approach enables joint inference over the goal and path preference using a stochastic observation model for the human.\nWe evaluate our method on an unknown-goal navigation problem with sparse human interventions, and find that it outperforms baseline approaches as the human's inputs become increasingly sparse. We find that the time required to update the robot's belief does not increase with the complexity of the environment, which makes our method suitable for online applications.\n\nINTRODUCTION\n\nCollaboration between humans and robots has become increasingly important and one key aspect of this collaboration is the ability for robots to adapt to human decisions. In many scenarios, such as a robot navigating through a busy room to deliver an item, it is important for the robot to take into account human preferences.\nFor instance, humans may prefer a specific path that would allow their colleagues to notice the item being delivered, but this preference may change dynamically based on various factors such as changes in the environment or unforeseen circumstances. While some preferences can be incorporated into the path-planning process, accommodating dynamic user preferences in real-time remains challenging.\nIn this paper, we propose a way to enable robots to adapt to human preferences dynamically by leveraging real-time feedback to inform decision-making. 
In this work, we tackle the problem of robot navigation in which the robot cannot observe the goal or the preferred path to the goal, but must make navigation decisions that are influenced by humans through recommended actions.\nPrior work has explored how to adapt to a human's preference through feedback, but such approaches often require a high level of intervention, which can be time-consuming and impractical in real-world scenarios. To optimize the use of human input and quickly infer the human's preference, Fig. : An autonomous robot navigates in a simulated classroom towards a goal location (pink circle).\nAt the start of its mission, it receives direction indications (arrows) from a human that indicate which path it should take to get to the goal. In this scenario, the human wants the robot to go around the desks on the right side of the classroom. A robot that does not reason over path preferences (green) will take the shortest path to the goal regardless of the human's input.\nOur method (blue) infers the human's path preference from these indications and adapts to their recommendations. we propose an approach that leverages probabilistic representations of human preference and incorporates real-time feedback. Previous research by Bajcsy et al. considered an online adaptation problem in a manipulation task, where the person can apply forces to the robot to indicate their preferences.\nBy allowing the robot to continue its task while taking into account a probabilistic representation of human preference, their approach does not require frequent inputs. Building on this idea, we adopt a similar approach to adapt to a human's preference in the context of a robot autonomously navigating through a known environment, such as a cluttered office space.\nSpecifically, we focus on allowing the human to influence the robot's trajectory with respect to obstacles, by providing guidance on preferred routes or paths, while the robot continues to execute its task. Paths can be represented using homotopy classes . However, homotopies can pose computational challenges when used to encode and infer human preferences.\nWhen the robot maintains a belief over homotopy classes, the inference problem can become exponentially complex with the number of obstacles in the space. Additionally, when the goal is unknown, the number of variables increases with the number of candidate destinations. This complexity can render the decision-making problem intractable.\nOur solution is to encode path preference based on a partitioning of the environment into polytopes . This representation allows path preferences to be expressed as sets of preferred transitions between adjacent polytopes. Paths belonging to different homotopy classes correspond to different sequences of transitions.\nBy leveraging conditional independence assumptions, we can make the Bayesian inference problem tractable. These assumptions exploit the fact that human actions provide information about the path in a piece-wise manner. For example, indicating a preference for navigating around a particular obstacle only provides information about the local area and not the entire path.\nFinally, after updating its belief representation over the human's preference, the robot can adapt to indications by replanning online. Our contributions are as follows. 
• We formulate the human-robot collaboration problem as a Partially Observable Markov Decision Process (POMDP) where both the goal of the task and the human's path preference are unknown random variables.\n• We propose an encoding of a human's path preference using a partitioning of the environment into polytopes, along with conditional independence assumptions that make the Bayesian inference problem tractable to infer the task goal and path preference online. • Through simulations in two environments of different sizes and complexity, we show that our method is effective for solving problems where the robot must reach a goal that is unknown a-priori while simultaneously adapting to a human's indications.\nOur method shows higher success rates compared to baseline approaches when the human inputs are sparse. Our approach enables a robot to make effective navigation decisions in collaboration with a human, even when the goal and path preference are not known in advance, and with minimal human input. In recent years, there has been a growing interest in shared autonomy and interactive systems, where humans and robots work together to accomplish tasks.\nSeveral approaches have been proposed to address the challenge of enabling effective collaboration between human and robot agents while still achieving high task performance. Losey et al. and Jeon, Losey, and Sadigh propose a framework where a human operator is given control of a task-relevant latent action space while an autonomous system handles the rest.\nDragan and Srinivasa present a formalism for arbitrating between a user's input and a robot's policy when both human and robot share control of the same action space. Cognetti et al. [7] provide a method for real-time modifications of a path, . . . Fig. : We model the intent inference problem with the above diagram.\nAt each step in time, the robot receives an observation ot from the human conditioned on its current location st, the intended goal g, and the human's path preference θ. The robot updates its belief over g and θ and transitions to a next location st+1. while Hagenow et al. present a method that allows an outside agent to modify key robot state variables and blends the changes with the original control.\nHowever, a common challenge of these approaches is the high level of intervention required from humans. Best and Fitch propose a method for predicting an agent's intended trajectory from observations. Rather than maintaining a belief over the agent's future path, they infer the agent's intended goal among a set of candidate locations at the boundary of the space.\nThis approach provides information on where the agent is heading and generates a distribution of candidate future trajectories for the agent. Inferring the goal of the task among a discrete set of candidates is also relevant to the area of shared autonomy. Javdani, Srinivasa, and Bagnell propose a formalism for shared control of a robotic arm, where the robot must assist the human in picking up an object but needs to infer which object the human has chosen from joystick inputs.\nPlanning with homotopy class constraints is useful in problems where the robot's requirements are given with respect to obstacles, and Yi, Goodrich, and Seppi consider topological constraints provided by human operators. 
Bhattacharya propose an efficient algorithm for solving pathplanning problems under homotopic constraints.\nHowever, the number of homotopy classes for a given problem can be infinite, and as the robot changes location and updates its representation of the world, carrying out inference over homotopy classes in a dynamic environment requires recomputing the set of homotopies at every iteration, making the belief update challenging.\nPrior work has addressed the challenge of shared autonomy by considering how robots can infer a human's intended goal, or how they can infer the preferred path to a goal. However, we argue that inferring the goal and the path as separate problems can lead to over-confidence in incorrect beliefs about the user's preferences.\nTo illustrate this point, consider the following scenario: a robot and a human are collaborating to move an object from one end of a room to Fig. : Using the hyperplanes composing the H-representation of each obstacle, we construct a hyperplane arrangement of the obstacle-free space (a). We define the human's preference for the robot's one step action choices as the posterior distribution (given all human input up to that point) over transitions from the current to the neighboring polytopes, i.e. edges on the graph.\nEach time the robot transitions to a new polytope, the set of neighbor polytopes and the distribution over human preferences are updated. another, but there is an obstacle in the way. The human would like the robot to take a path around the obstacle on the left, even though the goal is on the right. If the robot only infers the goal from the human's inputs, it may incorrectly assume that the goal is on the right, and become over-confident in this belief.\nOn the other hand, if the robot only infers the preferred path, it may mistakenly assume that the goal is on the left, leading to a failure in completing the task. To overcome these challenges, our work proposes a joint inference approach that considers both the human's intended goal and their preferred path to that goal.\nSpecifically, we model the human's preference over different homotopy classes and leverage a conditional independence assumption to provide a tractable solution. In our approach, we assume that the human's inputs are noisily rational conditioned on both the goal and the preference. By jointly inferring the goal and path preference, we can avoid over-confidence in incorrect beliefs about the user's preferences, leading to improved system performance.\nWe consider the problem of robot navigation in a known environment to an unknown destination, where a human can intervene and provide a heading direction to the robot using a joystick or force cues. The human also has a preference on which path the robot should take with respect to obstacles, and our objective is for the robot to understand the human's intentions and execute the task with minimal interventions.\nLet g be a discrete random variable denoting the goal of the task, belonging to a set of candidates Ω g , and let θ be a discrete-valued random variable representing the human's path preference, belonging to a set of possible preferences Θ. The physical location of the robot at time index t is denoted by s t ∈ R 2 , and the robot's action at time index t, belonging to some action space A, is denoted by a t .\nThe transition model T (s t+1 | s t , a t ) is deterministic, meaning the robot has full control over its future location. At any time step, the human may provide an observation to the robot. 
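To fix notation, here is a minimal sketch of these problem variables (our own illustration; the grid representation and all names are assumptions, not from the paper):

from dataclasses import dataclass
from typing import FrozenSet, Tuple

Cell = Tuple[int, int]                 # a robot location s_t, discretized

@dataclass(frozen=True)
class NavigationProblem:
    goal_candidates: FrozenSet[Cell]   # Omega_g: the discrete set of candidate goals g
    preferences: frozenset             # Theta: the discrete set of possible path preferences theta
    actions: Tuple[Cell, ...]          # action space A, e.g. 8-connected (dx, dy) steps

def transition(s: Cell, a: Cell) -> Cell:
    # Deterministic transition model T(s_{t+1} | s_t, a_t): the robot fully controls
    # its next location.
    return (s[0] + a[0], s[1] + a[1])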
When the human intervenes, the robot receives a direction (heading angle) that can be mapped to a future location in space.\nMore specifically, we map the direction to an intended location, which is the resulting robot location after advancing in the indicated direction for one time step. For simplicity, we consider that the robot directly makes an observation o_t of the location indicated by the human. We assume that the robot has a stochastic observation model for the human, P(o_t | s_t, g, θ), that is conditioned on both the goal of the task g and the human's preferred path θ.\nWe further assume that, having chosen a goal and path preference, the human takes actions to noisily minimize a cost function C_g,θ that measures the cost of moving from the robot's current location to the goal along the preferred path. For example, C_g,θ(s_t, o_t) can be the length of the shortest path from location s_t to the goal g after taking a first step to o_t, constrained by path preference θ.\nWe use C_g,θ to induce a probability distribution over observations, given by P(o_t | s_t, g, θ) ∝ exp(−γ_h C_g,θ(s_t, o_t)), (1) where γ_h is a hyperparameter that designates the rationality coefficient. This model assumes the human picks the lowest-cost action with the highest probability, and that the likelihood of an action decreases exponentially as its cost increases. Our inclusion of the path preference θ sets our approach apart from prior formulations that condition only on the goal. The model is represented as a Bayesian network in the intent-inference diagram above.\n\nInference\n\nAt each time step where the human provides an observation, the posterior over (g, θ) is given through the Bayesian update P(g, θ | o_t, s_t) ∝ P(o_t | s_t, g, θ) P(g, θ), (2) where P(g, θ) is the belief held before the observation. We note that the number of Bayesian updates required at each iteration to update the belief is equal to the cardinality of Ω_g × Θ. In addition, each Bayesian update involves computing C_g,θ(·, ·) in eq. (1), which involves solving an optimization problem (such as a shortest path problem). In section IV, we propose a specific encoding of preference θ for resolving eq. (2) while ensuring that the number of computations of the cost C_g,θ(·, ·) per update does not grow exponentially with the number of obstacles.\n\nDecision Making\n\nWe consider a navigation problem where the robot receives reward according to the model R(s_t, g, θ, a_t). We wish to find the optimal policy π* = argmax_π E[Σ_t γ^t R(s_t, g, θ, a_t)], i.e. the policy that maximizes the expected discounted sum of future rewards, with discount factor γ. The above problem is a Partially Observable Markov Decision Process (POMDP).\nIn this section, we propose an encoding of the human's path preference θ for computing the posterior in eq. (2). Deviating from the concept of homotopy classes, we define the preference according to a partitioning of the environment into polytopes, as shown in the hyperplane-arrangement figure, creating a hyperplane arrangement of the space.\nHyperplane arrangements have been used by Vincent and Schwager in the context of neural network verification. In our setting, we leverage this representation to define path preferences as preferred transitions between adjacent regions of the space.\n\nHyperplane Arrangement\n\nWe assume a two-dimensional environment composed of m polytopic obstacles, each defined by its half-space representation (H-representation) O_i = { x ∈ R^2 : A_i x ≤ b_i }, (3) where A_i ∈ R^(d_i×2) and b_i ∈ R^(d_i), and where d_i is the number of edges (hyperplanes) composing obstacle polytope i. Let n = Σ_i d_i be the total number of hyperplanes. We leverage each obstacle's H-representation to construct a hyperplane arrangement of the environment, as shown in the hyperplane-arrangement figure; the sign-vector labeling this induces is sketched below.
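To make this construction concrete, here is a minimal sketch (our own illustration; function and variable names are not from the paper) of how the stacked obstacle hyperplanes can label an obstacle-free point with the sign vector that identifies its region of the arrangement, anticipating the α vectors defined next:

import numpy as np

def stack_obstacles(obstacles):
    # Stack the H-representations O_i = {x : A_i x <= b_i} into a single (A, b) with n rows total.
    A = np.vstack([A_i for A_i, b_i in obstacles])        # shape (n, 2)
    b = np.concatenate([b_i for A_i, b_i in obstacles])   # shape (n,)
    return A, b

def sign_vector(x, A, b):
    # alpha(x) in {-1, +1}^n: which side of each of the n hyperplanes the point x lies on.
    # Two obstacle-free points lie in the same polytope of the arrangement exactly when
    # their sign vectors agree.
    return np.where(A @ np.asarray(x, dtype=float) - b <= 0.0, -1, 1)

For example, comparing sign_vector(p1, A, b) and sign_vector(p2, A, b) elementwise tells whether p1 and p2 fall in the same polytope.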
More specifically, each location in space belongs to a polytope j for which we can write an H-representation of the form P_j = { x ∈ R^2 : diag(α^j_i)(A_i x − b_i) ≤ 0 for i = 1, …, m }, where α^j_i ∈ {−1, 1}^(d_i) is a vector specific to polytope j and obstacle i, corresponding to the relative position of any point in the set with respect to each hyperplane in O_i.\nFig. : Intent inference model in a hyperplane arrangement of the obstacle-free space. We spatially decompose the preference θ into a set of preferred neighboring polytopes per region of the space. Within each polytope j, the human preference p_j is a discrete distribution over the preferred neighbor in N(j). We assume that for a location s_t belonging to polytope j, and given goal g and preference p_j, the observation o_t and any other preference p_i, i ≠ j, are conditionally independent.\nConcatenating elements from each obstacle's H-representation, we can write polytope j's H-representation as P_j = { x ∈ R^2 : diag(α^j)(A x − b) ≤ 0 }, (4) where A ∈ R^(n×2) and b ∈ R^n stack the A_i and b_i, and α^j ∈ {−1, 1}^n stacks the α^j_i. Some of the constraints in eq. (4) (corresponding to rows of A, b and α^j) are redundant, i.e. the set P_j does not change upon their removal.\nWe can further reduce the H-representation of a polytope to include only non-redundant constraints. By removing the rows corresponding to redundant constraints, we obtain new matrices A^j_e, b^j_e and α^j_e such that we can write the polytope's reduced H-representation as P_j = { x ∈ R^2 : diag(α^j_e)(A^j_e x − b^j_e) ≤ 0 }. (5) The non-redundant constraints correspond to edges of the polytope.\nIn other words, as the robot continually moves in space, the first hyperplane that it will cross upon exiting the polytope will correspond to one of the polytope's non-redundant constraints. Vincent and Schwager outline an iterative method for removing redundant constraints by solving n linear programs.\nWe use this method in practice for computing α^j_e for each polytope. We can now characterize each polytope by a vector α^j_e ∈ {−1, 1}^(n^j_e), where n^j_e ≤ n is the number of essential constraints of the polytope. The polytopes P_j partition the environment into a hyperplane arrangement.\n\nPath Preference\n\nIn this section, we provide a definition of preference θ according to a graphical representation of the environment based on the hyperplane arrangement. Under this representation, a path preference corresponds to a set of preferred transitions. In other words, for each polytope in the space, the human has a preference for which neighboring polytope they wish to transition to.\nLet G := (V, E) be an undirected graph, where vertices are obstacle-free polytopes, and edges connect two adjacent polytopes. Each polytope is described by a unique vector α^j as defined in eq. (4). Two polytopes are adjacent if they share non-redundant constraints (rows in eq. (5)) corresponding to the same hyperplane (i.e. they are on opposite sides of that hyperplane).\nLet N(v) be the set of neighbors of a vertex v. For each vertex, we denote by p_v the discrete-valued random variable describing which edge in N(v) the human intends to transition to. Using this formalism, we define a path preference as the set of preferred transitions over all nodes in the graph, θ := { p_v } for v ∈ V. (6) Let m_θ = Π_(v∈V) |N(v)| be the cardinality of Θ, and m_g = |Ω_g| the number of possible goals.\nA priori, the number of Bayesian updates required to update the belief at every iteration should be m_θ × m_g. Now, let us assume the conditional independence relationships described by the updated intent-inference diagram captioned above; a sketch of the resulting factored update is given below.
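As a minimal sketch of the resulting factored update (our own illustration, assuming the softmax observation model of eq. (1); the preference-constrained cost routine is kept abstract here and sketched after the experiments' cost definition below):

import numpy as np

def update_belief(belief, s, o, goals, outgoing_edges, gamma_h, cost_fn):
    # Bayes update of the joint belief over (goal g, local preference p_v) for the robot's
    # current vertex v. Only |N(v)| x m_g hypotheses are touched, so the cost of one update
    # does not grow with the total number of polytopes in the map.
    #   belief:         dict mapping (g, p_v) -> prior probability
    #   outgoing_edges: candidate preferred transitions p_v out of the current polytope
    #   cost_fn(s, o, g, p_v): C_{g,theta}(s_t, o_t), e.g. a preference-constrained
    #                          shortest-path length
    posterior = {}
    for g in goals:
        for p_v in outgoing_edges:
            likelihood = np.exp(-gamma_h * cost_fn(s, o, g, p_v))   # eq. (1)
            posterior[(g, p_v)] = belief[(g, p_v)] * likelihood     # eq. (2), unnormalized
    z = sum(posterior.values())
    return {k: v / z for k, v in posterior.items()}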
More specifically, we introduce the assumption that, conditioned on a robot location s_t, the goal g, and the preference p_v for the corresponding vertex in the graph, the observation o_t and the preference for any other vertex are conditionally independent.\nIn other words, the observations the human provides can be defined conditioned only on the robot location, the goal, and the human's preference for its current vertex p_v. By introducing this assumption, each update step only requires updating the joint (p_v, g), reducing the number of cost computations to |N(v)| × m_g.\nWe can notice that by introducing this assumption, we removed the direct relationship between the number of polytopes in the environment and the complexity of the Bayesian update in eq. (2). In practice, components of θ are not mutually independent. For example, if the human preference at a vertex v_1 is p_v1 = (v_1, v_2), it is unlikely that the human will also prefer p_v2 = (v_2, v_1) (turning back). We can improve our model by assuming a dependent relationship between preferences for adjacent edges, which does not significantly increase the complexity of the inference problem. An interesting property of our encoding is that any two paths that belong to different homotopy classes will cross different sequences of polytopes, i.e. they correspond to different sequences of edges on G.\nThis can be proved by contradiction. Let us suppose that two continuous trajectories ξ_1 and ξ_2, with the same start and end points and that do not intersect any obstacle, traverse the same regions in G in the same order. From the construction of the hyperplane arrangement, each polytope that the paths traverse is obstacle-free.\nTherefore, within each polytope, there is no obstacle in the area located between the portions of ξ_1 and ξ_2 that belong to the region. A smooth transformation of ξ_1 into ξ_2 can be obtained by transforming each portion of ξ_1 belonging to the polytopes it intersects into the corresponding portion of ξ_2 for the same polytopes, where the extremities of the trajectory portions are connected to one another along the polytope's edges (where the same edge is crossed by both paths).\nAlong this transformation, the paths do not intersect any obstacle, and therefore ξ_1 and ξ_2 belong to the same homotopy class.\n\nEXPERIMENTS\n\nWe evaluate our model on a simulated navigation task where the robot must reach a goal that is unknown a priori while respecting the path preferences indicated by a human. The robot navigates in a grid world containing obstacles. The transition model is deterministic: the robot selects an adjacent location on the grid to reach at the next time step.\nThe robot is also allowed to take diagonal actions. Each location s_t in the map can be mapped to a vertex v_t ∈ G. Therefore, the actions leading to locations mapped to different vertices correspond to edges on the graph. We denote by f(s_t, a_t) the edge crossed by taking action a_t from location s_t.\nThe robot is given a mission time limit T_max for reaching the goal. In this problem, we assume that the human selects actions to noisily minimize a cost function C_g,θ, where θ is defined as per eq. (6), corresponding to the length of the shortest path to the goal constrained by the preference (where the robot is only allowed to make transitions on G along preferred edges).\nMore specifically, C_g,θ(s_t, o_t) = δ(s_t, g | o_t, p_vt), (7) where δ(s_t, g | o_t, p_vt) designates the length of the shortest path from s_t to g passing through o_t and constrained by preference p_vt; one way to compute this quantity is sketched below.
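As a sketch of one way to compute this constrained cost (our own illustration: a uniform-cost search over grid cells, where moves that exit the current polytope through a non-preferred edge are pruned; the authors use A* with pruned transitions, and their exact implementation is not given here):

import heapq

def constrained_path_length(start, goal, neighbors_fn, edge_fn, home_vertex, preferred_edge):
    # delta(start, goal | ., p_v): shortest grid-path length where any move that exits the
    # polytope `home_vertex` must do so through `preferred_edge`; all other moves are allowed.
    #   neighbors_fn(s) yields (s_next, step_cost) pairs (e.g. 8-connected grid moves)
    #   edge_fn(s, s_next) returns the polytope transition (v_from, v_to) crossed by the move,
    #   or None if both cells lie in the same polytope -- the f(s_t, a_t) of the text
    frontier = [(0.0, start)]
    best = {start: 0.0}
    while frontier:
        cost, s = heapq.heappop(frontier)
        if s == goal:
            return cost
        if cost > best.get(s, float("inf")):
            continue
        for s_next, step_cost in neighbors_fn(s):
            edge = edge_fn(s, s_next)
            if edge is not None and edge[0] == home_vertex and edge != preferred_edge:
                continue  # prune: leaving the current polytope along a non-preferred edge
            new_cost = cost + step_cost
            if new_cost < best.get(s_next, float("inf")):
                best[s_next] = new_cost
                heapq.heappush(frontier, (new_cost, s_next))
    return float("inf")  # goal unreachable under the preference constraint

The cost of eq. (7) then chains two such searches, from s_t to o_t and from o_t to g, so that the path is forced to pass through the observed location.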
This is a slight variant of the cost function proposed by Best and Fitch , where we add in a conditioning on the path preference. We compute costs by running the A path planning algorithm on the environment maps (grid worlds with diagonal actions) and impose preference constraints by pruning invalid transitions from the search tree.\nReward model. At each step in time, the robot receives a reward which is a sum of three components: a goal-specific reward a preference-specific reward or penalty We compute solutions to the POMDP defined in section III-B with the online solver POMCP , and with the particularity that within the rollouts, the robot does not expect to collect human inputs.\nEach time a solution is computed, the robot takes an action and may receive an observation. If it does, it updates its belief distribution over the unknown problem variables and resolves the POMDP over a receding horizon.\n\nBaselines\n\n• Goal only. The robot solves the POMDP while ignoring the effects of path preference. Similarly to , we assume the human is taking action to minimize a goaldependent cost C g (s t , o t ) = δ(s t , g | o t ), where the conditioning on the preference is removed. We also omit the path preference's contribution to the reward R pref .\n• Compliant. The robot complies with the human input, but does not take an initiative. If the user stops providing information, the robot continues in the last direction indicated for 5 time steps (conserving its momentum), then stops. • Blended. We designed an arbitration function to decide between our proposed policy (accounting for path preferences) and the user's recommendation when the robot receives inputs.\nOur metric to evaluate confidence in the robot's prediction for the purpose of arbitration is the entropy of the intention distribution H(g, p i ), where p i denotes the preferred neighbor for the current region. Because our representation of the world is discrete, the arbitration is given by a step function.\nDenoted by U , the action corresponding to the human's input, and P , the robot's prediction for the optimal action, we write the policy where we chose h = 1.6 as the confidence threshold.\n\nResults\n\nWhen evaluating the algorithm, we consider that a run is successful if the robot reached the goal within its allocated mission time T max and only made transitions between graph vertices corresponding to the human's preferences. We vary the time delay between human inputs, from constant guidance (∆ T = 1) to only a single observation (∆ T ≥ T max ).\nSuccess rates. Table I reports the success rates for experiments conducted over six randomly sampled problem instances and 50 runs per instance in Map 1 (fig. ). When the human provides inputs at every iteration, the compliant policy shows the highest success rates. However, as ∆ T increases, the compliant robot is not able to accomplish the task within the allotted time as it does not receive sufficient inputs to do so, and performance decreases compared to the autonomous baselines.\nWe find that in these runs, accounting for path preference consistently improves performance compared with the goal-only baseline. Results also show that blending the user's input with the robot's policy (Path Preference + Blend) when the human provides information leads to improved performance. Belief entropy.\nFigure shows a challenging problem instance where the directions the human provides do not align directly with the shortest path to the goal. 
By ignoring the effects of preferences in the problem model (goal only), the robot quickly infers from observations that the upper left goal is less likely than others (P (g) drops).\nThe strong decrease in entropy shows that the robot becomes overconfident in this prediction. Overconfidence in an incorrect goal will prevent the agent from finding the correct goal once the human's indications directly align with it, as it needs to correct for the wrong predictions, as shown in the path realization (fig.\n). In this realization, the goal-only method (green robot) fails to search the upper left area within the allotted time. By accounting for path preferences in its model, the blue robot's entropy over the goal distribution decreases more steadily, allowing for it to leverage the human's latest observations and reach the goal successfully.\nshows an over-confident prediction (shown by the strong reduction in belief entropy) that the correct goal is less likely, making it more difficult to reach the correct goal compared to a method that accounts for path preference. Computation time. In table II we provide the time required to solve the POMDP, and the time required to update the robot's belief as it receives new observations.\nWe compute solutions on three maps: a simple 10 × 10 grid world with 8 polytopes (fig. ), a 10 × 10 grid world with 56 polytopes (fig. ), and a 20×20 grid world with 73 polytopes (fig. ). The latter environment being larger, we increase the mission time and the depth of the search tree in POMCP from T max = 30 (Map 1 and Map 2) to T max = 60 (Map 3).\nWe do not notice an increase in the time required to update the robot's belief with an increase in problem complexity, which is consistent with our observation that the complexity of the Bayesian update should not increase with the number of obstacles or polytopes. On the contrary, the belief update time on Map 2 and Map 3, containing more obstacles, is reduced compared to the first map.\nMore obstacles result in fewer iterations when solving the constrained shortest path problem with A . Adding constraints due to the obstacles and polytopes reduces the size of the A search tree. C. Limitations Simulation environments. In our simulations, we hardcoded the preference policy over the maps (e.g. in Map 1, go around the table counter-clockwise).\nWe randomly sampled problem instances (start and goal locations, and goal options) to reduce the bias introduced by these preference choices. To best evaluate and compare the different approaches, it would be best to sample preferences among a distribution of preferences chosen by a human (for example, from benchmarks resulting from a collection of data).\nCreating such a benchmark is an interesting direction for future work. Hyperplane arrangement construction. The main limitation of our approach is that the size and geometry of each polytope depends strongly on the geometry of the obstacles, as seen in fig. . Because of this, the robot can make predictions over preferences that are too refined compared with the topology of the environment.\nA direct consequence is that when the size of the polytopes is small, the information provided by the human can be incorrectly interpreted as a preference on the robot's immediate action. 
Our method can be improved by changing the structure of the hyperplane arrangement so that it relies on the topology of the environment, but does not vary strongly with the geometry of the features in the environment.\nFor this purpose, topometric maps and region construction algorithms are promising directions. We presented an approach for encoding and inferring a human's path preference in an environment with obstacles. By leveraging a partitioning of the space into polytopes and a stochastic observation model, our method allows for joint inference over the goal and path preference even when both are unknown a-priori.\nOur experiments on an unknown-goal navigation problem with sparse human interventions demonstrate the effectiveness of our approach and its suitability for online applications. The time required to update the robot's belief does not increase with the complexity of the environment, which further highlights the practicality of our method.", "answers": ["The belief entropy decreases more steadily."], "length": 5655, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "f1b59c798ab323ea8409ba09709de506bd83699885ce61a5"} {"input": "How can players skip dialogue on the quest map?", "context": "Hey folks! Here is the shiny new Changelog thread. We're including the archived patch notes from the old forums, so that they are preserved for anyone that would like to reference back to them. We will continue to update this thread with new notes as the patches are released.\nChat is now accessible from the quest board, upgrade screen, and many other menus.\nTapping on objects and menus may reveal helpful hints about that object.\nTeam PI is now colored red if lower than recommended for the quest.\nMany text fixes and consistency improvements.\n• A new Basic Catalyst found in Special Events is used in every recipe!\nSeveral heroes have received improvements to their base stats.\nThe abilities of all Champions have increased in effectiveness.\nA new Critical Boost buff has been introduced.\nIron Fist and Spiderman now have the ability to Armor Break with their Critical Hits.\nDeadpool’s ability to Regenerate is more powerful, but only triggers once per fight.\nScarlet Witch now has a chance to trigger Nullify off of any Critical Hit.\nJuggernaut and Rhino now have a layer of Armor.\nPunisher and Winter Solider now may also trigger Fury in addition to Bleed.\nColossus now further increases his base Armor with the Armor Up ability.\nThor and Ronan no longer Armor Break; instead, base stats and Stun durations have improved.\nWe reduced the effectiveness of the Revive items in order to give away more as rewards.\nA bonus of 50% for using ISO-8 matching your Champion’s Class can now be previewed on the Upgrade screen.\nIt’s now possible sell Champions in exchange for ISO-8 and Gold. The amount received increases proportionately to the Rank and Level of the sold Champion.\n-You can now skip dialogue on the quest map by pressing ‘SKIP’.\n-Added a ‘Quit’ button directly on the quest interface.\n-The Back button on the Top Bar now returns the player to the Home screen.\n-Various game balance and cosmetic improvements to the available quests.\n• PVP energy has been replaced with Hero Stamina. 
Each Hero has their own Stamina values, meaning the more Heroes you have the more you can play in PVP.\n• Each Hero has 1 Stamina and takes 2 hours to recharge.\nWe have removed the Next Quest button for a much more favorable and flavorful approach to teaching and informing people about Marvel : Contest of Champions. In the Main Menu(Bottom Right Corner) you will now see an image of the Collector showing you what the best or recommended actions that you should preform. This can be anything from opening Crystals, Continuing a Quest, Ranking Up Champions if the difficulty is too hard, Tips where to obtain items, and Playing Versus/Arenas.\n• Adjusted the PI calculation for Power Burn and Power Drain abilities to improve accuracy.\n• Significantly increased the Power Burn multiplier as well as the amount of Power burned. Prior to this change, Vision's Special Attack damage output was far below the curve. Vision's Special Damage is distinct from other heroes in that the dependency on opponents' Power levels cause the damage dealt to be highly variable, and sometimes quite low; however, when striking an opponent with high Power levels, Vision has the potential to deal very high amounts of direct, Armor-ignoring damage.\n• Slightly adjusted the Armor Break trigger to be less punishing to opponents with the Armor Up ability without sacrificing PI or damage output.\n• Slightly increased his base Health and, in turn, the amount of Health recovered by Regeneration. This improvement is reflected by an increase to PI of about 1%.\n• Slightly reduced the damage from Bleeding, but slightly increased the amount of Power drained by E.M.P. Arrow to compensate. This added utility strengthens the choice between whether to offensively Bleed the enemy or defensively drain their Power. These changes may modify PI by +/-1%.\n• Slightly reduced the frequency of Nullify for basic attacks, but slightly increased the chance a Special Attack is critical. Chaotic Bombardment no longer has a chance to critical, and instead has a 100% chance to Nullify the target. This is less punishing to opponents with beneficial effects, while providing a more reliable source of Nullify. Overall, her PI has decreased by about 2%.\n• Decreased base Health and Attack by 2% each to bring his PI in line with other Champions without compromising Special Attack effectiveness.\n• Slightly increased base Health by 2% to bring his PI in line with other Champions. This change may result in a PI increase of up to 1%.\n• Fixed a bug with her Bleed ability scaling incorrectly. This has no effect on PI.\n• User's on iPhone 4 devices will no longer encounter a progression blocker after fighting Iron Man in the tutorial.\n• Fixed an issue where player's Hero would disappear after using a special move.\n• Fixed an issue where very rarely a character would lose all functionality when dashing.\n• Added additional Network support to better diagnose disconnects. The game should resolve and recover much more gracefully than in previous updates.\n• Adjusted some of the touch sensitivity while fighting. Heroes moves should feel more responsive. This is something that is going to be an ongoing process. Please let us know how you think it feels.\n• Fixed various issues with Chat.\n• We have updated Open GL versions/drivers for iOS devices that support Open GL 3.0.\n• User's will no longer receive delayed Game Center notifications. 
This caused some weirdness to occur while opening Crystals in the Crystal Vault.\n• The Crystal Vault has received another polish pass and should now feel much more responsive, thank you for all your feedback on this feature!\n• Many more minor bug fixes were included in this update.\n• Special Attack 1 base damage increased by +25% Attack Rating.\n• Heavy Attack base Power gained reduced to 63 points.\nWe recently improved the functionality of Heavy Attacks, so they’re easier to use. Their base Power has been reduced to normal levels – previously, they generated Power at a higher rate to compensate for their difficult execution. Special Attacks have been adjusted to give the unlucky recipients more of a fighting chance. These changes bring these attacks in line with existing damage-to-power ratios.\n*NOTE: Special Attacks only generate Power for the target struck, not for the user; this prevents infinite loops and helps serve as a comeback mechanic.\nVersus Crystal prizes have been adjusted due to the Champion Stamina changes.\nArena Crystal prizes have been increased to help balance the adjustments to the Versus Crystal.\nPayouts have significantly increased when receiving a duplicate Champion with a Star rating of two or more. The boosted amount increases based on Star rating. We apologize for any inconvenience caused by delivering each reward individually, and are working to get a fix to you as soon as possible. In the meantime, using the “Skip” button avoids the inconvenience.\n• We fixed a bug where finding a new match could cost a player Units.\n• Spending Units to find a new opponent will now return opponents with lower ratings.\n• Chapters 3 and 4 of Act 2 Story Quests are now available. A mysterious opponent awaits you at the end of Act 2!\n*NOTE: This caused some players' progress to reset for a brief time, but that issue should now be corrected.\n• Event Quest difficulty has been adjusted to match Catalyst availability.\n• Rank-Up Recipes have been adjusted to be more accessible across all ranks.\n• Bosses for the Monday through Saturday Daily Events now have a small chance to drop a Class Catalyst. This is in addition to the drop chance from Chests.\n• Ambush Rates have been adjusted on all Event Quests.\n• Increased Catalyst drops for the Collector Free-For-All Event Quest.\n• Alpha Catalysts now have a chance to drop from Chests in Medium and Hard difficulties of The Collector Free-For-All event.\n• The unobtainable chest in Act 1, Chapter 1, Quest 6 has been removed from the Battlerealm.\nIncreased the amount of Gold awarded by the Arena Crystal.\nSlightly reduced the cost to level-up a 3-Star Champion at Rank 1 to cleanly align with ISO-8 chunk values.\nFixed a bug with Billion-Dollar Punch not triggering Armor Break.\n• Duplicate 2-Star, 3-Star, and 4-Star Champions now awaken a brand new ability unique to that Champion in addition to the rare ISO8 they currently give. Duplicates thereafter continue to level-up this ability to make it stronger. When a Champion is awakened, their Stars turn bright and glow, making them easy to identify (and look pretty cool too). 
These new abilities can be quite powerful, so please fight responsibly!\n• Various other improvements, including rank and level information for opponents, find match options in team select, and animation tuning.\n• There is now a chance to encounter the elusive Treasure Adaptoid, who divulges his hoard of ISO8 and Gold to those able to defeat him in battle.\n• Class Relationships can be viewed by tapping “Enemy Classes” before entering a quest, and preview the number of enemies in that quest for each class type.\n• You can also now see rewards for completion and exploration on the Edit Team screen.\n• Opponents are more aware of the distance between you and them, improving their interaction with knockback effects, such as that from Heavy Attacks.\nMutant Champions are now effective against Skill Champions.\n• The high Special Attack damage and regenerative abilities of Mutant Champions are effective against Skill Champions, which typically rely on Bleed damage from their weaponry. We think of this relationship as if the X-Gene grants Mutant Champions superpowers that evolved to be stronger than Champions that are merely “Skilled”.\nSkill Champions are now effective against Science Champions.\n• While scientists fiddle in their cute little laboratories to create flasks full of serums to turn even frail young men into super-soldiers, Skill Champions were just born that way baby. Often donning sharp weaponry to make their opponents Bleed, Skill Champions enjoy watching the high base attributes of Science Champions just melt away.\nCosmic Champions are now effective against Tech Champions.\n• Tech Champions construct durable robots and thick suits of Armor to outlast their opponents in battles of tank-the-nuke...which gives Cosmic Champions extra time to build up stacks of beneficial effects to overrun Tech Champions using their peculiar alien enhancements.\n• Tech Champions are still effective against Mutant Champions.\nTech Champions typically excel at Armor, Resistance, and Power manipulation, which is effective against the high Special Attack damage of Mutant Champions. Think of the robotic Sentinels adapting for tactical advantages in the war against Mutantkind!\nScience Champions are still effective against Mystic Champions.\n• Science Champions – a Class of behemoths like Hulk and super-soldiers like Captain America – typically have above average base attributes like Health, Attack, and Armor. These raw stats cannot be affected by pesky Mystics and their removal abilities: Nullify and Purge.\nMystic Champions are still effective against Cosmic Champions.\n• Cosmic Champions explore strange new beneficial effects to seek out new power and new abilities, to boldly take their attributes where no class has gone before. Well, not if Mystic Champions – who are fully capable of stripping Cosmic Champions of their beneficial effects – have anything to say about it! Maybe it’s the Mystic Agenda to protect the secrets of the universe?\nThese changes ensure that having a Class Bonus always gives you the advantage it promises, as it now also reflects ability trends for a particular Class. Please keep in mind that these are generalizations, and some Champions abilities may not always strictly align with these relationships. Learn more about Champions’ abilities by viewing their profiles and tapping on features for detailed information.\n• When you attack someone, you charge up their Power in addition to yours. This meant they would reach a full three bars while you only reached one and a half. 
We've reduced the amount defenders receive such that you'll be at two bars when they're at three. This change maintains the underdog functionality to give defenders a chance to come back while being less punishing to players earning high Combos.\nNew damage types for attacks now play a larger role in the abilities of Champions. For example, some heroes power up by successfully blocking magical damage, while others' abilities may harm anyone that makes physical contact with them.\nNew Resistances and Immunities have found their way to the Battlerealm. Some heroes are completely immune to specific status effects based on either lore from the comics or logic. For example, the android Vision has no blood, and is therefore fully immune to Bleed conditions. We've also strengthened the effectiveness of certain status effects, so be careful who you choose to bring into battle! Could you guess who might be immune to the new "Poison" condition?\n• Poison: Inflicts damage over time and reduces healing and regeneration effectiveness.\n• Unstoppable: A buff to shrug off the impact from attacks, but still take the damage.\n• Weakness: A debuff that reduces Attack attributes.\n• Heal Block: Fully prevents the target from gaining health in any way.\n• Power Lock: Seals the target, preventing them from gaining any Power.\n• When fighting, you may notice that many status effects are now able to stack. This also changes how certain beneficial "buffs" and detrimental "debuffs" interact with one another. For example, it's now possible to have both Armor Up and Armor Break effects on you simultaneously. Let the tug-o-war begin, and may the strongest effects win!\n• Black Bolt's Corkscrew: +25% damage, but at the cost of minor recoil damage.\n• Punisher's "Wrath" has been replaced by "Payback". Payback deals additional damage based on the total damage dealt to Frank.\n• Colossus' "Unbreakable" now deals bonus damage based on his armor level at the time of activation.\n• All of Black Panther's special attacks now deal bonus damage based on the number of Bleeds on the target.\n• Spider-Man's Web-Slinger now has a chance to inflict Weakness.\n• Vision's Physical Disruption: Added a minor Power Burn effect due to "his" use of his Infrared Beam. "He" also now purges all status effects while phasing through the ground.\n• Scarlet Witch: Increased the Critical Hit Chance for Hex Bolt and Hex Sphere.\n• Many knockback effects have been adjusted to improve consistency.\nWe've tested the Signature Abilities quite extensively before releasing them, but there have been a few abilities that we have been keeping an eye on. We've compared our notes with the feedback you've been sending us and are making some balance changes to them. Thanks for your feedback!\nSlightly reduced the frequency and duration of Juggernaut's "Unstoppable" ability.\n• He was indeed a bit too...unstoppable. We've toned down the frequency this ability triggers, as well as reduced the duration it's active for when it does trigger. We feel Juggernaut is still a powerful Champion despite these revisions. Take care!\nSlightly reduced the starting values of Wolverine's "Cellular Regeneration".\n• We found that Cellular Regeneration was too strong at lower levels where fewer counters to Regeneration exist.\nRe-scaled Gamora's "Assassination" to start higher but scale slower.\n• At lower levels, Special Attacks were used too infrequently, giving this powerful ability little visibility.
We’ve adjusted the scaling to better match Special Attack usage at all levels.\nIncreased the frequency that Black Bolt’s “Provocation” triggers.\n• Due to the varying Critical Hit rates across all Champions, in some cases Provocation would trigger rarely or not at all within a fight. We’ve increased the frequency to ensure you’ll see it every match – but especially so against opponents with high Critical Hit rates.\nWe’ll continue to follow the effect of these new abilities on gameplay. Please keep your feedback coming!\nHey everyone! We have been hard at work on improving the game and have prepared a big update inspired in part by your great community feedback. Please keep letting us know what you think!\n• Fixed many Dash, Medium, Heavy and Special Attacks missing or failing to execute.\n• Added Alliances and a new Alliance Crystal.\n• Rocket Raccoon and Unstoppable Colossus join The Contest.\n• Temporary Boosts to Attack, Health, and XP are now available from the Alliance Crystal.\n• Rewards for completing and exploring Chapters and Acts. Earn a guaranteed 3-Star hero crystal for each fully explored Act! This is retroactive, just complete any quest to claim them.\n• A new Fight Menu combines The Arenas, Story Quests and Event Quest menus.\n• Updated Summoner Profiles with new information. Inspect other players’ Profiles and brag about your achievements!\n• A list of blocked users has been added to Chat windows. The option to unblock these users is found in this new menu. The power is in your hands now!\n• We fixed Dash and Medium Attack issues for many heroes that sometimes missed or did not activate.\n• We fixed issues to Drax and Colossus Light and Medium Attacks where they would not connect.\n• Fixed an issue where the camera would stop moving after a level 3 special sequence.\n• Fixed an issue where the player’s heavy attack would get stuck in charge even after the player has released input.\n• Fixed a rare bug where Champions were still able to deal damage after they died, resulting in tied fights.\nForm Alliances with your Friends!\nWhat is better than playing? Playing with your friends! Create a new Alliance or join an existing one through the new Alliance Menu.\n• Invite other players to your Alliance.\n• Search for an Alliance by name or join a Recommended Alliance.\n• Receive rewards for entering your first Alliance.\n• Alliance News Feed. The news feed celebrates your Alliance member’s achievements.\n• Alliance Chat. Chat with other members of your Alliance in a private channel all to yourself.\n• Help Allies. Players can ask for help when out of Energy or Stamina. Alliance members help each other as much as they can to earn Loyalty Points. Loyalty points have a daily limit to how many can be earned.\n• Alliance Crystal. Access a new Alliance Crystal while part of any Alliance. 
Use new Loyalty Points for purchasing Alliance Crystals.\nHe may start out slow, but watch out for his immense power at high ranks!\n• Adjusted the range of many Heavy Attacks, including Hulk's and Drax's, to ensure they correctly connect with enemies.\n• Many Special Attacks, including those for Wolverine, Iron Fist, Winter Soldier, Punisher, Black Panther, and many others, have had their range adjusted to ensure they correctly connect with enemies even if activated immediately after a combo that knocked the enemy back.\n• Payback and Unbreakable now display their maximum potential damage bonus.\n• Added detailed descriptions for Bleed Immunity and Poison Immunity.\n• Gamora: We've adjusted the scaling of her base Special Attack damage to ensure it scales up more similarly to other heroes'. This also makes Gamora more reliant on her high Bleed damage, and improves the chances for opponents to deal with her high Bleed.\nVital Strike and Jade Assassin damage decreased by 10%.\nGodslayer damage increased by 10%.\n• Magik: Rewind is a game-changer for Magik that allows her to go up against foes like Gamora and Rewind off big Critical Hits and Bleed damage; however, the frequency of Rewind triggering was too low to be there when she needed it.\nIncreased the likelihood Rewind triggers by +20% at all levels.\nRewind now heals over one second instead of instantly.\nFixed a bug allowing Magik to break out of an enemy combo using Rewind. It now only removes Status Effects.\n• Hulk: Given the riskiness of losing Health in certain game modes, Hulk's anger-management provided too little help too late in the game. We've increased the Attack boost to ensure he's appropriately scary in all game modes – as long as he's angry!\nIncreased Hulk Rage by +20% Attack at all ability levels.\nArc Overload no longer causes Armor Break when it expires.\n• Vision: Added Poison Immunity to our robot friend.\nArena tuning is an ongoing process.
The team is continually making adjustments to Arenas to improve the experience.\nUltron has infected The Contest!\nMany new Champions join the battle against Ultron.\nQuest through the new Ultron’s Assault Event.\nWield new power with Summoner Masteries.\nGrow your Friend’s List with the new Social Hub.\nTeam up with your Alliance in new Events, Arenas, and more!\nFilter and sort your Stash.\nFights have been optimized for performance improvements on all devices.\nUsers can now filter through the items in their Stash.\nFixed several issues where Hero Rating would fluctuate.\nFixed a bug with Rhino and Juggernaut having 11-20% more Armor than intended.\nFixed a bug with Rocket Raccoon’s Dash attack being slower than intended.\nAdded a confirmation popup when spending Units on stamina recharges and unlocking arenas.\nRegeneration no longer displays green Health values if you’re at full Health.\nSeveral new improvements to how status effects are displayed.\nAI opponents are no longer able to perform one unavoidable attack in response to a Special Attack 3.\nA new and improved look for all Health Potions in the Battlerealm.\nAll Revive Potions now revive your Champions with +10% more Health.\nWe’re adding so many new Champions, they could form their own Alliance!\nSome of your favourite heroes of the Marvel Cinematic Universe join The Contest!\nSummoner Mastery is on the horizon!\nMasteries provide beneficial effects for your Champions.\nAccess Masteries through your Summoner Profile.\nEarn Mastery Points when you level up.\nChoose your Masteries wisely and strategically customize your benefits.\nRecover your points to try a new specialization as often as you’d like.\nKeep an eye on in-game messaging for more information.\nThe daily loyalty limit has been set to refresh at 08:00UTC for all players.\nA timer has been added to show when the daily loyalty limit resets.\nLoyalty balance is now displayed in the Alliance menus.\nAsk for Versus help with a single tap on the ‘Help’ icon in Team Select.\nNew Alliance Events are coming very soon!\nWork together with your Alliance to complete objectives and receive rewards!\nMuster your might, Alliance Arenas will soon open their gates!\nCompeting in Alliance Arenas shares your points across your whole Alliance; work together to reach milestones and top ranks!\nWork together to amass a huge score, and defeat your competition in classic Arena combat! No slackers here either - if you don’t contribute to win the competition, you’re not eligible for the goods!\nAll social features (Chat, Mail, and Friends) can now be accessed through the new Social Hub.\nSearch for and add friends, and send private messages to Summoners on your Friends List.\nRedesigned chat and mail screens.\nTake on other Summoners’ top Champions for bragging rights and prizes in 1-on-1 Duels!\nA new series of special Ultron quests are available, starting with the first Chapter. Fight back against Ultron’s infection alongside the Summoner, and team up with some of Marvel’s finest! 
New quests unlock each week!\nThe Spider-Man Champion gate has been removed from Act 1, Chapter 1, Quest 5.\n• Fixed an issue where chat snapped to the most recent message.\n• Fixed several issues where Hero Rating would fluctuate.\n• Various improvements to the Summoner Mastery screens and descriptions.\n• Increased the ISO8 awarded by duplicate 2-Star Champions.\nQuest through the new single-player campaign, Ant-Man's Adventure!\nIn addition to Ant-Man and Yellowjacket feuding throughout the Battlerealm, additional new Champions will be joining The Contest!\nAccess more Masteries in the new Utility Mastery tree!\nPlease note, these changes may result in a loss of Hero Rating as incorrect effects are restored back to normal levels.\nImproved and polished combat mechanics to reduce the amount of stuttering and lost input.\nFixed and optimized rendering-related issues with Metal-enabled devices.\nTeam up with Ant-Man, and put a stop to Yellowjacket's mysterious mission!\nAll Alliance Quests only last for a specified amount of time; defeat the boss with your Alliance before it expires!\nNew Prestige System - A dynamic difficulty and score setting that adjusts as you and your Alliance succeed in harder quests. The better you do and the tougher your Alliance is, the higher the prestige. The higher the prestige, the better the rewards!\nChoose your teams carefully as Champions within Alliance Quests cannot be used in other Story or Event Quests.\nAct 4 has been released! Play Chapter 1 now!\nSummoner level maximum has been increased to level 60!\n5-Star Champions are coming to The Contest! These are the most powerful Champions yet!\nAdditional improvements have been made to the UI, Versus Arenas, Synergy Bonuses, the Stash & Items Store.\nAct 4 - Chapter 1 released!\nNew challenges - more path variation and features to challenge the strongest Summoners!\nGreater challenge means greater rewards! Earn 4 Star Crystals and Mastery Points!\nThe Summoner Level cap has been increased by ten levels to level 60!\nChampion Items will be coming soon! These allow you to apply items and buffs to a specific Champion; keep an eye out for updates on these new Champion Items!\nSynergy Bonuses have updated iconography and the calculation has been updated to a distinct, additive bonus - What you see is what you get!\nAlliance class distribution is now displayed on team select - Choose the right class!\nYour Catalysts now have their own inventory, and will no longer appear in the Upgrade Item inventory.\nThe Stash is now separated into three tabs: Catalysts, Rewards and ISO, allowing you to sort and view your Stash much faster!\nThe UI flow for both Quests and Arenas has been greatly improved. You can now skip through fight victory and reward animations!\nHere is the rundown of patch 5.1.0, filled with various bug fixes and optimizations. The important ones to note are below.\nNew Champions, new theme, and a new arena!\nTo celebrate our one-year anniversary AND the holidays, we'll be running a special event quest! Battle through the history of The Contest, and test your mettle against familiar faces both old and new!\nA special reward will be available to those who master every quest!\nOur Anniversary Celebration will be happening very soon; stay tuned for more info!\nMore Act 4 quests are coming very soon!\nOpponents in Story Quests now have the ability to use their Special 3 attack!
Note that we are not changing previous quest opponents to have this special attack (Act 1-3, Proving Grounds, Realm of Legends will not change); this will be in effect starting with the soon-to-be-released Act 4 content.\nAs with our previous major build releases (3.0’s Ultron, 4.0’s Ant Man, and 5.0’s Battlerealm), the Contest has been reskinned with a new theme!\nThe Road to Knowhere map is here! Fight in a new level inspired by Guardians of the Galaxy!\nA new button in your Alliance Chat to take you directly to Alliance Quests!\nYou can now collect Catalyst Fragments in Event Quests, Proving Grounds, and Alliance Quests; these can be pieced together into a Catalyst!\nSelling Items is now a thing! Sell any items in your inventory for gold!\nLevel 3 and Level 4 Health Potions have arrived! These are powerful instruments to help you tackle all the new Act 4 content!\nOver 400 bugs were fixed in this patch!\nThis patch is a fix for the missing Champions during the Special 3 animation on Android devices.\nThis issue occurred during our upload process to the Google Play Store. This was an odd edge case scenario that we could not have caught during our internal tests, as it began appearing once we uploaded to the Google Play Store. This hotfix will be out by tomorrow, and will put Android at version 6.0.1. As this issue does not occur on iOS devices, iOS will remain at version 6.0.\n3:30pm PST: We have started slow-rolling this patch out to Android devices, beginning with about 20% of users. We expect this to be available for 100% of users within the next 24 hours.\nWe have a few new Champions that you will see within the next couple of months (including one of my personal favorites)!\nOver 200 total bugs squashed in this patch!\nAn artifact left over from the early days of the contest was Black Panther’s ability to gain a Critical Hit Rate boost during Special 3 attacks. As many might know, Critical Hits aren’t possible during a Special 3 anymore, making this effect...unhelpful. We’ve switched it out with a new ability to stack up even more Bleed effects on the opponent based on how many Bleeds are already active.\nExample: The opponent has 4 stacks (instances) of Bleed on them when you launch a Special 3. With this new ability, you have a chance to add an additional 0 - 4 more stacks (instances) of Bleed.\nPreviously, a bug existed that allowed champions with Evade to continue to dodge Black Widow’s attacks, even if her Signature Ability was maxed out. This issue has been fixed.\nCaptain America WW2 has started to become outpaced by his non-WW2 counterpart and while we want the two to feel different and each have their own specific uses, we also want to ensure they are kept within range of each other in terms of balance. To accomplish this, we’ve given WW2 Cap the ability to Stun with his Special 1 and Special 3 attacks, but kept his Bleed on Special 2 the same, giving him options during combat against non-bleeding champions.\nA bug that prevented Daredevil from triggering Armor Breaks from Heavy Attacks has been fixed and is now working as intended.\nAgainst non-bleeding champions: Critical Hits have a chance to Armor Break on Special Attacks.\nIncrease range of Signature to 25% from 20%.\nMany players found Elektra’s signature ability lacked enough opportunities to use it. To remedy this, we’ve increased the range from 20% to 25%. Additionally, to help make Elektra unique from other skill champions, we’ve given her the ability to deal with naturally Bleed Immune champions. 
Note: This Armor Break only applies to champions naturally immune to bleed, such as Colossus and Ultron, and not to champions granted Bleed Immunity from Local or Link Nodes.\nGuillotine's Bleed effect used to have a chance to activate from any given attack, meaning that it had to be kept quite weak to compensate for the frequency of triggers. We've made the switch to have her Bleed behave closer to existing champions, and in doing so have boosted the strength of the Bleed and have allowed it to stack.\nNorman Osborn overloads the Arc Reactor in his chest if Health drops below 10%, granting a large burst of power, with (18% - 48%) Armor, Regeneration, and Power Gain. After that, his suit burns out and cannot trigger Armor Up, Armor Break or Stun and loses all base Armor.\nMany players didn't like Iron Patriot's old signature ability, feeling that due to the lack of Regeneration, it was considerably weaker than Iron Man's. While we agreed, we didn't want to just copy and paste his signature ability, but rather give him his own unique twist on the ability. This "all or nothing" version feels more like Norman Osborn, pushing his suit to the limit to get a larger boost but at the cost of damaging the suit. The addition of Power Gain allows Iron Patriot a large attack before the suit burns out, if timed correctly.\nHeavy Attacks: 90% chance to Stagger the enemy for 8 seconds. A Staggered enemy cannot gain their next beneficial effect.\nAll versions of Juggernaut, even those who haven't been awakened, now gain the 2-second Unstoppable ability at the start of the fight when they hit Rank 2.\nWe wanted to add some new functionality to Juggernaut, while also keeping him true to his Mystic class assignment. To accomplish this, we added this "buff smasher" effect which keeps an opponent from gaining their next beneficial effect. Additionally, we wanted to make non-awakened versions of Juggernaut more fun to play, without adding more power to the awakened variations. As a result, we gave all versions of Juggernaut the ability to become Unstoppable at the start of the fight.\nWhile many players liked the new functionality of Star-Lord's Element Gun effect, they found it to be a little too random, specifically when it would Heal Block a champion incapable of Healing. We've now added in some contingencies that will make Heal Block appear less often unless the opposing champion shows that he/she can Heal during the fight. This includes both activated healing effects, such as Wolverine's or Ultron's Heal, and passive healing effects gained from Masteries, such as Salve or Willpower.\nIt's been a bit weird that Bucky wasn't friends with his most famous friend. Well, he is now. This affects 3-Star and above versions.\nWe've increased the overall speed of this attack, allowing quick players to use this ability after a four- or five-hit combo.\nIt seems the Marvels have gotten tired of their beams being dodged so easily and have decided to angle them a little better, increasing the overall range of the attack and making it harder to dodge away from. We've also increased the speed of both special attacks to allow them to better flow into combat.\nIn order to allow this attack to better flow in combat, we've shaved off a few frames from the beginning, allowing players to chain this attack into 4- and 5-hit combos.\nAlliance Wars have arrived!
It's Alliance versus Alliance in a war for Battlerealm supremacy!\nEnter the NEW Loyalty Store to buy Alliance Potions, Mastery items, or other EXCLUSIVE items.\nGain Power back from Special Attacks, enhance or defend against Special Attacks, OR gain a temporary Arena Point Boost with hoards of new Summoner Boost items!\nAdditional changes and improvements are listed below.\nThis patch will be released February 24th.\nA new area of the Battlerealm has been opened! Compete with your Alliance-mates for pride, glory, and PRIZES!\nMatchmake to find a rival Alliance, then combine strategy and teamwork to dominate them.\nSet up the ultimate defensive team to fortify your Battlerealm, then take your offensive team on the assault!\nWatch your War Rating skyrocket as your Alliance works together to defeat rivals!\nLoad up on Crystal Shards, Loyalty, and brand new exclusive rewards!\nNote that this will be slow-rolled to Alliances in phases, similar to the introduction of Alliance Quests (to ensure server stability and gather your feedback on the new mode). Expect tuning changes throughout these phases, as well as into Season 1.\nUse Loyalty instead of Units to obtain items for Alliance Quests & Wars!\nItems will rotate daily, similar to how the Mastery cores in the current Store change.\nStore contents will be randomly chosen from a pool of categories/items; a select few items will be persistent and always be available for purchase.\nA 5-Star version of Unstoppable Colossus will be available in the Loyalty Store (keep in mind, this is an expensive Champion due to his exclusivity; this will require winning quite a few Alliance Wars and saving up!).\nThis is accessible from the "Store" section of the pop-down menu, and will be available at a later date after the initial 7.0 launch; there will be advance notice through forums and in-game before we release the Loyalty Store.\nNew Summoner Boosts have arrived in the Loyalty Store; NEW Boost types, purchasable with Loyalty Points.\nClass-specific Boosts, such as Mystic Champions restoring power after using Special Attacks 2 and 3, or Skill Champions boosting their Special Attack Damage.\nDefensive Boosts, where your Champions take reduced incoming Special 3 Attack Damage.\nGain a temporary Arena Point boost with new Arena Boost items!\nFixed an issue where, after Parrying certain Champions' Special Attacks, your Champion would be stuck in a blocking state until the Special Attack finished.\nFixed an issue where 90s Cyclops' Armor Breaks would not remove Armor Ups.\nFixed an issue with Scarlet Witch's Signature Ability proc rate (previously, the % chance displayed did not match in-game functionality; this is now fixed).\n(Netflix) Daredevil's Heavy Attack now has a chance to apply 2 stacks of Armor Break, instead of the previous 1 stack.\nWhen spending Battlechips to enter an Arena (such as the Tier 4 Basic or Alpha Catalyst Arena), there is now a confirmation popup.\nThe Alliance Crystal now has a purchase limit that resets daily.\nPermanently increased the Alliance Crystal's points in Summoner Advancement (from 30 to 300).\nUpdates to Champion Special Attack animations, flow, and timing.\n7.0.1 will be released within the next few days.\nA celebration message is sent to the War Room when an Alliance War battlegroup is cleared.\nPlayers can now tap directly on another node icon while the tile info popup is open (previously, the popup had to be closed before selecting another node).\nAn Alliance's reward tier position is now highlighted in the Alliance War
tier breakdown.\nIn Attack Phase, players can view the score breakdown for both the battlegroup and overall.\nThe "Place Your Defenders" text now disappears much faster after tapping on the screen.\nMail messages now display the date they were sent.\nIt should be much harder to accidentally tap the Units Store when closing a screen.\nPlayers can tap to skip the point animation in Versus mode again.\nResolved an issue with Class Masteries (specifically Mystic Dispersion) not functioning.\nThe Juggernaut issue with his linked nodes not appearing in Act 4, Chapter 3, Quest 3 (4.3.3) has been fixed.\nFixed a crash that occurs when a player who is not in an Alliance enters Alliance Wars through an outside link.\nFixed a text issue where Alliance War-specific descriptions would appear on the Alliance Quest "Select a Battlegroup" screen.\nResolved ~20 various rare crashes and additional minor issues in different game modes.\nFixed and optimized performance on the new Samsung S7.\nFixed an Unknown Error that occurred rarely after a device was woken after going to sleep.\nImproved Performance (Frames Per Second) tracking per fight to help diagnose hitches/pauses/lag spikes during gameplay.\nImproved gesture tracking (Swipe, Tap, Hold) during low performance moments in combat.\nFixed a rare crash that would sometimes occur when receiving a phone call while in combat.\nTuned and updated many Champion Special Attack animations to improve timing and combat flow. Please see the expanded forum post HERE for a full list.\nFixed She-Hulk's Special Attacks being marked as a projectile (allowing Daredevil to evade them).\nFixed an issue where the player would be stuck in place after parrying Captain America's Special 1.\nFixed an issue where chaining 2 medium attacks into Old Man Logan's Special 2 would cause the first 2 strikes to miss opponents.\nFixed an issue with Daredevil or Spider-Man missing with a dash attack if Vision charges a heavy attack during the dash.\nFixed an issue where some hidden information in Alliance Wars was visible.\nFixed a display issue where Defender Placement percentage was not displaying all placed Alliance members.\nResolved minor issue with the total Alliance's score being displayed on the War Progress widget (now only displays the score of the battlegroup being viewed).\nMultiple minor Alliance War issues have also been fixed in this patch.\nFixed a display issue where Shard amounts provided by defeating a boss displayed as double.\nFixed a display issue where opponent PI values would display differently between the map, prefight screen, and in combat.\nBoss power is now correctly displayed after removing Global and Linked boosts.\nFixed an issue where a player in Alliance Quests would lose input ability on the quest board after sleeping the device.\nFixed an issue where a player enters Alliance Quests and gets stuck after viewing the linked node or buff node tutorial.\nFixed an issue where sending an Alliance invite to a player would cause the "Add Friend" button to become greyed out.\nFixed a text issue that appears when viewing Featured Hero information from the Home Screen.\nJoin The Iron or fight for The Blue with new events, quests, Champions, and special Shards; inspired by Marvel's Captain America: Civil War!\nSolo Events: constantly-evolving events that vary in length, requirements, and prizes!\nCompare statistics against other players and Alliances with the new Leaderboards!\n", "answers": ["Players can skip dialogue on the quest map by pressing the 'SKIP' button."],
"length": 6743, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "c677827fc67e60c6396dceb5c4194aaba925aa1c61ad7115"} {"input": "What was the reason given by Governor Rick Scott for not implementing a prescription drug monitoring database in Florida?", "context": "How Oxycontin, Florida and the Sackler Family Created the Opioid Crisis In America\nWhy are the Sacklers worth $13 billion today? Answer: "The Oxy Express Explained"\n(MASS TORT NEXUS MEDIA)\nA COMPARISON OF OXYCODONE PRESCRIBING\nIn the first six months of 2010, Ohio doctors and health care practitioners bought the second-largest number of oxycodone doses in the country at just under 1 million pills.\nFlorida doctors bought 40.8 million in the same period. The comparison is astounding, yet it flew under the radar of the DEA, Opioid Big Pharma and everyone else for years and years.\nOf the country's top 50 oxycodone-dispensing clinics, 49 were in Florida. From August 2008 to November 2009, a new pain clinic opened in Broward and Palm Beach counties on average every three days.\nPharmacies and distributors are at fault as well; pharmacies ordered jaw-dropping numbers of pills from opioid drug distributors, the middlemen between manufacturers and pharmacies.\n90 of the nation's top 100 oxy-buying doctors in 2010 were in Florida. 49 of the country's top 50 oxy-dispensing clinics were in Florida. For some reason this didn't raise an alarm or cause anyone to look further at the time.\nPurdue Pharma Knew What Was Happening In Florida\nPurdue and the Sacklers chose to ignore Florida, because apparently nobody there sued them or complained. In 2007, in other states, the infamous drug maker and three of its executives pled guilty in federal court and paid out $634.5 million in fines for purposefully misleading regulators, doctors, and patients about the addictiveness of their opioid painkiller. Around the same time, Purdue was also sued by several states, including Washington, over similar allegations. Purdue agreed to a $19.5 million multi-state settlement. And in 2015, Purdue settled a case with Kentucky, agreeing to pay $24 million.\nAs part of the state settlements, Purdue was supposed to set up monitoring programs to make sure that its opioid drug didn't wind up in the wrong hands. It was supposed to watch out for shady pharmacies, unusually large orders, or suspiciously frequent orders. But on this front, Everett alleges that Purdue once again put profits over people.\nObviously, this was ignored as the Florida-based "Oxy Express" rolled on for years and years with no input, comment or oversight from Purdue Pharma and the Sackler family, other than "show me the money" and enjoying a life of luxury on the misery created and managed in the Purdue Pharma boardroom.
But, the Purdue boardroom isn’t the only guilty “Opioid Big Pharma” industry player who designed and supported the opioid prescribing crisis.\nFor the current status of efforts to make Opioid Big Pharma accept responsibility in litigation filed in federal and state courts across the country, see: https://www.masstortnexus.com/Briefcases/254/OPIOID-CRISIS-BRIEFCASE-INCLUDING-MDL-2804-OPIATE-PRESCRIPTION-LITIGATION\nWhy Distributors Are Liable\nCardinal Health, one of the nation’s biggest distributors, sold two CVS pharmacies in Sanford a combined 3 million doses of oxycodone, flooding the town of 54,000 with an average of 250,000 oxycodone pills every month.\nWest of Jupiter, a Walgreens drug distribution center sold 2.2 million tablets to a single Walgreens’ pharmacy in tiny Hudson, a roughly six-month supply for each of its 12,000 residents. It shipped more than 1.1 million pills to each of two Fort Pierce Walgreens pharmacies.\nFor 40 days starting in late 2010, the distribution center shipped 3,271 bottles of oxycodone — 327,100 doses of the drug — to a Port Richey Walgreens pharmacy, prompting a distribution manager to ask: “How can they even house this many bottles?”\nThere were 53 million oxycodone prescriptions filled in 2013 by US pharmacies, according to NIDA. This translates to approximately one bottle of this addictive drug for every 6 people in the country. How was this not noticed by those responsible for monitoring narcotics prescribing in the United States?\nCharts and Data On Florida’s Oxycontin Gold Mine\nhttps://www.documentcloud.org/documents/3936665-Purdue-Pharma-1-in-48-Study.html\nhttps://www.documentcloud.org/documents/3534759-uS-Atty-on-Purdue-Settle.html#document/p2/a384323\nA Boardroom Contrived Opioid Epidemic\nThis is the pain chart created by the “Opioid Big Pharma Industry” to support massive over-prescribing of opioids across the country to everyone who walked in to a medical treatment facility, this was an effort to increase narcotic prescribing practices in mainstream medical care–and it worked very very well! This chart became a standard treatment assessment protocol tool across the country.\nhttps://www.documentcloud.org/documents/3936646-DEA-NATL-DRUG-ASSESSMENT-2010.html#document/p51/a383739\nHOW WEST VIRGINIA WAS TARGETED\nIt-Was-Raining-Opiates-How-drug-companies-submerged-West-Virginia-in-opioids-for-years\nReliably red on the political map, Huntington is a West Virginia town with a 182-year-old university, a storied football team and more than 100 churches.\nIt’s where Will Lockwood graduated from high school. It’s where he enrolled at Marshall University. It’s where he first tried OxyContin. 
By the time Lockwood entered Marshall, Detroit dealers were trickling into Huntington, selling OxyContin and pills with OxyContin's active ingredient, oxycodone.\nEven though Lockwood could step out his front door and get the drug, Detroit street dealers weren't the preferred supplier; the preferred suppliers were in Florida.\nIt may have been 1,000 miles away, but to Lockwood, getting OxyContin and oxycodone from Florida's loosely regulated pain clinics "was legal, in a sense."\nTwice a month, different "crews" from Huntington crowded into vans and headed south to Palm Beach and Broward counties, home to more than 200 pill mills, the pain clinics where anyone with a fake ache and hard cash could walk out with pills and prescriptions.\nAfter hitting a string of clinics, the Huntington crews drove back with "around 500 to 600 pills per person," said Lockwood.\nBut it wasn't just a few hundred pills. It was tens of thousands.\nAnd it wasn't just Huntington. The West Virginia vans were part of a nationwide caravan heading to South Florida. Cars bearing tags from Kentucky, Tennessee, the Carolinas, Virginia and Ohio crowded into one clinic parking lot after another, loading up on pills and prescriptions.\nNews stories and law enforcement focused on those "parking lot" states in Appalachia, where dealers and addicts with a tank of gas or a cheap plane ticket traveled the "Oxy Express" to Palm Beach and Broward.\nBut Florida's pill pipeline reached far beyond those roadways.\nBy 2010, Florida was the oxycodone drug dealer of choice for drug users and dealers in the Great Lakes, Northeast and Mid-Atlantic regions as well as the Southeast, DEA records show, an area spanning virtually every state east of the Mississippi. It wasn't just that Florida guaranteed a flow of cheap oxycodone. For 10 years, key lawmakers and agency heads repeatedly looked the other way as crooked doctors and bogus clinics flooded almost half the nation with the highly addictive drug.\nIn failing to crack down, Florida extended by years the amount of time highly addictive oxycodone would be available to both first-time experimenters and addicts. It gave criminals the raw materials for trafficking. It gave Will Lockwood the OxyContin needed to feed his growing habit. It paved the way for his eventual jump to heroin.\nJumping state lines\nTeenage high-school wrestling buddies in New Port Richey ran oxycodone into Tennessee; they were paid with cash hidden in teddy bears. A Hillsborough County man mailed 17,000 pills to Glen Fork, W.Va., a month's supply for every man, woman and child in the tiny town.\nA Boston Chinatown crime boss trafficked pills from Sunrise into Massachusetts, New York, Rhode Island and South Carolina. Wellington twins and pill mill kingpins Paul and Phil George oversaw one of the largest operations in the country from their five Palm Beach and Broward clinics, pushing oxycodone into Kentucky, Tennessee, Ohio and South Carolina.\nA husband and wife team operating out of a Forest Hill Boulevard clinic funneled pills to Delaware. At Palm Beach International Airport, two federal security agents accepted $500 a pop each time they waved through thousands of pills bound for Connecticut and New York.\nA Palm Bay man's Puerto Rican family bought local pills destined for the working-class town of Holyoke, Mass. In Rhode Island, police pulled over a Lauderhill man caught speeding through Providence.
They found 903 oxycodone tablets and 56 morphine pills in the car.\nSenior citizen and Tulane business graduate Joel Shumrak funneled more than 1 million pills into eastern Kentucky from his South Florida and Georgia clinics, much of it headed for street sales — an estimated 20 percent of the illicit oxycodone in the entire state.\nVanloads of pill-seekers organized by "VIP buyers" traveled from Columbus, Ohio, to three Jacksonville clinics, where armed guards handled crowd control (federal indictment) and doctors generated prescriptions totaling 3.2 million pills in six months. In Miami, Vinny Colangelo created 1,500 internet website names to entice drug users throughout the nation to one of his six South Florida pain clinics or pharmacies.\nEven the Mafia got in on the Florida oxy express action: A Bonanno crime family associate oversaw a local crew stocking up on Palm Beach and Broward pain clinic oxycodone, upstreaming profits to the New York family.\nAt times, it seemed almost no section of the country was free of Florida-supplied pills: When Olubenga Badamosi was arrested driving his Bentley Continental in Miami in 2011, the Oregon man was one of two traffickers overseeing a crew smuggling South Florida oxycodone to sell in Salt Lake City, Seattle and Denver as well as Oregon, Nevada, Texas and even Alaska.\nPharmacy delivers oxy 'pot of gold'\nIt would be hard to overstate Florida's role in feeding the country's voracious appetite for oxycodone. Oxycodone 30-milligram tablets were favored by addicts. And in 2009 and 2010, roughly four of every 10 of those pills were sold in Florida. Small wonder: Of the nation's top 100 oxycodone-buying doctors, 90 were in Florida.\nPharmacies, too, ordered jaw-dropping numbers of pills from drug distributors, the middlemen between manufacturers and pharmacies.\nWest of Jupiter, a Walgreens drug distribution center sold 2.2 million tablets to a single Walgreens pharmacy in tiny Hudson, a roughly six-month supply for each of its 12,000 residents. It shipped more than 1.1 million pills to each of two Fort Pierce Walgreens pharmacies. By contrast, a single Walgreens pharmacy in the Central Florida town of Oviedo bought 169,700 doses of oxycodone in 30 days.\nPeople on both sides of the counter knew what was going on: In a letter to the chief executive of Walgreens, Oviedo's police chief warned that people were walking out of the town's two Walgreens stores and selling their drugs on the spot, crushing and snorting them, or — still in the pharmacy's parking lot — injecting them.\nWhy Pharmacies Are Liable\nIn Fort Pierce, a Walgreens pharmacist accidentally provided an extra 120 oxycodone pills to a customer. When the druggist called to ask that the man return the pills, the customer's girlfriend bluntly responded that he was an addict, that he sold oxycodone and the 120 pills were "a pot of gold," DEA records show.\nThat was in September. The same man came back to the same Walgreens in December and January with a prescription in hand, and the pharmacy filled his prescriptions every time.\n'Wild West of Oxycodone Prescribing'\nCincinnati-based Masters Pharmaceuticals Inc. was a middling-sized drug distributor selling oxycodone to Florida pharmacies.\nIt sold to other customers in other states. But mostly, it sold to Florida: Oxycodone made up more than 60 percent of its drug sales in 2009 and 2010, according to federal records.
Of its top 55 oxycodone customers, 44 were in Florida.\nCompany CEO Dennis Smith worried that the Florida-bound oxycodone was getting in the wrong hands. A trip to Broward did nothing to ease his mind. “It was,” he later testified, “the Wild West of oxycodone prescribing.”\nBus and park benches touted pain clinics. When Smith picked up and thumbed through City Beat, a free magazine, he found pages of ads for pain clinics. “It would show young people sitting around a pool and it named the pain clinic and say (sic) ‘we dispense on site,’ and that really hit home hard.”\nSmith stopped selling to pain clinics. But the company continued to shovel millions of oxycodone pills to Florida pharmacies. Masters executives figured the pharmacies would keep an eye out for excessive prescriptions written by pill mill doctors. But not all pharmacies were worrying about doctors at pain clinics, many pharmacies were courting the pill mills prescribers.\nA Lake Worth Family Pharmacy\nIn 2009, the small pharmacy off Lucerne Avenue in Lake Worth had a history. It had been in business for 43 years. The owner and head pharmacist had been there for 32. It had shaded parking and a downtown location, a stone’s throw from the City Hall Annex.\nWhen a Masters inspector visited, he was alarmed to find Tru-Valu Drugs bustling with a long line of young, thin, tattooed customers arriving in groups of 10 to pick up pills. There were signs in the pharmacy warning of limits on the number of oxycodone pills handed out. Even Mallinckrodt Pharmaceuticals, an oxycodone manufacturer, was worried about the volume of its pill sales there.\nOf the 300,000 doses of all drugs the small pharmacy dispensed in December 2008, 192,000 were for oxycodone 30 mg, the dosage preferred by traffickers and users alike.\nThe huge oxycodone volume was no accident. The owner and head pharmacist, unidentified in DEA records, told a Masters inspector that the pharmacy “has pushed for this (narcotic) business with many of the area pain doctors.”\nAnd, despite the torrent of oxycodone going out the door, the pharmacy owner expressed frustration that drug distributors were limiting the amount of narcotics they would sell to his now-closed pharmacy.\nOhio to Florida and Back\nPharmacy after pharmacy benefited from the combination of Masters’ Ohio oxycodone business and Florida’s unregulated pill mills.\nIn Englewood, north of Fort Myers, the pharmacy owner filled prescriptions for six pain clinics — including clinics an hour’s drive away. A Masters inspector found cars from Tennessee and Kentucky in the parking lot and young men leaving the pharmacy carrying large trash bags.\nSuperior Pharmacy not only filled oxycodone prescriptions for pain clinics, it shared waiting room space with a pain clinic in a Temple Terrace strip mall outside Tampa. Neither Masters nor Superior had so much as Googled the background of pain clinic doctors writing those prescriptions, the DEA later said.\nHad they done so, the DEA dryly noted, they “would likely have come across a press release” announcing one of the doctors had been arrested and charged with trafficking in prescription drugs.\nHundreds of thousands of oxycodone pills were sent from Ohio distributors to Florida pharmacies. Unknown thousands of pills headed right back up to Ohio.\nWhen Ohio police burst into Christopher Thompson’s home outside Columbus, they found an assault rifle, $80,000 in cash and oxycodone from his Florida deals. 
A construction worker whose own pill habit started at age 14, Thompson oversaw a ring of 15 Ohio buyers who traveled to Florida to pick up oxycodone to resell in Central Ohio.\nTwo hours to the west in Martin's Ferry, David L. Kidd orchestrated a ring of buyers traveling to West Palm Beach and Central Florida to pick up oxycodone for resale on the streets of eastern Ohio and West Virginia.\nDoctors and pharmacies from Florida were complicit with Kidd's ring in fueling Ohio's opioid epidemic, wrote the U.S. attorney for West Virginia after Kidd's 2011 arrest: "The steady flow of pain pills into the Ohio Valley from Florida must stop."\nDriving To Pick Up Death By Rx\nWith more drugs came more deaths. In January 2010, say police, Fort Lauderdale pathologist Dr. Lynn Averill started a seven-month oxycodone shopping spree, buying 437,880 oxycodone pills from drug distributors.\nThe same month, Matthew Koutouzis drove from Toms River, N.J., to see Averill in her Broward County pain clinic. The 26-year-old collected prescriptions for 390 pills and overdosed two days later. Brian Moore traveled 13 hours from his Laurel County, Ky., home to see Averill. He left with prescriptions for 600 pills and also overdosed within 48 hours.\nKenneth Hammond didn't make it back to his Knoxville, Tenn., home. He had a seizure after picking up prescriptions for 540 pills and died in an Ocala gas station parking lot.\nKeith Konkol didn't make it back to Tennessee, either. His body was dumped on the side of a remote South Carolina road after he overdosed in the back seat of a car the same day of his clinic visit. He had collected eight prescriptions totaling 720 doses of oxycodone, methadone, Soma and Xanax. Konkol had every reason to believe he would get those prescriptions: In three previous visits to the Plantation clinic, he had picked up prescriptions for 1,890 pills.\nAn estimated 60 percent of her patients were from out of state, a former medical assistant told the DEA. In 2015, Averill pleaded not guilty to eight manslaughter charges. She is awaiting trial in Broward County. Averill was just one doctor at just one clinic. In 2010, the year Averill's patients overdosed, Florida received applications to open 1,026 more pain clinics.\nAn online message board advising drug users summed it up: "Just go anywhere in South Florida and look for a 'pain management clinic.' It shouldn't be too hard; you can't swing a dead cat without hitting one." Complain about anything from a back injury to a hangnail, it advised, "and they'll set you right up."\nBy this time, Kentucky had reined in its pill mills. It didn't matter. Ohio, Delaware, North Carolina and Connecticut acted as well, but other states' efforts didn't matter either. Florida continued ignoring the pill mills and rogue doctors feeding the nation's oxycodone habit, and the pills flowed.\n"There were folks down there where, if I had an opportunity to get my hands around their throat, I would have wrung their neck," said Huntington Mayor Steve Williams. On Florida's inaction he stated, "There was total evidence as to what was happening. It lays at the foot, in my opinion, of the public officials there that allowed it to continue on."\nGovernor Jeb Bush Backed A Solution\nOne of the first dinners Florida Gov. Jeb Bush hosted after moving into the governor's mansion in 1999 was a small one. Among those sitting at the table with Bush were Lt. Gov. Toni Jennings, state Sen. Locke Burt and James McDonough, who would become the state's hard-nosed drug czar.
There was an urgent topic on the agenda that night: the explosion of prescription painkillers. For the state's first family, it may have been personal. Bush had talked publicly about one of his children's struggle with addiction.\nBy the time the meal ended, all had agreed on the need for establishing a prescription drug monitoring program that would collect information and track prescriptions written for controlled substances, such as oxycodone.\nAbsent a prescription drug monitoring database, there was no way to know whether someone was "doctor shopping," going from doctor to doctor, getting more and more prescriptions to feed their habit.\nAnd there was no way to know whether a doctor was overprescribing, key to pinpointing whether a pill mill was operating, and where. Similar databases had been adopted by more than a dozen states. It was being described as a "silver bullet" to curb overprescribing. Soon enough, $2 million to get the database up and running would be on the table — but it came with a catch.\nFlorida Attorney General Misfires Against Purdue\nIn 2001, OxyContin-maker Purdue Pharma was fending off early criticism of its blockbuster painkiller. At issue was whether Purdue's aggressive marketing campaign had misled doctors and patients alike. Purdue and three top executives later pleaded guilty to federal charges of illegally marketing the drug. Far from being safe and non-addictive, OxyContin carried the same addiction risk as morphine, and was every bit as potent.\nBut that was six years away. In 2001, towns in Maine reported an alarming uptick in crime tied to OxyContin. The first of several congressional hearings was ramping up. Critics and parents who lost children were piling on. Reporters were starting to write stories.\nIn November, Florida Attorney General Bob Butterworth appeared poised to take on the company. Calling OxyContin street sales "a major threat to public health," Butterworth told a state Board of Medicine committee that Purdue should consider temporarily taking the drug off the market. It wasn't only traffickers concerning Butterworth. It was the sales pitch.\nIn late 2001, Butterworth called a young assistant attorney general into his office and gave him a magazine article on OxyContin and an assignment: Look into Purdue marketing. The young lawyer, now-Palm Beach County State Attorney Dave Aronberg, said he knew nothing about OxyContin. But he didn't like what he read.\nDuring the yearlong inquiry, 589 Floridians died after taking oxycodone. Nothing criminal was found, Aronberg later said. Instead, Butterworth and Purdue struck a settlement. As part of a $2 million deal, Purdue would pay to establish a prescription monitoring database, the same silver bullet sought by Bush. After Florida's computerized system was up and running, the same system would be free to any other state. The entire country, not just Florida, would benefit.\nIt could have been a groundbreaking deal. There was one catch. State lawmakers had to vote to create the prescription monitoring program by 2004, or Purdue would keep its money.\nMarco Rubio Kills The Anti-Oxy Rx Bill\nA political fight killed the program. "And there was one person who was responsible," said former state Sen. Burt, now an Ormond Beach insurance executive. "And it was Marco Rubio."\nA rising state lawmaker in 2002, now-U.S. Sen. Marco Rubio had the clout to make or break the legislation.
He had been one of two state House majority whips and was on the fast track to becoming House speaker.\nRubio didn't kill the 2002 bill out of opposition to prescription monitoring; it was politics as usual. Yet nobody blamed Rubio for the resulting opioid crisis that seems to have started in his political backyard and flourished beyond belief.\nU.S. Sen. Marco Rubio, R-Fla., was a leader in the Florida House in 2002 when he blocked a vote on prescription monitoring. That year, Rubio favored a bill changing the Miami-Dade County charter, which failed to pass because of a single "no" vote in the Senate. Burt cast the vote.\nAngered by what he saw as Burt's betrayal, Rubio killed the prescription drug monitoring bill. "When I found out he broke his word, it made the choice easy," Rubio told The Miami Herald.\nIt's not certain that the full Legislature would have passed the bill had it made it to a floor vote. Rubio was the first, not the last, in a line of state legislative leaders over years who would refuse to seriously consider the bill. Most cited privacy concerns.\nBut prescription monitoring databases in Florida and other states free to use Florida's model would have pinpointed rogue doctors, would-be pill mills and doctor-shoppers across the country, just as all three were beginning to converge. In doing so, they could have curbed a national opioid epidemic when it was just an emerging problem, not the monster it would become.\nOnly weeks after the 2002 bill was killed, Bush suppressed a sob as he discussed his daughter's arrest for forging a prescription. Court-ordered to drug treatment and then briefly to jail, Noelle Bush survived her pill addiction. The 2004 deadline for greenlighting a monitoring system passed. So did Purdue's million-dollar obligation to pay for it.\nBetween 2002, the year Rubio killed the database that could have identified doctor-shoppers, and late 2011, when the database finally came online, more than 20,800 Floridians died after taking prescription opioids, including OxyContin, annual Florida Medical Examiners' reports show. "Not getting that bill through the Legislature resulted in Florida becoming the pill mill capital of the United States," said Burt.\n"There was heartache for thousands of families beyond measure and it didn't have to happen."\nFlorida Officials Were Told Of The Oxy Express\nThe East Kentucky hills and valleys of Greenup County suit Keith Cooper, a long-haired undercover cop-turned-sheriff: "It's a backwater. I tell people all the time I am a hick sheriff from a hick location." And by 2011, the rural county and its sheriff had big-city problems.\nGreenup is near the stretch of interstate highways that provided drug traffickers and users with a straight shot to Palm Beach and Broward pill mills. It's less than an hour's ride to Huntington Tri-State Airport, where a $27 flight to Fort Lauderdale was a popular draw for dealers hoping to stock up.\nArrests for Florida pills soon eclipsed local arrests for pot.\n"When we locked 'em up, we take all their pill bottles and all their paperwork, and we found maps to the doctors' offices and everything," recalled Cooper.\n"I called the (Florida) medical board and gave them a big list of doctors," Cooper said. He called the state pharmacy board, too. He got no response.\n"So then I called the Attorney General's Office and the Governor's Office. I was calling them all, the whole state. Of course, I was talking to the state police the entire time. I told them, all of the profits were down there.
And all of the pain’s up here.” Nothing happened. Florida’s oxycodone pipeline continued to flow.\nOn the other side of the law in Greenup, Mikey Frazier was banking on it.\nThe Oxy Express\nFrazier was on a scholarship to play baseball at his junior college in Chicago when he suffered a torn rotator cuff. Doctors prescribed Percocet, a pill containing oxycodone, in 2002. When doctors cut him off, he bought it on the street. In 2006, he moved to OxyContin, nearly pure oxycodone. In 2007, he gave his friends money to go to Florida and bring him back pills.\n“My buddy had a minivan and he would actually go down one week and take two to three people with him, and then the following week I’d go,” said Frazier. He still remembers the route: “I’d take 64 East to 77 South to 95 South. And it’s just a straight shot.”\nOthers followed suit. “What got everyone started was because the doctors around here won’t write a strong enough prescription,” he recalled. OxyContin and generic oxycodone still could be had — just not in Kentucky, which had a prescription drug monitoring database.\nIn Florida, “there was none of that … stuff that they check and find out what doctor you’ve been to,” said Frazier.\n“And one person does it, and then they tell a friend, and then they go do it, and that’s how it all really got started here.”\nMEDICAID-MEDICAIRE PAID MILLIONS FOR OXY\nTallahassee wasn’t just ignoring the epidemic, It was financing it.\nBefore her office was raided by law enforcement in December 2001, Asuncion M. Luyao’s patients would wait in a line in the rain to get prescriptions from the Port St. Lucie internist and acupuncturist. She was one of the most prolific prescribers of OxyContin in the state.\nAnd hundreds of thousands of those pills were being paid for by Medicaid, Florida’s taxpayer-financed health program for the state’s poorest and sickest citizens. Between 1999 and 2001, Medicaid shelled out $935,634 for OxyContin prescriptions written by Luyao. That was just OxyContin. Luyao was prescribing an array of addictive drugs. In the 12 months leading up to the clinic raid, Medicaid paid roughly $1 million for 7,000 prescriptions, only about 17 percent of them for OxyContin.\nNor did the raid slow her down. Between the raid and her arrest on trafficking charges four months later, Luyao wrote another 282 OxyContin prescriptions billed to Medicaid. She was not an outlier. In 24 months, taxpayers footed the bill for more than 49 million doses of pills containing oxycodone, even though there were only 1.36 million Medicaid patients. Half were children.\nThe sheer volume of pills might have been a tipoff that the drugs were not all intended for legitimate use. So were arrest reports dating to 2001. One man had used his 7-year-old son’s Medicaid number to doctor-shop for OxyContin. A Miramar pharmacist who billed Medicaid $3.7 million for OxyContin pills was charged with paying Medicaid patients $150 each to use their IDs.\nMedicaid paid for more than $300,000 to fill Dr. James Graves’ OxyContin prescriptions. The Florida Panhandle physician was the first doctor in the nation convicted of killing patients by overprescribing OxyContin.\nAddiction risk for people taking high doses of oxycodone begins climbing after just three days, a recent study concluded. And most people on Florida Medicaid getting oxycodone prescriptions in 2011 were getting much more than a few days worth. 
They were getting an average of nine months worth of pills, state officials said.\nPill mill doctors prescribed 1 million of those pills:\nDoctors working for the George twins’ trafficking empire prescribed at least 102,081 oxycodone pills billed to Medicaid before the ring collapsed in 2010.\nWorking out of a Delray Beach pain clinic founded by a convicted drug smuggler, Zvi Harry Perper, son of the Broward County medical examiner, was arrested on trafficking charges, but not before he wrote prescriptions to Medicaid patients for 115,977 doses of oxycodone in 90 days.\nIn Lake Worth, Cesar Deleon was arrestedas part of a DEA pill mill sweep and charged with 55 counts of illegally distributing drugs. Deleon wrote orders for 20,302 oxycodone pills for Medicaid patients.\nMiami internist Dr. Selwyn Carrington authorized 32,411 doses of oxycodone for Medicaid patients in just two years. He was busted for signing his name to hundreds of prescriptions.\nFurther, Florida wasn’t in any hurry to stop doctors linked to pill mills.\nCarrington was arrested for overprescribing in March 2011. The state’s emergency order to suspend his license was signed months after he had pleaded guilty in 2012.\nPerper was busted at a Delray Beach pill mill operated by a former felon in 2011. The state did not act against his license until 2014.\nJoseph M. Hernandez was writing prescriptions from his car, a veritable pill mill on wheels, when he was busted in February 2010 on one count of trafficking in oxycodone.\n.Florida’s Department of Health didn’t file paperwork to restrict his license for almost 18 months.\nDuring that time, Hernandez wrote oxycodone prescriptions for Medicaid patients totaling 258,940 doses representing a taxpayer-footed bill of $130,165.\nPurdue Pharma’s Profits Before Patients Creed\nKelly Skidmore is exactly the type of person Purdue Pharma’s OxyContin marketing was intended to reach: Diagnosed with juvenile arthritis, the former state legislator’s struggle with chronic pain began at age 4.\nSkidmore was wary of opioid painkillers, though, one reason her willingness in 2009 to work with Purdue was surprising. But she did it to get Florida’s dormant drug monitoring database up and running.\nThen a state representative in a district straddling Palm Beach and Broward counties, Skidmore recalled that, “They came to me and said, ‘Could you help get it across the finish line?’ ”\nOxyContin and prescription opioids, a serious problem in 2002, had evolved into a full-blown crisis in the ensuing seven years. Broward alone had more pain clinics than it had McDonald’s. Deaths tied to oxycodone had exploded, up by 263 percent since the prescription monitoring database had first been proposed and killed. Overdoses from prescription opioids were claiming more than seven lives a day.\n“By God, if we had had seven dolphins a day dying and washing up on Florida beaches, we would have been appropriating money and solving it,” Skidmore said.\nSkidmore believed a database wasn’t going to resolve the underlying addiction crisis. Still, it was a start. Not a silver bullet, but “maybe silver buckshot,” she said. The database law passed with gaping loopholes. No health care professional would have to report opioid prescriptions or check the database before prescribing more, and the state refused to pay for it.\n“Just to get that one little piece … took nine years of filing bills and then it had no teeth,” Skidmore said. 
“And it should have been the easiest piece.”\nWhere Was The DEA and Everyone Else?\nThe DEA all but wrung its hands over Florida’s lethal inaction. The agency ticked off a devil’s brew of regulatory loopholes: Florida’s Health Department regulated health care professionals but not pain clinics. The state’s Agency for Health Care Administration regulated pain clinics that accepted insurance, but pill mills were most often on a cash-only basis. And the prescription monitoring database, mired in a vendor dispute, remained stalled.\nIn early 2011, when Gov. Rick Scott took office, just one drug — oxycodone — was tied to six fatal overdoses a day. Deaths tied to all drugs claimed 25 a day. In the handful of Appalachian states where traffickers were bringing back South Florida pills, it was worse.\nOhio’s death rate for oxycodone and similar opioids had doubled in 24 months, federal records show. Kentucky’s was up by more than 50 percent. And in West Virginia, home to hard-hit Huntington, death rates tied to pill mill drugs such as oxycodone and Opana had climbed by 341 percent.\nThe DEA formally pinpointed Palm Beach, Broward and Miami-Dade counties as the nation’s single biggest hub for trafficking pills across state lines. Within weeks of being sworn in, Scott abolished Florida’s Office of Drug Control, eliminating the state drug czar position, announced plans to drive a final stake in the heart of the database and rebuffed Purdue Pharma’s renewed offer to help pay for it.\nScott, a tea party conservative, cited privacy concerns, expressed skepticism the monitoring program would work and raised the possibility taxpayers would be left with a $500,000-a-year bill to operate it.\nAttorney General Pam Bondi had also ridden the tea party wave to her position. She shared many of Scott’s conservative convictions. Unlike Scott, the former prosecutor relentlessly lobbied to keep the database alive. Florida’s failure to adopt the drug monitoring database was so out of step with the rest of the country that it began spawning conspiracy theories on both sides of the law.\nEveryone knew prescription monitoring was going to kill the pill smuggling business, said a corrupt Florida Highway Patrol trooper as he drove a load of pills out of Florida, according to a federal lawsuit. Talking to the confidential informant in the seat next to him, the trooper speculated someone in Tallahassee must have a piece of the action, “because (Scott) was so adamant about not putting that system in place. Right?”\nIn Greenup, an infuriated Cooper told a reporter, “In my opinion, (Scott’s) getting money from somewhere. He has to be.” A few days later, recalled Cooper, “A lieutenant with the state police I’d been talking to down there called me, said, ‘Man, just a head’s up: I wouldn’t come to Florida.’” In states on the receiving end of the Florida pill pipeline and among federal officials, Scott’s resistance triggered outrage.\nIn Kentucky, where as much as 60 percent of the illicit oxycodone in that state flowed from Florida, Lt. Gov. Daniel Mongiardo proposed erecting billboards at the Florida line: “Welcome to the Oxy Tourism Capital of the World.”\nU.S. House Appropriations Chairman Hal Rogers, also from Kentucky, twice wrote Scott. “Canceling Florida’s prescription drug monitoring program is equal to firing firefighters while your house is ablaze,” he wrote.\nGil Kerlikowske, director of the White House Office of National Drug Control Policy, asked to meet with Scott. 
So did DEA Administrator Michele Leonhart.\nThree U.S. senators — New York’s Chuck Schumer, West Virginia’s Joe Manchin and Rhode Island’s Sheldon Whitehouse — joined Florida’s Bill Nelson in pointing out that the pills weren’t just a Florida problem: There were “serious ramifications for the rest of the country,” wrote Nelson of Scott’s reluctance to crack down. This is a perfect example of how political rhetoric, in-fighting and contrived agendas prevented an early stop to the emerging opioid crisis many years ago.\nWHY DIDN’T THE DEA, DRUG DISTRIBUTORS AND PHARMACIES TAKE NOTICE BEFORE THE OPIOID CRISIS SPREAD ACROSS THE COUNTRY LIKE WILDFIRE? WAS IT BECAUSE OF THE BILLIONS IN PROFITS, QUARTERLY BONUSES AND DIVIDENDS? STOCK OPTIONS CASHED IN BY BOARDROOMS AT EVERY OPIOID BIG PHARMA COMPANY? STAY TUNED FOR HOW “PROFITS BEFORE PATIENTS” BECAME THE NORM\n(article excerpts and quotes have been taken from publicly available media sources and court records)", "answers": ["Privacy concerns and skepticism about its effectiveness."], "length": 6048, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "e84f6371f23c95f766248ac4e52b781f325f52f09c6eb9ae"} {"input": "How many experiments were demonstrated to test the capabilities of the controller?", "context": "Paper Info\n\nTitle: Force Feedback Control For Dexterous Robotic Hands Using Conditional Postural Synergies\nPublish Date: Unkown\nAuthor List: Dimitrios Dimou, José Santos-Victor, Plinio Moreno\n\nFigure\n\nFig. 1.Example of modeling the contacts and friction during manipulation.\nFig. 2. Schematic representation of the proposed force controller.The input is the state (GRASP or RELEASE) and the force readings.Based on that the grasp size is adjusted by a value C and is given to the posture mapping function along with the desired grasp type.A finger configuration is then generated and commanded to the robot.\nFig. 3. Our control algorithm in Python-like pseudocode.\nFig. 4. Our first experiment.The robot picks up a bottle, transports it, and places down on the desk.In the bottom part of the figure, you can see the control signals during this task.\nFig. 5.The household objects used in our experiments.\nUnder the pictures of the execution you can see the signals recorded by the controller: the average normal force applied by all fingers (blue line), the thresholds f threshold high n .(purple dashed line) and f threshold low n.(yellow dashed line), the average tangential force (green), and the grasp size used in each time-step (red).The task is divided four stages: 1) (red part) the initial grasp of the object, in this stage the force controller closes the grasp until the applied normal\nFig.6.In the upper row of images, you can see our second experiment.The robot picks up the chips can, rotates it 90 degrees, and places back down.In the middle row, for our third experiment, the robot picks up the chips can, rotates it 90 degrees, and hands it over to a person.In the bottom row, for our forth experiment, the robot picks up a foam brick, rotates it 180 degrees, and hands it over to a person, using a pinch grasp.\n\nabstract\n\nWe present a force feedback controller for a dexterous robotic hand equipped with force sensors on its fingertips. Our controller uses the conditional postural synergies framework to generate the grasp postures, i.e. 
the finger configuration of the robot, at each time step based on forces measured on the robot's fingertips.\nUsing this framework we are able to control the hand during different grasp types using only one variable, the grasp size, which we define as the distance between the tip of the thumb and the index finger. Instead of controlling the finger limbs independently, our controller generates control signals for all the hand joints in a (lowdimensional) shared space (i.e.\nsynergy space). In addition, our approach is modular, which allows to execute various types of precision grips, by changing the synergy space according to the type of grasp. We show that our controller is able to lift objects of various weights and materials, adjust the grasp configuration during changes in the object's weight, and perform object placements and object handovers.\n\nINTRODUCTION\n\nTo perform complex manipulation tasks in unstructured environments, humans use tactile feedback from their fingers. This feedback is provided by tactile afferents located in the skin of the hand. Particularly, for handling small objects with precise movements, the afferents located in the fingertips are used, which have high density and adapt fast to pressure changes .\nThese afferents provide information about the characteristics of the exerted contact forces, such as the magnitude and the direction. For anthropomorphic robots to be able to perform dexterous tasks similar force feedback signals must be used to alleviate problems arising from uncertainty in measurements, and handle external perturbations.\nFor example, using open-loop position control to lift a heavy object may fail due to slip without any feedback mechanism to provide tactile information. Previous works have used tactile sensors to design force controllers that use slip prediction to update the desired normal forces applied by the fingertips.\nThe slip predictors are based on machine learning models such as neural networks and random forests to classify multi-modal signals from a tactile sensor. In all previous works, each finger was separately controlled by an independent force controller. In addition, they required labeled data to train the slip predictors and because each finger is controlled independently is not obvious how to implement different anthropomorphic grasp types.\nIn this work we develop a force controller that takes as input the force readings of the fingertips and computes the grasp size which is then used along with a grasp type label to generate a grasp posture with the desired characteristics. To avoid slippage the desired normal contact force is calculated to be proportional to the tangential contact forces.\nThe applied normal force is then controlled using the size of the grasp as a control variable. Larger grasp sizes mean less force is applied to the object. So the grasp size is calculated from the error between the desired normal force and the actual measured normal force. The grasp size is then given to the posture sampler that generates a grasp posture, i.e. the finger joint angles.\nThe posture sampler is modeled with a conditional Variational Auto-Encoder (cVAE) based on the framework proposed in . With this framework we abstract away the low-level control of the fingers and generate hand postures based on high-level properties such as the type and the size of the grasp. 
So it works as a mapping function that takes as input a low-dimensional vector and the grasp type and size as conditional variables and maps them to a set of joint angles.\nWe show that with our controller we can control a dexterous robotic hand to lift objects of different weights using three precision grasps. Our controller is also able to compensate and retain a stable grasp during changes in the objects' weight, for example when filling up a cup or emptying it. In addition we show how with the addition of the hand pose information we can use the controller to calculate if the tangential force is due to gravity or due to a support surface and use this information to perform handovers and place down objects on surfaces.\nWe perform several real-world experiments with a dexterous robotic hand to showcase the capabilities of our controller and support our design choices. To sum up our main contributions are • We develop a controller for a dexterous robotic hand that uses force feedback and the conditional synergies framework to perform dexterous manipulation tasks.\n• We show that with our controller we can easily use different precision grasp types, by changing only the grasp type variable which is given to the grasp posture mapping function. • We demonstrate by incorporating information about the world pose of the hand we can use our controller to perform additional tasks such as placing down and handing over objects.\nRoboticists have looked for inspiration in humans for developing methods for complex object manipulation . Neuroscientists have studied for a long time the processes that allow humans to use tactile feedback to perform complex manipulation tasks. Humans tend to adjust the grip force according to the object's weight, its friction and they use a safety margin to account for uncertainties .\nTo gather information about the tactile states they use multiple afferents that are located in the skin of the fingers . There are different afferents in different parts of the hand depending on their usage, e.g. fast adapting afferents in the fingertips for precise manipulation. Based on signals from these afferents, humans encode simple contact events into action phases, such as grasping, lifting or releasing, which they combine in order to perform more complex and long-horizon manipulation tasks .\nIn robotics tactile sensors have been used for object stabilization and slip prediction in a variety of settings. For example, in , a compliant anthropomorphic prosthetic hand was controlled using force sensing to maintain object stability and avoid slip. In , they develop a control approach that uses integrated force and spatial tactile signals to avoid slip with unknown objects in real world settings.\nIn , , grasp quality metrics are computed based on the tactile feedback from the robots fingertips. In these works, simple two or three fingered grippers were considered for simple grasping tasks. Force control with anthropomorphic robotic hands has also been explored in more recent works. In , they employ three slip prediction methods to estimate when slip starts and based on the force signals at that moment they calculate the friction coefficient value.\nBased on the calculated friction coefficient, they design a force controller that independently controls each finger to achieve a desired normal force. The desired normal contact force is set to be proportional to the tangential contact force and a safety margin based on the evidence found in . 
In , they train a random forest to classify the contact states into the classes: no contact, contact, slip.\nBased on this classification signal, when slip is detected they increase the desired normal contact force to avoid it. In they train a recurrent neural network to estimate slip and the object material from the readings of a Biotac sensor. The force controller is increasing the desired normal contact force when slip is detected.\nAll these works , , use tactile feedback sensors to predict slip. They collect labeled data, on which they train their models. This approach is based on complex and expensive tactile sensors, and the process of collecting data is cumbersome. In addition, the data do not cover all possible hand poses, which would be impractical.\nIn contrast, in our work we do not rely on slip prediction, we avoid slip by defining a tangential force gain and a safety margin that work for a large number of objects. Furthermore, instead of independently controlling each finger we use a synergistic framework to generate grasp postures, that is conditioned on two variables: the grasp type and the grasp size.\nThis way, instead of controlling the values of each joint of each finger, we control only the two conditional variables greatly simplifying the control pipeline. This also, gives us the ability to use different grasp types in our manipulation tasks by changing only the grasp type variable. In also a synergistic framework was used to prevent an object from slipping from a humanoid hand, but they modeled only one synergy for a tripod grasp and they used the forces on the robotic arm as feedback, while we use force feedback from the fingertips.\nOur control algorithm could also be applied to different hands as it does not depend on the hands configuration. Finally, in previous approaches only lifting tasks had been considered. In our work we demonstrate that our approach can be used to perform more complex tasks, such as placing objects on surfaces and performing handovers, which was not done in previous works.\nOur goal in this work is to design a control algorithm for an anthropomorphic robotic hand to perform dexterous manipulation skills such as lifting and placing down objects. Our control algorithm will use tactile feedback from the force sensors on the fingertips of the hand to decide the forces that need to be applied to the object in each step of the task.\nGiven the desired forces to be applied, the size of the grasp will be computed. Given the grasp size and a desired grasp type, the posture generator will generate a grasp posture, i.e. the hand configuration, such that the force constraints are satisfied. To model the contacts and friction we use Coulombs' law, which states that in order to avoid slip, the normal contact force f n to the contact surface of an object, times the fiction coefficient µ, has to be larger than the tangential force f t :\nµf n ≥ f t You can see an example in Figure , where an object is pressed against a wall by an applied normal force f n , and we have the tangential force f t = mg due to gravity. In order for the object to remain stable we need to apply a normal force: where µ is the friction coefficient between the object and the wall.\nIn the case of a dexterous hand manipulating an object, we want the normal forces applied by all fingers to be greater than the tangential force divided by the friction coefficient of the materials of the object and the fingertip. 
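As a rough numerical illustration of this Coulomb constraint, consider the following sketch; the object mass matches the heaviest benchmark object mentioned later (380 g), while the friction coefficient and safety handling are purely assumed example values, not numbers reported in the paper.

# Sketch of the Coulomb no-slip condition mu * f_n >= f_t for grip planning.
# The friction coefficient below is an assumed example value.
G_ACCEL = 9.81  # gravitational acceleration in m/s^2

def min_total_normal_force(mass_kg, mu):
    """Smallest total fingertip normal force (N) satisfying mu * f_n >= m * g."""
    f_tangential = mass_kg * G_ACCEL   # tangential load due to gravity
    return f_tangential / mu

if __name__ == "__main__":
    print(min_total_normal_force(mass_kg=0.38, mu=0.6))  # ~6.2 N for a 380 g object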
Since it is hard to accurately compute the friction coefficient between all possible object materials, previous works have used multi-modal tactile sensors like the BioTac sensor, which provides information about the pressure, skin deformation, and temperature, to predict slip and, based on that signal, to increase the applied normal force.\nIn our work we use the FTS3 sensor, which is a low-cost sensor that measures the 3D force applied at each fingertip. In addition, previous works gathered labeled datasets in order to train their slip prediction models, which is time-consuming and limits the possible orientations of the hand, because gathering labeled data for all possible orientations is impractical.\nTo overcome this we experimentally selected the parameters that determine the value of the applied normal force such that we avoid slip for all objects in our dataset, from the lightest to the heaviest. In order to guarantee contact between the fingertip and the object, in the beginning of the grasping phase, we use an offset f_n^offset as the minimum normal force applied by each finger.\nIn they also suggest that humans use an additional safety margin which is proportional to the tangential force, f_n^margin ∝ f_t. So the final desired normal contact force becomes f_n^des = f_n^offset + G f_t, where G is the gain that includes the friction coefficient and the additional safety margin. To alleviate the effects of noise in the sensors, the running averages of the measured normal force f_n and tangential force f_t are used, as a low-pass filter.\nSo for each new force measurement f we update the running average as f_avg ← α f + (1 − α) f_avg, where α ∈ (0, 1) is a parameter that determines how much new measurements affect the averaged value, and is experimentally selected. Given the measured normal force f_n from the fingertip sensors we can compute the error f_n^err = f_n^des − f_n. We use this error signal to control the grasp size variable g_size, which we use as a conditional variable in our posture mapping function.\nThe grasp size represents the distance between the thumb and the index finger in a grasp posture. So a smaller grasp size results in a tighter grasp and a greater normal force applied to the surface of the object. We use a linear controller for the grasp size variable, g_size ← g_size − K f_n^err, where K is a parameter that controls the rate of decrease of the grasp size and is experimentally selected.\nSo when the error between the desired normal force and the actual normal force is large, the grasp size decreases, so tighter grasp postures are generated in order to apply more normal force. In practice, in order to avoid oscillations in the grasp size, we use the desired normal force as a high threshold, f_n^thr,high = f_n^des, below which we want the measured normal force to stay.\nIf the normal force is below that threshold the grasp size does not change, even if there are small oscillations in the measured tangential and normal forces. Also, in order to avoid the hand applying too much force and damaging the hardware or the object, we use a low threshold f_n^thr,low = f_n^thr,high − w^threshold, where w^threshold is the width of the threshold band in mN.\nIf the measured normal force is below the low threshold, the grasp size increases in order to apply less force. So the final grasp size update for grasping only acts outside this band: the grasp size decreases when the measured normal force is above f_n^thr,high, increases when it is below f_n^thr,low, and is left unchanged in between. This is similar to the deadband control method , where instead of having a fixed reference point, an operating range is set.
If the response is in this range, the controller does not exert any correction.\nIn our case, the operating range changes according to the force signals from the robot's fingertips. The grasp posture mapping function is based on the conditional postural synergies model presented in . It uses a conditional Variational Auto-Encoder model to generate grasps postures conditioned on additional variables such as the grasp size.\nIn this work we augment this model to also generate grasp postures conditioned on the grasp type. The model is trained on a set of labeled grasp samples acquired by teleoperating a robotic hand using a data-glove. Using this model we are able to abstract away the low-level control of each joint of each finger and generate grasps based on more general characteristics such as the type and the size of the grasp.\nIn this way we can control all the fingers jointly by a single value, the grasp size, thus greatly reducing the control parameters. In addition we are able to use the same control algorithm for different precision grasp types, by changing the grasp type conditional variable. Finally, we can modify our controller to release objects instead of grasping them.\nGiven the pose of the hand in the world coordinate frame, which we can acquire from the robotic arm that is attached to, we can use the forward kinematics of the hand to compute the poses of each fingertip. Then using the force readings of each fingertip we can calculate the global direction of the net tangential force.\nIf the angle between the direction of the net tangential force and the direction of gravity is less than 90 degrees, i.e. the net tangential force's direction is towards the ground, we assume that the tangential force is due to gravity pulling the object, so the force controller tries to grasp it. If the angle is more than 90 degrees, i.e. the net tangential force's direction is upward, it means that something is pushing (or pulling) the object upward, in which case we assume that the object is touching on a support surface or someone is pulling the object so the controller increases the grasp size given to the posture mapping function proportionally to the normal force measured thus slowly releasing the object.\nOpening the grasp is done by controlling the grasp size variable as follows: That way we can place objects on surfaces but also perform robot to human handovers, where the robot holds the object and the human grasps the object and slightly pushes or pulls it up, signaling to the robot that there is a support surface.\nThe robot then slowly releases the object by opening its grasp. We showcase these scenarios in the experiments' section. Based on these observations, we present our force controller in Figure . The hand starts in an open pre-grasp position, a latent point is sampled from the prior distribution of the posture mapping function, and given the desired grasp type and the grasp size a grasp posture, i.e. the joint angles of the fingers, is sampled.\nThe initial grasp size is set to the maximum value, and when the force controller comes into effect and depending on the state of the system and the forces on the fingertips grasp size changes by some value C, according to equations 1,2, until the desired normal force is achieved. To choose between grasping or releasing an object we use a finite state machine formulation.\nWhen the hand reaches the desired grasp pose, which we assume is provided, the GRASP state is activated, in which the controller tries to grasp the object. 
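The grasp-phase deadband update described above can be summarized in a short sketch; the function signature, the sign conventions (compressive forces taken negative, as suggested by the -50 mN offset), and the exact update magnitudes are our assumptions, and the authors' actual implementation is the pseudocode of their Fig. 3.

def update_grasp_size(g_size, f_n_avg, f_t_avg,
                      f_offset=-0.050, gain=2.0, k=100.0, w_threshold=0.020):
    """One GRASP-state deadband update of the grasp size (illustrative only).

    Forces are in newtons and negative for compression; a smaller returned
    grasp size means a tighter grasp and more normal force on the object.
    """
    f_des = f_offset - gain * abs(f_t_avg)        # more tangential load -> more grip
    f_thr_high = f_des                            # upper edge of the deadband
    f_thr_low = f_des - w_threshold               # lower edge of the deadband
    if f_n_avg > f_thr_high:                      # gripping too weakly: tighten
        return g_size - k * (f_n_avg - f_thr_high)
    if f_n_avg < f_thr_low:                       # gripping too hard: loosen
        return g_size + k * (f_thr_low - f_n_avg)
    return g_size                                 # inside the band: leave unchanged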
When the controller detects that the tangential force applied to the object is coming from a support surface the state changes to the RELEASE state, in which the controller releases the object by opening the grasp.\nYou can see the full algorithm in Python-like pseudocode in Figure . To summarize, the advantages of our controller compared with previous approaches are threefold: 1) instead of controlling each joint of each finger of the hand we use only two variables, the grasp size and the grasp type, which allows us to perform multiple grasp types by changing only one variable while the grasp size variable is common among all grasp types, that greatly reduces the complexity of the control process compared to independently controlling a 21 DoF hand to perform different grasp types, 2) we do not rely on slip prediction for controlling the desired normal force, which involves gathering labeled data and works only for the hand poses in the training dataset, and 3) we can use our controller to also release objects instead of only grasping them.\n\nExperimental Set-up.\n\nFor our experiments we used the Seed Robotics RH8D Hand , which is a robotic hand with 7 DoFs. The hand is equipped with the FTS-3 force sensors in each fingertip, which are high resolution tactile sensors that provide the 3D force applied in each fingertip. The sensor provides data at a rate of 50Hz. For the experiments the hand was mounted on a Kinova Gen3 7DoF robot.\nTo train the posture mapping function we used the CyberGlove to teleoperate the hand and collect 468 grasps belonging to three precision grasp types: tripod, pinch, lateral tripod. The architecture of the cVAE model was the same as in , with the addition of the grasp type as a conditional variable, which was one-hot encoded.\nWe used 10 household objects shown in Figure . With the heaviest object weighing 380g and the lightest 1g. During the experiments the trajectories of the arm were prerecorded, while the hand was controlled online by our control algorithm.\n\nParameter tuning.\n\nTo select the values of the parameters in our controllers we conducted preliminary experiments where we tested lifting and releasing several objects, with different physical properties. To select the value of the normal offset force f of f set n , we used an empty plastic cup as our test object, and we choose a value such that the fingers do not deform the cup.\nThe final value of the parameter was set to -50 mN. To select the values of the gain G and the rate of decrease K, of the grasp size, we experimented with the heaviest object in our dataset, which is the mustard bottle and weighs 380g. The gain G was set to 2.0 such that the desired normal force would be enough to hold the object.\nThe rate of change of the grasp size was set to 100.0, based on the operating frequency of the force sensor and the range of values of the tangential force. 
For the tangential force averaging process we used a parameter value of α t = 0.7, because we want the controller to be sensitive to fast changes in its value, that can arise for example during lifting an object.\nFor the normal force averaging process we used a parameter value of α n = 0.5, as we do not want it to be affected by noise that could make the controller overconfident.\n\nExperiments.\n\nTo explore the capabilities of our controller, we demonstrate five experiments of increasing complexity: 1) we picked and placed a bottle using a tripod grasp, 2) we picked, rotated and placed a chips can on a box using a tripod grasp, 3) we picked, rotated and handed over the chips can to a person using a tripod grasp, 4) we picked, rotated and handed over a brown foam brick to a person using a pinch grasp, 5) a person handed over a plastic cup to the robot, filled it with coins to increase its weight, and the robot then handed it back to the person using a tripod grasp.\nYou can see the execution of the first experiment in In the middle row, for our third experiment, the robot picks up the chips can, rotates it 90 degrees, and hands it over to a person. In the bottom row, for our forth experiment, the robot picks up a foam brick, rotates it 180 degrees, and hands it over to a person, using a pinch grasp.\nFig. . In our fifth experiment, a person hands over an empty plastic cup to the robot, throws coins in it to increase its weight while the robot adjusts its grip to stabilize the object, and then hand overs the cup back to the person. force is below the offset f of f set n , 2) (green part) the robot lifts the object, as it tries to lift the tangential force increases, increasing the threshold, so the grasp size decreases to apply more normal force, 3) (orange part) the robot transports the object, you can see, in point A in the Figure, a perturbation in the tangential force when the robot begins to move, the controller responds by decreasing the grasp thus stabilizing the object, and 4) (blue part) the robot enters the releasing phase, where it lowers the arm until it detects that the tangential force is due to a support surface, then it stops lowering the arm and increases the grasp size slowly releasing the object.\nIn point B in the Figure, you can see that there is noise in the tangential force, due to the arm moving to place the object on the table, that is also reflected in the desired normal force. Because we use the desired normal force as a threshold and not as a reference signal this noise is not manifested in the control of the grasp size.\nYou can see the execution of the second experiment in the upper part of Figure . This experiment demonstrates the ability of the controller to handle arbitrary hand poses. The experiment is divided in four parts: 1) the robot enters the GRASP phase and the force controller generates grasps to achieve a normal contact force below the f of f set n threshold, 2) the robot lifts the object and adjusts the grasp size to avoid the object falling, 3) the hand rotates to place the chips can on the horizontal position, and 4) the robot enters the RELEASE phase, and the arm lowers until the object touches the box, when the hand detects the supporting surface, it starts to slowly release the object.\nYou can see the execution of the third experiment in the middle part of Figure . This experiment demonstrates the ability of the controller to perform robot to human handovers. 
The experiment is divided in four parts: 1) the robot enters the GRASP phase and the force controller generates grasps to achieve a normal contact force below the f of f set n threshold, 2) the robot lifts the object and adjusts the grasp size to avoid the object falling, 3) the hand rotates to place the chips can on the vertical position, and 4) the robot enters the RELEASE phase, the arm stays still, the human grasps the object from the bottom and slightly pushes it up, the hand then detects that there is a supporting surface and starts to slowly release the object.\nYou can see the execution of the fourth experiment in the bottom part of Figure . This experiment is similar to previous one, but the grasp type that the robot uses is a pinch grasp, that involves only the thumb and the index finger. To perform this we only had to alter the grasp type conditional variable that was given to the posture mapping function.\nYou can see the execution of the fifth experiment in the bottom part of Figure . In the first part (blue) of the experiment the robot closes its grasp, by reducing the grasp size, until the normal force is below the force offset. In the next three parts (pink, green, red) the person throws coins in the cup to increase its weight.\nYou can see in the signal plots that each time coins are added the tangential force decreases so the normal force threshold decreases too. The grasp sizes then decreases as well in order to apply more normal force. This experiment demonstrates the ability of the controller to handle perturbations in the weight of the object during grasping.\n\nCONCLUSION\n\nIn summary, we presented a controller that uses force feedback integrated with conditional synergies to control a dexterous robotic hand to grasp and release objects. We demonstrated that our controller can lift objects of different weights and materials while avoiding slip, react online when the weight of the object changes, place them down on surfaces, and hand them over to humans.\nIn addition, the control architecture is modular, so the synergy grasp mapping component can be easily changed in order to control several precision grasp types. However, our experiments also revealed various limitations of our controller. For example our method fails to stabilize the object when rotational slip occurs.\nIn addition hardware limitations such as, slow update rates and noise in the force measurements can create problems that result in the object falling. In future work we plan to incorporate additional sensing modalities, such as vision to alleviate some of these issues.", "answers": ["5."], "length": 4837, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "daa4eb9d8b28a987b1c2c049200634cdc510636b19a64ccd"} {"input": "What size chains were used in the benchmarking?", "context": "Paper Info\n\nTitle: Compressed quantum error mitigation\nPublish Date: 10 May 2023\nAuthor List: Maurits Tepaske (from Physikalisches Institut, Universität Bonn), David Luitz (from Physikalisches Institut, Universität Bonn)\n\nFigure\n\nFIG.3.The out-of-time-ordered correlator C otoc i=L/2,j (t) as a function of the operator position j and time t, for the infinite temperature initial state, for a denoised second-order Trotter supercircuit with Trotter depth Mtrot = 32 and denoiser depth M = 2.We consider evolution times t = 0.5, 1, ..., 5, for the periodic L = 14 Heisenberg chain that is affected by two-qubit depolarizing noise with p = 0.01.\nFIG. 4. 
The complex eigenvalues λ of the noisy second-order Trotter supercircuit with Mtrot = 16 at time t = 1 (left), the corresponding optimized denoiser with M = 4 (center), and the denoised Trotter supercircuit (right).The Trotter circuit is for a L = 6 Heisenberg model with PBC, and all twoqubit channels are affected by depolarizing noise with p = 0.0046.The unit circle, on which unitary eigenvalues must lie, is shown in black, and the noiseless eigenvalues are shown as blue bars.It is evident that the denoiser recovers all the noiseless eigenvalues from the noisy circuit.\nFIG. 2. The complex eigenvalues λ of the noisy second-order Trotter supercircuit with Mtrot = 16 at time t = 1 (left), the corresponding optimized denoiser with M = 4 (center), and the denoised Trotter supercircuit (right).The Trotter circuit is for a L = 6 Heisenberg model with PBC, and all twoqubit channels are affected by depolarizing noise with p = 0.036.The unit circle, on which unitary eigenvalues must lie, is shown in black, and the noiseless eigenvalues are shown as blue bars.It is clear that the denoiser recovers with high accuracy the noiseless eigenvalues from the noisy circuit.\nFIG. 3. The half-chain channel entanglement entropy S at different two-qubit depolarizing noise strengths p, for a secondorder Trotter supercircuit with Mtrot = 16 and t = 2, for a M = 4 denoiser.The Trotter circuit is for a Heisenberg model with PBC of size L = 6.The different curves correspond to the different supercircuits, i.e. the noisy supercircuit, the denoiser, the corresponding denoised supercircuit, and the noiseless variant.\nFIG. 4. The out-of-time-ordered correlator C otoc i=L/2,j (t) as a function of the operator position j and stacked time t, for the infinite temperature initial state, for a denoised secondorder Trotter supercircuit with Trotter depth Mtrot = 32 and denoiser depth M = 2.It is optimized at t = 2 and stacked up to ten times.The calculations are for the periodic L = 14 Heisenberg chain that is affected by two-qubit depolarization with p = 0.01.The denoiser is affected by the same noise.\nFIG.6.The distribution of the ZZ angle α of M = 2 denoisers (top panels) and M = 8 denoisers (bottom panels), with the lightest color corresponding to the denoiser for the Trotter supercircuit with t = 0.5, and the darkest color with t = 5.As usual, we consider the Heisenberg model on a periodic chain, and second-order Trotter supercircuits with depths Mtrot = 8, 16, 32, 64, which together with the denoiser is affected by a two-qubit depolarizing noise with p = 0.01.The panels are arranged as Mtrot = 8, 16, 32, 64 for top left, top right, bottom left, bottom right, respectively.\nFIG. 7. The sampling overhead γ of the optimized denoisers from Fig. 
2 of the main text, with denoiser depths M = 1, 2, 4, 6, 8 and Trotter depths Mtrot = 8, 16, 32, 64 at times t = 0.5, 1, ..., 5, for the Heisenberg model on a chain with PBC affected by two-qubit depolarizing noise with p = 0.01.The panels are arranged as Mtrot = 8, 16, 32, 64 for top left, top right, bottom left, bottom right, respectively.\nFIG.8.The domain wall magnetization Z dw after evolving a periodic density wall |dw |dw * with the denoised second-order Trotter supercircuits D C from Fig.2of the main text.These supercircuits have various Trotter depths Mtrot = 8, 16, 32, 64, denoiser depths M = 1, 2, 4, 6, 8, and evolution times t = 0.5, 1, ..., 5, for the periodic L = 14 Heisenberg chain that is affected by two-qubit depolarizing noise of strength p = 0.01.The denoiser is affected by the same noise.The non-denoised results are labelled with M = 0 and the noiseless results with p = 0.The panels are arranged as Mtrot = 8, 16, 32, 64 for top left, top right, bottom left, bottom right, respectively.We see that the denoiser allows us to recover the noiseless behavior.\n\nabstract\n\nWe introduce a quantum error mitigation technique based on probabilistic error cancellation to eliminate errors which have accumulated during the application of a quantum circuit. Our approach is based on applying an optimal \"denoiser\" after the action of a noisy circuit and can be performed with an arbitrary number of extra gates.\nThe denoiser is given by an ensemble of circuits distributed with a quasiprobability distribution. For a simple noise model, we show that efficient, local denoisers can be found, and we demonstrate their effectiveness for the digital quantum simulation of the time evolution of simple spin chains. Introduction.\n-Quantum information processing has been theoretically shown to hold great promises, and quantum algorithms were developed which can in principle achieve an exponential speed-up over their classical counterparts, both for general purpose computing and quantum simulation . However, present day quantum computing prototypes still suffer from significant noise processes which hinder the execution of many potentially groundbreaking quantum algorithms .\nNontrivial quantum algorithms typically require large sequences of quantum gates, each of which introduces dissipation and hence an overall loss of coherence, eventually rendering the results useless. Until quantum error correction becomes practical, quantum error mitigation seems to be more feasible to increase the accuracy of expectation values.\nHere the goal is to induce the (partial) cancellation of errors that stem from noisy quantum gates by extending the circuit corresponding to the desired algorithm with an ensemble of gates , sampled from a quasiprobability distribution. The traditional way to accomplish this is with the gatewise method from , where noise is mitigated by inverting the noise channel of each gate separately, i.e. the cancellation of errors is performed for each gate on its own.\nHere the local noise channel is approximated in a way such that it can be easily inverted analytically, e.g. using Pauli twirling . Gates are then sampled from the inverted noise channel by interpreting it as a quasiprobability distribution. 
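As a concrete illustration of sampling from such a quasiprobability decomposition, here is a self-contained sketch; the two-term signed decomposition at the bottom is a toy example, not the twirled inverse of any particular noise channel.

import random

def sample_signed_term(coeffs):
    """Draw one term from a signed decomposition sum_i eta_i K_i.

    Terms are drawn with probability |eta_i| / gamma, where gamma = sum_i |eta_i|
    is the sampling overhead; the estimator later reweights each shot by
    sign(eta_i) * gamma.
    """
    gamma = sum(abs(c) for c in coeffs)
    r = random.random() * gamma
    acc = 0.0
    for i, c in enumerate(coeffs):
        acc += abs(c)
        if r < acc:
            return i, (1 if c >= 0 else -1), gamma
    return len(coeffs) - 1, (1 if coeffs[-1] >= 0 else -1), gamma

eta = [1.1, -0.1]    # toy quasiprobability: gamma = 1.2 > 1, hence a sign problem
print(sample_signed_term(eta))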
Because in this gate-wise approach every noisy gate has to be modified separately, the sign problem is exponentially large in the number of gates, limiting the practicality of the mitigation.\nThe success of the gate-wise approach resulted in a large body of work concerning these methods , including extensions for simultaneous mitigation of multiple gates by Pauli-twirling entire layers or variationally constructing a mitigating matrix product operator . In principle, errors during the execution of a circuit can propagate and accumulate.\nThese propagated errors can potentially blow up and lead to large errors for the circuit as a whole . Here we introduce a mitigation technique that takes into account the propagation of errors, can be performed with a tunable number of extra gates, and works for non-Clifford local noise channels since the inversion of the accumulated global noise channel is implicit.\nFIG. 1. An example of the quantum error mitigation procedure used in this work for the time evolution of the wave function of a spin chain. The ideal second-order Trotter supercircuit C of depth Mtrot = 1 (light blue) is approximated by applying a denoiser D of depth M = 1 (red) to the noisy Trotter supercircuit C̃ (dark blue). Because the denoiser is applied after fully executing the noisy Trotter supercircuit, it represents an approximate inverse of the global noise channel with a precision tunable by the depth of the denoiser.\nWe first execute the targeted noisy circuit completely, letting the noise propagate and accumulate, and only afterwards we apply an extra random circuit sampled from a quasiprobability distribution. We call the corresponding ensemble of random circuits a denoiser, and we construct it such that upon averaging the accumulated errors cancel.\nEssentially, the denoiser inverts a global noise channel. Since we will construct it as a local brickwall circuit, following the classical preprocessing approach from , we call this compressed quantum error mitigation. Method. -Due to the inevitable coupling of a quantum processor to its environment, every qubit operation is affected by noise.\nTherefore, the simplest technique to minimize the impact of the resulting noise is to minimize the number of operations when performing a quantum algorithm. In we showed that many-body time evolution operators can be efficiently compressed into brick-wall circuits with high fidelity per gate. In this Letter, we consider the noise explicitly by treating quantum operations as (generally non-unitary) quantum channels, corresponding to completely positive and trace preserving (CPTP) maps .\nFor example, instead of a noiseless two-qubit gate G, which acts on a quantum state |ρ⟩ in superoperator form as G|ρ⟩ = G ⊗ G*|ρ⟩, we get the noisy channel G̃ = N G, where the noise channel N implements the two-qubit noise . These channels are used to construct a \"supercircuit\" C̃ = ∏_{i=1}^{N_G} G̃_i, consisting of N_G channels, which is affected by multi-qubit accumulated noise.\nThis supercircuit encodes an ensemble of circuits . For simplicity, we assume that the noisy channels G̃_i in each half brickwall layer are lattice inversion and translation invariant, such that we can construct a denoiser with these properties, limiting the number of variational parameters.
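A small numerical sketch of this superoperator picture is given below; it assumes the standard uniform two-qubit depolarizing channel (introduced later in the text, whose exact equation is not reproduced in this excerpt) and row-major vectorization of the density matrix, both of which are our conventions.

import numpy as np
from itertools import product

# Pauli matrices and the sixteen two-qubit Pauli strings.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS2 = [np.kron(a, b) for a, b in product([I2, X, Y, Z], repeat=2)]

def unitary_superop(u):
    """Superoperator of rho -> u rho u^dagger for row-major vectorized rho."""
    return np.kron(u, u.conj())

def depolarizing2_superop(p):
    """Uniform two-qubit depolarizing channel: identity with probability 1 - p,
    otherwise one of the 15 non-identity Paulis applied on both sides of rho."""
    s = (1 - p) * np.eye(16, dtype=complex)
    for pauli in PAULIS2[1:]:
        s += (p / 15) * np.kron(pauli, pauli.conj())
    return s

# Noisy two-qubit gate channel: noise composed with the ideal gate superoperator.
alpha = 0.3
zz_gate = np.diag(np.exp(-1j * alpha * np.array([1, -1, -1, 1])))  # exp(-i alpha Z⊗Z)
noisy_channel = depolarizing2_superop(0.01) @ unitary_superop(zz_gate)
print(noisy_channel.shape)  # (16, 16)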
The purpose of quantum error mitigation is to modify the ensemble of circuits described by C in a way that we can use it to obtain the noiseless expectation values.\nIn superoperator language, we do this by following the supercircuit C with a denoiser supercircuit D, such that D C is as close to the noiseless supercircuit C = C ⊗ C * as possible. Here C is the target unitary circuit. Because the noise channel N is non-unitary, hence making the supercircuit C non-unitary, we need to use a non-unitary denoiser to retrieve the unitary C.\nWe illustrate the mitigation procedure in Fig. , where a denoiser with one layer is used to mitigate errors for a second-order Trotter supercircuit with one layer. This circuit architecture is commonly used to simulate the time evolution of a quantum many-body system, until some time t, with controllable precision , and we will use it to benchmark the denoiser.\nIn practice, we cannot directly implement a supercircuit, and so we have to utilize its interpretation as an ensemble of circuits. Essentially, after executing a shot of the noisy circuit we sample the denoiser and apply it. The goal is to construct the denoiser in a way that averaging over many of its samples cancels the accumulated errors and gives us a good approximation of the noiseless expectation values.\nIt should be noted that our approach requires more gate applications on the quantum processor than with the gate-wise scheme, since there each sample from the mitigation quasiprobability distribution can be absorbed into the original circuit, whereas our approach increases the circuit depth. We take this into account by imposing the same noise on the denoiser.\nFurthermore, within our scheme, the dimensionality of the quasiprobabilistic mitigating ensemble can be controlled, in contrast to the gate-wise approach where it is equal to the gate count. To facilitate the stochastic interpretation we parameterize each two-qubit denoiser channel G i as a sum of CPTP maps, such that we can sample the terms in this sum and execute the sampled gate on the quantum processor.\nConcretely, we use a trace preserv-ing sum of a unitary and a non-unitary channel. For the unitary part we take a two-qubit unitary channel U( φ i ) = U ( φ i ) ⊗ U * ( φ i ), with U ( φ i ) a two-qubit unitary gate parameterized by φ i . For this we take the two-qubit ZZ rotation exp(−iα(σ z ⊗ σ z )) with angle α, which can be obtained from native gates on current hardware , and dress it with four general one-qubit unitaries, only two of which are independent if we want a circuit that is space inversion symmetric around every bond.\nThe resulting gate has 7 real parameters φ i . For the non-unitary part, which is essential because D has to cancel the non-unitary accumulated noise to obtain the noiseless unitary circuit, we use a general onequbit measurement followed by conditional preparation channel M( , with V a general one-qubit unitary and each κ i a 3-dimensional vector, resulting in a real 9-dimensional ζ i .\nThis yields the two-qubit correlated measurement M( With these parts we construct the parameterization with coefficients η i ∈ R that satisfy η 0 + η 1 = 1 because G i is trace preserving. Note that here the tensor product symbol corresponds to combining two one-qubit channels to make a two-qubit channel, whereas in most of the paper it is used to link the column and row indices of a density matrix.\nWe construct the denoiser from the noisy channels Gi = N G i . 
With this parameterization one denoiser channel has 17 independent real parameters, such that a denoiser of depth M , i.e. consisting of M brickwall layers, has 34M real parameters (we use one unique channel per half brickwall layer). For reference, a general channel has 544M parameters.\nTo determine the mitigated expectation values we use the full expression where |ρ_0⟩ is the initial state and |1⟩ is the vectorized identity operator on the full Hilbert space. To evaluate this on a quantum processor, we use the stochastic interpretation of (1) to resample . In particular, from each channel (1) we get a unitary with probability p_0 = |η_0|/γ and a measurement followed by conditional preparation with probability p_1 = |η_1|/γ.\nHere γ = |η_0| + |η_1| is the sampling overhead, which characterizes the magnitude of the sign problem from negative η_i . For quasiprobability distributions, i.e. with γ > 1, every denoiser sample has an extra sign sgn(η) = ∏_{g=1}^{N_G} sgn(η_g), where sgn(η_g) is the sign of the sampled coefficient of the gth channel. γ = 1 means that all signs are positive.\nFIG. 2. The normalized distance between the denoised Trotter supercircuit D C̃ and the noiseless Trotter supercircuit C (top panels), at evolution times t = 0.5, 1, ..., 5, and the two-point z-spin correlator C^zz_{i=L/2,j=L/2}(t) of a spin on the middle site at times 0 and t (bottom panels), for the infinite temperature initial state. We consider denoisers with depths M = 1, 2, 4, 6, 8 and second-order Trotter circuits with depths Mtrot = 16, 32, 64. In the top panels we use a Heisenberg chain with L = 8, and in the bottom panels with L = 14, both with periodic boundary conditions. All gates are affected by two-qubit depolarizing noise with p = 0.01. The non-denoised results are labelled with M = 0, and the noiseless values with p = 0.\nObservables Ô_{p=0} for the noiseless circuit are then approximated by resampling the observables from the denoiser ensemble, where γ = ∏_{g=1}^{N_G} γ_g is the overall sampling overhead, with γ_g the overhead of the gth gate. Clearly, a large γ implies a large variance of Ô_{p=0} for a given number of samples, with accurate estimation requiring the cancellation of large signed terms. The number of samples required to resolve this cancellation of signs is bounded by Hoeffding's inequality, which states that a sufficient number of samples to estimate Ô_{p=0} with error δ at probability 1 − ω is bounded by (2γ²/δ²) ln(2/ω) .\nSince γ scales exponentially in the γ_g , it is clear that a denoiser with large M and γ ≫ 1 will require many samples. We observed that decompositions with γ > 1 are crucial for an accurate denoiser. Restricting to γ = 1 leads to large infidelity and no improvement upon increasing the number of terms in (1) or the depth M of the denoiser.\nSimply put, probabilistic error cancellation of gate noise introduces a sign problem and it is crucial to find optimal parameterizations (1) which minimize γ to make the approach scalable. This issue arises in all high performance error mitigation schemes , because the inverse of a physical noise channel is unphysical and cannot be represented as a positive sum over CPTP maps.\nThis is clearly visible in the spectra of the denoiser, which lie outside the unit circle (cf. Fig. ). This makes the tunability of the number of gates in each denoiser sample a crucial ingredient, which allows control over the sign problem, because we can freely choose the η_i in (1).
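To make the estimator and the quoted Hoeffding bound concrete, a minimal sketch follows; the observable values fed to the estimator would come from hardware shots, and the numbers in the example call are arbitrary placeholders.

import math

def mitigated_expectation(shot_values, shot_signs, gamma):
    """Noiseless estimate: gamma times the signed average over denoiser samples."""
    n = len(shot_values)
    return gamma * sum(s * v for s, v in zip(shot_signs, shot_values)) / n

def sufficient_samples(gamma, delta, omega):
    """Hoeffding-style bound (2 gamma^2 / delta^2) ln(2 / omega) on the shot count."""
    return math.ceil(2 * gamma**2 / delta**2 * math.log(2 / omega))

print(sufficient_samples(gamma=1.2, delta=0.01, omega=0.01))  # roughly 1.5e5 shots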
For the parameterization (1) of denoiser channels, we try to find a set of parameters for error mitigation by minimizing the normalized Frobenius distance between the noiseless and denoised supercircuits, which bounds the distance of output density matrices and becomes zero for perfect denoising. We carry out the minimization of (4) on a classical processor, using gradient descent with a differential programming algorithm. Instead of explicitly calculating the accumulated global noise channel and subsequently inverting it, we approximate the noiseless supercircuit C with the denoised supercircuit D C, effectively yielding a circuit representation D of the inverse noise channel.\nResults. -To benchmark the denoiser we apply it to the second-order Trotter circuits of the spin-1/2 Heisenberg chain with periodic boundary conditions (PBC), whose Hamiltonian is written in terms of the Pauli algebra acting on the local Hilbert space of site i. A second-order Trotter circuit for evolution time t with depth M trot consists of M trot − 1 half brickwall layers with time step t/M trot and two layers with half time step.\nWe consider circuits that are affected by uniform depolarizing noise with probability p for simplicity, but our approach can be used for any non-Clifford noise. The two-qubit noise channel acts on neighboring qubits i and i + 1 and is applied to each Trotter and denoiser gate, with p = 0.01 unless stated otherwise.\nWe study circuits with depths M trot = 16, 32, 64 for evolution times t = 0.5, 1, ..., 5, and denoisers D with depths M = 1, 2, 4, 6, 8. In the top panels of Fig. we show (4) for a chain of size L = 8 as a function of time t. Here it can be seen that even for M trot = 32 a denoiser with M = 1 already reduces (4) by roughly an order of magnitude at all considered t.\nDepending on M trot and t, further increasing M lowers (4), with the biggest improvements occurring for high precision Trotter circuits with large depth M trot = 64 and short time t = 0.5, where the Trotter gates are closer to the identity than in the other cases. At the other extreme, for M trot = 16 the improvements are relatively small upon increasing M > 2. In all cases the denoiser works better at early times than at late times, again indicating that it is easier to denoise Trotter gates that are relatively close to the identity.\nTo probe the accuracy of the denoiser on quantities that do not enter the optimization, as a first test we consider the two-point correlator between spins at different times, where we have chosen the infinite temperature initial state, and C(t) is the Trotter supercircuit for time t. In the bottom panels of Fig. we show C zz i=L/2,j=L/2 (t) for the supercircuits from the upper panels, now for an L = 14 chain.\nHere we see that at M trot = 16 we can retrieve the noiseless values already with M = 1, but that increasing M trot makes this more difficult. At M trot = 64 we see larger deviations, and improvement upon increasing M is less stable, but nonetheless we are able to mitigate errors to a large extent. As a further test, we compute the out-of-time-ordered correlator (OTOC).\nIn Fig. we show the results for i = L/2, for a Trotter circuit with depth M trot = 32 and a denoiser with depth M = 2. Here we see that a denoiser with M ≪ M trot is able to recover the light-cone of correlations, which are otherwise buried by the noise.
In the Supplementary Material we consider how the denoiser performs at different noise levels p, and how the denoised supercircuits perform under stacking.\nThere we also calculate domain wall magnetization dynamics, and show the distribution of the optimized denoiser parameters and the sampling overhead associated with the denoiser as a whole. In Fig. we show the eigenvalues of the supercircuits for a noisy second-order Trotter supercircuit with M trot = 16 at t = 1 (left), the corresponding optimized denoiser with M = 4 (center), and the denoised supercircuit (right).\nThe eigenvalues λ of a unitary supercircuit lie on the unit circle, and in the presence of dissipation they are pushed to the center. We see that the spectrum of the denoiser lies outside the unit circle, making it an unphysical channel which cures the effect of the noise on the circuit, such that the spectrum of the denoised circuit is pushed back to the unit circle.\nThe noiseless eigenvalues are shown as blue bars, making it clear that the denoiser is able to recover the noiseless eigenvalues from the noisy circuit. In the Supplementary Material we show the spectra for a p = 0.036 denoiser, where we observe a clustering of eigenvalues reminiscent of those found in earlier works. There we also investigate the channel entropy of the various supercircuits.\nConclusion. -We have introduced a probabilistic error cancellation scheme, where a classically determined denoiser mitigates the accumulated noise of a (generally non-Clifford) local noise channel. The required number of mitigation gates, i.e. the dimensionality of the corresponding quasiprobability distribution, is tunable and the parameterization of the corresponding channels provides control over the sign problem that is inherent to probabilistic error cancellation.\nWe have shown that a denoiser with one layer can already significantly mitigate errors for second-order Trotter circuits with up to 64 layers. This effectiveness of low-depth compressed circuits for denoising, in contrast with noiseless time-evolution operator compression, can be understood from the non-unitarity of the denoiser channels.\nIn particular, measurements can have non-local effects, since the measurement of a single qubit can reduce some highly entangled state (e.g. a GHZ state) to a product state, whereas in unitary circuits the spreading of correlations forms a light-cone. To optimize a denoiser at L > 8, the optimization can be formulated in terms of matrix product operators or channels, which is convenient because the circuit calculations leading to the normalized distance and its gradient are easily formulated in terms of tensor contractions and singular value decompositions.\nThis provides one route to a practical denoiser, which is relevant because the targeted noiseless circuit and the accompanying noisy variant in (4) need to be simulated classically, confining the optimization procedure to limited system sizes with an exact treatment or limited entanglement with tensor networks.\nNonetheless, we can use e.g. matrix product operators to calculate (4) for some relatively small t, such that the noiseless and denoised supercircuits in (4) have relatively small entanglement, and then stack the final denoised supercircuit on a quantum processor to generate classically intractable states.\nAnalogously, we can optimize the channels exactly at some classically tractable size and then execute them on a quantum processor with larger size.
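Schematically (our shorthand for the procedure just described), the stacking strategy approximates the long-time evolution as C(nt) ≈ [D C(t)]^n : the denoiser D is optimized once for the supercircuit of a single, classically tractable time step t, and the denoised block is then repeated n times on the quantum processor. This is the procedure whose accuracy is benchmarked in the Supplementary Material.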
Both approaches are limited by the light-cone of many-body correlations, as visualized in Fig. , because finite-size effects appear when the light-cone width becomes comparable with system size.\nFIG. 1. The normalized distance (left) and z-spin correlator C zz i=L/2,j=L/2 (right), for a second-order Trotter supercircuit of depth Mtrot = 16 for time t = 1, affected by various two-qubit depolarizing errors p. We compare the values obtained with and without a denoiser, i.e. M > 0 and M = 0, to the noiseless values (p = 0). The denoiser is affected by the same noise as the Trotter circuit. We consider denoisers with depths M = 1, 2, 4, 6, 8, and we use an L = 8 Heisenberg chain with PBC for the normalized distance, while for the correlator we use L = 14.\n* david.luitz@uni-bonn.de\nWe observe that even for larger noise strength p, the local observable C zz improves significantly even with denoisers of depth M = 1.\nFor large noise strengths, we generally see that the optimization of the denoiser becomes difficult, leading to nonmonotonic behavior as a function of p, presumably because we do not find the global optimum of the denoiser. It is interesting to analyze the spectra of the supercircuits considered in this work.\nAs mentioned in the main text, the spectrum of the ideal, unitary supercircuit C lies on the unit circle. The comparison to this case is therefore instructive. In the main text, we showed an example of the spectra in Fig. for moderate noise strength. Here, we show additional data for stronger noise p = 0.036 in Fig. for a denoiser with M = 4 layers, optimized to mitigate errors for a second-order Trotter supercircuit with M trot = 16 layers at time t = 1.\nThe eigenvalues λ of the noisy supercircuit are clustered close to zero, far away from the unit circle (except for λ = 1), showing that the circuit is strongly affected by the noise. To mitigate the impact of the noise, the denoiser consequently has to renormalize the spectrum strongly. If it accurately represents the inverse of the global noise channel, its spectrum has to lie far outside the unit circle, which is the case.\nInterestingly, we observe a clustering of eigenvalues which is reminiscent of the spectra found in earlier works. By comparison to these works, we suspect that this is due to the local nature of the denoiser, and it warrants further investigation. The right panel of Fig. shows the result of the denoiser, pushing the eigenvalues back to the unit circle, with nearly the same distribution along the circle as the noiseless eigenvalues (blue bars).\nDue to the strong noise, this is not achieved perfectly, and it is clear that this cannot work in principle if the global noise channel has a zero eigenvalue. The complexity of an operator can be quantified by its operator entanglement entropy. Here we calculate the half-chain channel entanglement entropy S of the noiseless, noisy, denoiser D, and denoised supercircuits.\nWe define S as the entanglement entropy of the state that is related to a supercircuit C via the Choi-Jamiołkowski isomorphism, i.e. ψ C = χ C /N, where the process matrix χ C^{ab,cd} = C^{ac,bd} is simply a reshaped supercircuit and N ensures normalization. Then we have S = −Tr[ψ C ln ψ C ]. This entropy measure is a particular instance of the "exchange entropy", which characterizes the information exchange between a quantum system and its environment.\nIn Fig.
we plot the various S for a second-order Trotter circuit with M trot = 16 at t = 2, for a denoiser with M = 4, both affected by two-qubit depolarizing noise with p ∈ [10^-3 , 10^-1 ]. The Trotter circuit is for a Heisenberg model with L = 6 and PBC. We see that at large p, the noise destroys entanglement in the noisy supercircuit, and that the denoiser S increases to correct for this, such that the denoised supercircuit recovers the noiseless S.\nHere we investigate how denoised supercircuits perform upon repeated application. We optimize the denoiser for a Trotter supercircuit for a fixed evolution time t. Then, to reach later times, we stack the denoised supercircuit n times to approximate the evolution up to time nt. In Fig. we stack a denoised t = 1 supercircuit up to n = 20 times and calculate the correlation function, defined in the main text, for the middle site.\nWe consider Trotter depths M trot = 8, 16, 32, 64 and denoiser depths M = 1, 2, 4, 6, 8, for an L = 14 Heisenberg chain with p = 0.01 depolarizing two-qubit noise. The noisy results correspond to M = 0 and the noiseless results to p = 0. In Fig. we calculate the OTOC, defined in the main text, with stacked time evolution for a denoised t = 2 supercircuit with M trot = 32 and M = 2, stacked up to ten times.\nWe see that the stacked supercircuit performs very well, and the additional precision obtained by using deep denoisers (M = 8) pays off for long evolution times, where we see convergence to the exact result (black dashed lines in Fig. ) as a function of M.\nFIG. The two-point z-spin correlator C zz i=L/2,j=L/2 (t) of a spin on the middle site at times 0 and t, for the infinite temperature initial state, for denoised second-order Trotter supercircuits that are optimized at evolution time t = 1 and then stacked up to twenty times. We use Trotter depths Mtrot = 8, 16, 32, 64 and denoiser depths M = 1, 2, 4, 6, 8. The calculations were performed for a periodic Heisenberg model with L = 14 and PBC, affected by two-qubit depolarizing noise with strength p = 0.01, which also affects the denoiser. The non-denoised results are labelled with M = 0, and the noiseless results with p = 0. The panels are arranged as Mtrot = 8, 16, 32, 64 for top left, top right, bottom left, bottom right, respectively.\nThe costliest and most noise-susceptible operation is the two-qubit ZZ rotation with angle α, which is the foundation of the unitary piece in our channel parameterization, defined in the main text.\nFor completeness, we here present the α angles of the optimized denoisers. The results are shown in Fig. , which contains histograms for the channel count N G versus α. The histograms are stacked, with the lightest color corresponding to the angles of the denoiser at t = 0.5 and the darkest at t = 5. The top four panels are for a denoiser with M = 2 and the bottom four with M = 8.\nWe consider M trot = 8, 16, 32, 64. We see that in both cases the distribution widens upon increasing M trot , indicating that the unitary channels start deviating more from the identity. Moreover, while the M = 2 denoisers in all cases except M trot = 64 have ZZ contributions close to the identity, this is clearly not the case for M = 8.\nFor simplicity, we did not focus on obtaining denoisers with the smallest sampling overhead γ, which is required to minimize the sign problem and hence ease the sampling of mitigated quantities. Instead, we let the optimization freely choose the η i in the denoiser parameterization, as defined in the main text.\nIn Fig.
we show the sampling overhead of the denoisers from Fig. of the main text. We see that for M = 1 and M = 2 the sampling overhead is relatively small and uniform across the different t, whereas for M > 2 the optimization sometimes yields a denoiser with large γ and other times with small γ. This could be related to the difference in α distributions from Fig. .\nThe large fluctuations of γ appears to stem from the difficulty in finding optimal deep denoisers, and our optimization procedure likely only finds a local minimum in these cases. Here C(t) is the Trotter supercircuit for time t. In Fig. we show Z dw for the circuits from Fig.", "answers": ["L = 8 and L = 14."], "length": 5385, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "e568cc6d77a0a433937ab4bcf62e49b36a5cf7b3faa0d3ab"} {"input": "What experimental techniques were used to study the quantum dot structures in this research?", "context": "\\section{Introduction}\n\nDespite the rise of graphene and other 2D materials, semiconducting single-walled carbon nanotubes (SWNT) are still regarded as strong candidates for the next generation of high-performance ultrascaled transistors~\\cite{Cao_IBM_2015,IBM_2017,3D_CNT_FET} as well as for opto-electronic devices~\\cite{Review_Avouris,CNT_photonics} such as chip-scale electronic-photonic platforms~\\cite{Pernice_2016} or low-threshold near-infrared tunable micro-lasers~\\cite{Graf_2017}. \nEngineering a quantum dot (QD) along a (suspended) semiconducting SWNT foreshadows promising opportunities in the field of quantum information processing and sensing through recently proposed schemes such as detection and manipulation of single spins via coupling to vibrational motion~\\cite{Palyi_2012}, optomechanical cooling~\\cite{Wilson_Rae_2012} as well as all optical manipulation of electron spins~\\cite{Galland_all_optical_2008}. Furthermore, the quasi one-dimensional geometry of SWNTs allows for defining tunable p-n junctions induced by electrostatic doping through local gates~\\cite{Buchs_JAP,tunable_pn_2011}. Combining a well-defined QD within such a p-n junction structure could constitute a crucial building-block for the realization of highly desirable electrically driven, on-demand single photon emitters operating at telecom wavelength, based $e.g.$ on a turnstile device architecture~\\cite{turnstile_1994,turnstile_1999}.\nIn practice, QDs in carbon nanotubes have been reported predominantly for two different confinement structures: i) Engineered tunneling barriers at metal-nanotube contacts~\\cite{Pablo04nat} and/or by gate electrodes, used \\emph{e.g.} to manipulate single electron spins~\\cite{Laird:2015}, ii) Unintentional localization potentials stemming from environmental disorder~\\cite{Hofmann_2016}, allowing for single-photon emission mediated by localization of band-edge excitons to QD states~\\cite{CNT_photonics,Hoegele_2008,Walden_Newman_2012,Hofmann_2013,Pernice_2016_2}. Both types of structures are usually operated at cryogenic temperature due to small energy scales ranging from a few to few tens of millielectronvolts.\n\\\\\n\\indent Another technique for achieving confinement in SWNTs makes use of artificial defects such as covalently bound oxygen or aryl functionalization groups on the side walls of semiconducting SWNTs, inducing deep exciton trap states allowing for single-photon emission at room temperature~\\cite{Htoon_2015,tunable_QD_defects}. 
Also, carrier confinement between defect pairs acting as strong scattering centers has been reported for mechanically induced defects~\\cite{Postma_SET} as well as for ion-induced defects with reported level spacings up to 200 meV in metallic SWNTs~\\cite{Buchs_PRL}. The latter technique, combined with recent progress in controlling defects structure and localization~\\cite{Robertson_2012,Yoon_2016,Laser_writing_2017} offers a high potential for engineering a broad set of SWNT-based quantum devices operating at room temperature. \n\\\\\n\\indent Here, we demonstrate confinement of electrons and holes in sub-10 nm QD structures defined by ion-induced defect pairs along the axis of semiconducting SWNTs. Using low temperature scanning tunneling microscopy and spectroscopy (STM/STS), bound states with level spacings of the order of 100 meV and larger are resolved in energy and space. By solving the one-dimensional Schr\\\"odinger equation over a piecewise constant potential model, the effects of asymmetric defect scattering strength as well as the influence of the Au(111) substrate such as terrace edges on the bound states structure are remarkably well reproduced. By means of ab-initio calculations based on density functional theory and Green's functions, we find that single (SV) and double vacancies (DV) as well as chemisorbed nitrogen ad-atoms are good candidates to produce QDs with the experimentally observed features. These simulations also allow to study the scattering profile as a function of energy for different defect combinations.\n\n\\section{Experimental section}\n\nThe experiments have been performed in a commercial (Omicron) low temperature STM setup operating at $\\sim5$~K in ultra high vacuum. Topography images have been recorded in constant current mode with a grounded sample, using mechanically cut Pt/Ir tips. Differential conductance $dI/dV$ spectra, proportional in first approximation to the local density of states (LDOS)~\\cite{Tersoff85} have been recorded using a lock-in amplifier technique. The LDOS spatial evolution along a nanotube axis is obtained by $dI/dV(x,V)$ maps built by a series of equidistant $dI/dV$ spectra. Spatial extent mismatches between topography images and consecutive $dI/dV(x,V)$ maps have been systematically corrected~\\cite{Buchs_Ar}, and the metallic nature of the tip has been systematically checked on the gold substrate to prevent any tip artefacts before recording STM or/and STS data sets. \n\\\\\n\\indent Nanotube samples were made of extremely pure high-pressure CO conversion (HiPCo) SWNTs~\\cite{Smalley01} with a diameter distribution centered around 1 nm, FWHM $\\sim$ 0.3 nm~\\cite{Buchs_conf}. The measured intrinsic defect density was below one defect every 200 nm. SWNTs were deposited on atomically flat Au(111) surfaces from a 1,2-dichloroethane suspension, followed by an in-situ annealing process~\\cite{Buchs_APL_07,Buchs_Ar}.\n\\\\\n\\indent Local defects in SWNTs have been created in-situ by exposure to: (i) Medium energy $\\sim$ 200 eV argon ions (Ar$^{+}$) produced by an ion gun \\cite{Buchs_Ar,Buchs_PRL}, (ii) Low energy (few eV's) nitrogen ions (N$^{+}$) produced by a 2.45 GHz ECR plasma source~\\cite{Buchs_APL_07,Buchs_NJP_07}. 
In both cases, the exposure parameters have been calibrated to reach an average defect separation along the SWNTs of about 10 nm~\\cite{Buchs_Ar,Buchs_APL_07}.\n\n\\section{Results and discussion}\n\\subsection{Experimental LDOS patterns}\n\\begin{figure}\n \\includegraphics[width=8cm]{Figure_1.pdf}\n \\caption{\\label{exp_data_1} (a)-(b) 3D topography images (processed with WSXM~\\cite{WSXM}) of SWNT I with Ar$^{+}$ ions-induced defects, with sample-tip bias voltage ($V_\\mathrm{S}$) 1 V and tunneling current $I_\\mathrm{S}$ 0.1 nA. (c) Corresponding $dI/dV(x,V)$ map recorded along the horizontal dashed lines in (b), with $V_\\mathrm{S}=1$ V, $I_\\mathrm{S}=0.2$ nA. Spatial resolution $\\sim$ 0.3 nm. (d) 3D topography image of SWNT II with N$^{+}$ ions-induced defects, with $V_\\mathrm{S}=1$ V, $I_\\mathrm{S}=128$ pA. (e) Corresponding $dI/dV(x,V)$ map recorded along the horizontal dashed lines in (d), with $V_\\mathrm{S}=1.5$ V, $I_\\mathrm{S}=0.3$ nA. Spatial resolution $\\sim$ 0.2 nm.}\n\\end{figure}\nIn Fig.~\\ref{exp_data_1} (a) and (b), we show 3D STM images of the same semiconducting SWNT (referred as SWNT I in the following) with Ar$^{+}$ ions-induced defect sites labeled $d1-d5$ . Panel (d) shows a 3D STM image of a second semiconducting SWNT (referred as SWNT II) with N$^{+}$ ions-induced defect sites labeled $d6-d7$. In both cases, defect sites typically appear as hillock-like protrusions with an apparent height ranging from 0.5~{\\AA} to 4~{\\AA} and an apparent lateral extension varying between 5~{\\AA} and 30~{\\AA}~\\cite{Buchs_NJP_07,Buchs_Ar,Thesis_Buchs}. \n\\\\\n\\indent The resulting $dI/dV(x,V)$ maps recorded along the horizontal dashed line drawn in the STM images (b) and (d) are displayed in panels (c) and (e) in Fig.~\\ref{exp_data_1}, respectively. Defect signatures in the LDOS in both cases are characterized by deep in-gap states at the defects positions. This is consistent with the expected defect structures, $i.e.$ mainly SVs, DVs and combinations thereof for collisions with Ar$^{+}$ ions~\\cite{Buchs_Ar} and bridgelike N ad-atom for collisions with N$^{+}$ ions~\\cite{Thesis_Buchs,Nitrogen_prb_07}. Note that gap states at energy levels $\\sim$~0.2 eV and $\\sim$~0.05 eV in panels (c) and (e), respectively, are shifted to the right from $d3$ by about 1 nm and to the right from $d6$ by about 2 nm. This indicates the presence of intrinsic or ion-induced defects on the lateral or bottom side wall of the SWNTs~\\cite{Kra01prb}, not visible in the topographic images. These defects are labelled $d3'$ and $d6'$, respectively. \n\\\\\n\\begin{figure}\n \\includegraphics[width=12cm]{Figure_2.pdf}\n \\caption{\\label{exp_data_Ar} (a)-(b) QD I detailed $dI/dV(x,V)$ maps in conduction and valence bands. Lower subpanels contain QD states linecut profiles and stationary wave-like fits in left and right QD parts. Right subpanels contain experimental energy dispersion relation data sets $k_\\mathrm{n}(E_\\mathrm{n})$ and tight-binding calculations. (c)-(d) Resulting LDOS calculated from a one-dimensional piecewise constant potential model featuring potential barriers and a potential step (gray area), with position of the potential step: 5.09 nm from the right barrier's center, potential step heigth: $U_\\mathrm{C}=V_\\mathrm{L}-V_\\mathrm{R}=60$ meV, barrier heights: $V_\\mathrm{d3'}=1$ eV, $V_\\mathrm{d4}=0.85$ eV, barrier widths: $a_\\mathrm{d3'}=a_\\mathrm{d4}=3.4$ nm. 
Valence band: $V_\\mathrm{d3'}=-0.4$ eV, $a_\\mathrm{d3'}=a_\\mathrm{d4}=2.5$ nm, $V_\\mathrm{d4}=-0.4$ eV. $E_\\mathrm{g}$ stands for bandgap energy.}\n\\end{figure}\n\\begin{figure}\n \\includegraphics[width=12cm]{Figure_3.pdf}\n \\caption{\\label{exp_data_N} (a) QD II detailed $dI/dV(x,V)$ map. Lower subpanels contain QD states linecut profiles and stationary wave-like fits in the left and right QD parts. Right subpanel contains experimental energy dispersion relation data sets $k_\\mathrm{n}(E_\\mathrm{n})$ and tight-binding calculations. (b) Resulting LDOS calculated from a one-dimensional piecewise constant potential model featuring potential barriers and a potential step (gray area) with position of the potential step: 4.7 nm from the right barrier's center, potential step heigth: $U_\\mathrm{C}=V_\\mathrm{L}-V_\\mathrm{R}=60$ meV, barrier heights: $V_\\mathrm{d6'}=0.6$ eV, $V_\\mathrm{d7}=0.6$ eV, barrier widths: $a_\\mathrm{d6'}=1.5$ nm, $a_\\mathrm{d7}=2.6$ nm.}\n\\end{figure}\n\\indent Remarkably, the $dI/dV(x,V)$ maps in Fig.~\\ref{exp_data_1} exhibit several broad discrete states in the conduction bands of SWNT I, II (white dashed boxes in panel (c) and (e), respectively) and in the valence band of SWNT I (white dashed box in panel (c)), characterized by a modulation of the $dI/dV$ signals in the spatial direction between pairs of consecutive defect sites $d3'-d4$ and $d6'-d7$. Enlarged plots of these boxed regions are displayed in Fig.~\\ref{exp_data_Ar}(a)-(b) and Fig.~\\ref{exp_data_N}(a) for SWNTs I and II, respectively. In the conduction bands, cross-sectional curves recorded along the black horizontal dashed lines labelled m1--m3 in Fig.~\\ref{exp_data_Ar}(a) and m1--m4 in Fig.~\\ref{exp_data_N}(a) are plotted below the LDOS panels. These clearly reveal one to three and respectively one to four spatially equidistant maxima. The number of maxima increases for increasing $\\left|V_\\mathrm{bias}\\right|$ and the measured level spacings between consecutive discrete states is of the order of 100 meV and larger for both cases. This indicates that defect sites $d3'-d4$ and $d6'-d7$, respectively separated by 12.1 nm and 11.2 nm, act as strong scattering centers able to confine carriers in semiconducting SWNTs~\\cite{Buchs_PRL,Bercioux_prb_2011}. Such intrananotube QD structures will be referred as QD I (in SWNT I) and QD II (in SWNT II) in the following. We estimated the level spacings in the conduction band of QD I to 98 meV (m1-m2) and 116 meV (m2-m3). For QD II, we measured 122 meV (m1-m2), 185 meV (m2-m3) and 210 meV (m3-m4).\n\\\\\n\\indent In the valence band of SWNT I, discrete states with level spacings of the order of 80-90 meV, with one clear maximum at the level m-1, can also be distinguished between defect sites $d3'-d4$ in Fig.~\\ref{exp_data_Ar}(b). The discretization of the states indicates that this QD structure also confines holes. Discrete states starting from m-2 and lower show less well defined structures compared to the conduction band states. In the case of SWNT II, no clear discrete states are observed in the valence band (see supplementary information). These observations are most probably the result of an energy dependent scattering strength of the defects, respectively $d3'$-$d4$ and $d6'$-$d7$, leading here to a weaker confinement in the valence band. Such energy dependence is well known for metallic SWNTs~\\cite{Chico96,vac_2007,mayrhofer:2011,Bockrath_Science01} and is corroborated by our ab-initio calculations. 
Note that mixing effects with defect states and substrate-induced effects~\\cite{substrate_effects} cannot be ruled out.\n\\\\\n\\indent Another remarkable feature in the LDOS is the strong spatial asymmetry of the lowest energy states m1 and m-1 in QD I and m1 in QD II. In QD I, m1 is shifted to the right side of the dot while m-1 is shifted to the left side. Higher states m2 and m3 show more symmetry in terms of position of the maxima relative to the center of the QD. In QD II, m1 is shifted to the right side of the QD. We attribute the observed lowest energy states asymmetry (for electrons as well as for holes) in part to their strong sensitivity to weak potential modulations within the QD structure (as we will show in section \\ref{1D}). For QD I, this assertion is supported by the observation of a 0.25 nm high Au(111) terrace edge located around the center of the QD, leading to a supported-suspended interface (see white dashed lines in Fig.~\\ref{exp_data_1}(b) and more topographic details in Fig.~S2(a)-(d) in supplementary information). Such configurations have been reported to induce a rigid shift in the SWNT bands~\\cite{Clair_2011}, for instance here a down-shift in the right side of QD I corresponding to the \"suspended\" portion between two terraces. In QD II, we attribute the spatial shift of m1 to a potential modulation induced by a layer of disordered impurities, most probably residua from the 1,2-dichloroethane suspension, lying between the gold substrate and the SWNT (see Fig.~\\ref{exp_data_1}(d) and Fig.~S2(e)-(h) in supplementary information). \n\\\\\n\\indent Also, the LDOS in QD I and II (Fig.~\\ref{exp_data_Ar}(a) and Fig.~\\ref{exp_data_N}(a), respectively) reveals asymmetric patterns with curved stripes oriented from top left to bottom right for QD I and from bottom left to top right for QD II. These are characteristic signatures for defect pairs with different scattering strengths~\\cite{Bercioux_prb_2011,Buchs_PRL}. For instance here, the left defect in QD I ($d3'$) has a larger scattering strength than the right one ($d4$), while the right defect in QD II ($d7$) has a larger scattering strength than the left one ($d6'$). \n\\\\\n\\indent The exact atomic structure of the defects could in principle be determined from a comparison of $dI/dV$ spectra with simulated first-principle LDOS signatures of expected defect types. In reality, this is hampered by the large number of possible geometries to simulate, including complex multiple defect structures~\\cite{Buchs_Ar}, together with the large unit cells of the semiconducting chiral SWNTs studied here.\n\\\\\n\\subsection{1D piecewise constant potential model}\n\\label{1D}\nTo better understand the physical origins of the non-trivial signatures of the quantized states, we model the experimental $dI/dV$ maps by solving the time independent one-dimensional Schr\\\"odinger equation over a piecewise constant potential model of QD I and QD II. The scattering centers are approximated by semi-transparent rectangular tunneling barriers leading to a square confinement potential~\\cite{Laird:2015}. This is supported by previous results on defect-induced confinement in metallic SWNTs using the same experimental conditions~\\cite{Buchs_PRL} and is consistent with ab-initio simulations presented later in this work. The potential modulation within the QD is approximated by a potential step. The resulting potential geometries are illustrated with gray shaded areas in Fig.~\\ref{exp_data_Ar} (c) and (d) and Fig.~\\ref{exp_data_N}(b). 
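\\
\indent For illustration only, the quasi-bound levels of such a piecewise constant potential can be obtained numerically, $e.g.$ by finite-difference diagonalization of the one-dimensional Hamiltonian. The minimal sketch below assumes a parabolic band with effective mass $m^{*}$ instead of the tight-binding dispersion relations used in this work, and example parameters of the order of those quoted below for QD I; it illustrates the procedure but is not meant to reproduce our fits.
\begin{verbatim}
# Minimal illustrative sketch (not the analysis code): quasi-bound states
# of a 1D piecewise-constant potential -- two rectangular barriers
# enclosing the dot plus a small potential step inside it.
import numpy as np

hbar = 1.0545718e-34    # J s
me   = 9.1093837e-31    # kg
eV   = 1.602176634e-19  # J

mstar = 0.08 * me       # assumed effective mass (illustrative only)
Lbox, N = 30e-9, 1500   # simulation box (m) and number of grid points
x  = np.linspace(0.0, Lbox, N)
dx = x[1] - x[0]

V = np.zeros(N)                                    # potential in joules
V[(x >= 8.0e-9)  & (x <= 11.4e-9)] = 1.00 * eV     # left barrier, 3.4 nm wide
V[(x >= 20.1e-9) & (x <= 23.5e-9)] = 0.85 * eV     # right barrier, 3.4 nm wide
V[(x >  11.4e-9) & (x <  16.7e-9)] += 0.060 * eV   # 60 meV step, left dot half

t = hbar**2 / (2.0 * mstar * dx**2)                # hopping of discrete Laplacian
H = np.diag(2.0 * t + V) - t * np.eye(N, k=1) - t * np.eye(N, k=-1)
E, psi = np.linalg.eigh(H)                         # hard walls at the box edges

dot = (x > 11.4e-9) & (x < 20.1e-9)                # dot region between barriers
for En, psin in zip(E, psi.T):
    if En < 0.85 * eV and np.sum(psin[dot] ** 2) > 0.5:
        print(f"quasi-bound level at {En / eV * 1e3:.0f} meV")
\end{verbatim}
In the actual analysis we use the measured, chirality-dependent dispersion relations rather than this parabolic approximation.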
Dispersion relations $E(k)$ can be extracted experimentally from the quantized states wavefunctions by measuring the energy and corresponding momenta in the left and right sides of the QDs. The wavevectors $k$ are determined using stationary wave-like fitting functions~\\cite{Buchs_PRL} displayed with dashed red curves in Figs.~\\ref{exp_data_Ar}(a)-(b) and ~\\ref{exp_data_N}(a)). From this procedure, the potential step height and position can be estimated (see supplementary information). The experimental data sets $E(k)$ are plotted in the right panels of Figs.~\\ref{exp_data_Ar}(a) and \\ref{exp_data_N}(a) together with dispersion relations from a third-nearest neighbor tight-binding calculation closely approximating ab-initio results~\\cite{Reich_TB_2002}. These chirality-dependent tight-binding dispersion relations, calculated within an extended Brillouin zone resulting from the defect-induced breaking of the translation invariance~\\cite{Bercioux_prb_2011}, are used in the Hamiltonian of our one-dimensional model. Taking into account the measured chiral angle, diameter distribution~\\cite{Buchs_conf} and measured bandgaps, we find the best match with chiralities $(7,6)$ for QD I and $(11,1)$ for QD II (see supplementary information). \n\\\\\n\\indent Once chiralities together with potential step heights and positions are optimized, one can fit the height and width of the rectangular tunneling barriers in order to reproduce the experimental level spacings and general LDOS patterns. On a qualitative ground, a symmetric double barrier system results in the formation of spatially symmetric discrete bound states. Increasing both barrier heights simultaneously shifts the bound state energy levels and level spacings up. This leads to sharper bound states as the confinement in the QD is made stronger thus increasing the lifetime of the confined electrons. Increasing the barrier thickness with constant inner edge separation does not affect much the level spacings but further sharpens the bound states. Any asymmetry introduced by a change in the width or height of one single barrier leads to broader bound states. The presence of a potential step modifies the LDOS in lifting the levels of the bound states, with a more pronounced effect on the lower states. In QD I and II, the center of each barrier is aligned with the center of the gap states ($d3'$-$d4$ for QD I and $d6'$-$d7$ in QD II) and the width ratio is kept proportional to the ratio of the spatial extent of the gap states. Thus, by increasing the width of the barriers, we decrease the length of the QD leading to higher level spacings, and vice versa. The experimental level spacings can then be approximated by tuning both barrier widths in the same ratio and the heights individually, knowing that the scattering strength of $d3'$ ($d7$) is larger than $d4$ ($d6'$) according to the observed asymmetry in the LDOS described above \\footnote{The transmission probability through a rectangular tunneling barrier is given by $T=\\left( 1+\\frac{V^{2}\\sinh^{2}\\left( a \\cdot \\sqrt{2m^{*}(V-E)}/\\hbar \\right)}{4E(V-E)} \\right)^{-1}$, where $V$ and $a$ are respectively the barrier height and width. For the argument in the $\\sinh$ sufficiently small such that $\\sinh(x)\\simeq x$, it can be shown that $a$ and $V$ can be coupled such that the transmission probability becomes a function of the area under the barrier $A=a\\cdot V$, with $T=\\left( 1+ \\frac{m^{*}A^{2}}{2\\hbar^{2}E} \\right)^{-1}$. 
In our case, this condition is not satisfied and thus the barrier geometries are tuned empirically to fit the experimental level spacings.}. \n\\\\\n\\indent For QD I, we find a good match in the conduction band for the barrier heights $V_\\mathrm{d3'}=1$ eV and $V_\\mathrm{d4}=0.85$ eV, widths $a_\\mathrm{d3'}=a_\\mathrm{d4}=$ 3.4 nm, and potential step $V_\\mathrm{L}-V_\\mathrm{R}=60$ meV. With these parameters, the spatial profile of the obtained quantized states (see lower subpanels in Fig.~\\ref{exp_data_Ar}(a) and (c)) reproduces the experimental modulation features remarkably well. Also, the simulated LDOS displays a pattern with curved stripes oriented from top left to bottom right, as observed experimentally, due to a left barrier with a larger scattering strength. In the valence band, although modes m-2 and lower do not show a well defined structure in the spatial direction, thinner barriers with dimensions $a_\\mathrm{d3'/d4}=2.5$ nm, $V_\\mathrm{d3'/d4}=-0.4$ eV, leading to a slightly longer QD length (9.6 nm compared to 8.7 nm in the conduction band) can reproduce the measured level spacings very well. \n\\\\\n\\indent For QD II, we observed that the measured energy levels are overestimated by a factor $\\alpha\\sim1.29$, presumably due to a voltage division effect induced by the impurity layer mentioned above (see details in supplementary information). We find a good agreement with the experimental LDOS with the parameters: $V_{d3'}=V_{d4}\\simeq$ 0.47 eV, $a_\\mathrm{d6'}=1.5$ nm, $a_\\mathrm{d7}=2.6$ nm and $U_\\mathrm{C}=V_\\mathrm{L}-V_\\mathrm{R}\\simeq 47$ meV. Note that in Fig.~\\ref{exp_data_N}(b) the barrier and potential heights are multiplied by $\\alpha$ to allow a direct comparison with the experimental LDOS. The simulated LDOS shows a pattern with curved stripes oriented from bottom left to top right, as observed experimentally, due to a right barrier exhibiting a larger scattering strength. Also, the spatial profile of the obtained bound states (see lower subpanels in Fig.~\\ref{exp_data_N}(a) and (b)) reproduces the experimental features quite well. Note also that one can distinguish an isolated state in the experimental LDOS at an energy level between m1 and m2, about in the middle of the QD. This state that prevented an accurate fit of the state m2 in the right QD part is attributed to a spatial feature visible in the STM topography image in Fig.~\\ref{exp_data_Ar}(d) (see also supplementary information, Fig.S2(f)), probably a physisorbed impurity which does not affect the LDOS significantly.\n\\\\\n\\subsection{Ab-initio calculations}\n\\begin{figure}\n \\includegraphics[width=16cm]{Figure_4.pdf}\n \\caption{\\label{num_data} (a)-(c) LDOS ab-initio simulations of a semiconducting $(16,0)$ SWNT with combinations of vacancies defects separated by 11.1 nm. Subpanels display QD state linecut profiles. (d) Tight-binding (black curve) and ab-initio dispersion relations (green circles) for a pristine $(16,0)$ SWNT with $E_\\mathrm{n}(k_\\mathrm{n})$ data sets extracted from (a)-(c). (e)-(g) LDOS ab-initio simulations of a semiconducting $(17,0)$ SWNT with combinations of N ad-atoms and vacancies defects separated by 10.7 nm. 
(h) Tight-binding (black curve) and ab-initio dispersion relations (green circles) for a pristine $(17,0)$ SWNT with $E_\\mathrm{n}(k_\\mathrm{n})$ data sets extracted from (e)-(g).}\n\\end{figure}\nIn order to elucidate the physical nature of the electron/hole confining scattering centers, we performed ab-initio simulations based on a combination of density functional theory~\\cite{pbe,paw,vasp_paw,VASP2}, maximally localized Wannier orbitals~\\cite{transportwannier90} and Green's functions (see supplementary information). Without loss of generality, we have simulated short unit cell semiconducting zigzag SWNTs with different combinations of the most probable defect structures. Results for vacancy defects likely being induced by 200 eV Ar$^{+}$ ions, separated by about 11 nm in a $(16,0)$ SWNT are shown in Fig.~\\ref{num_data}(a)-(c) with DV-DV, DV-SV and SV-SV pairs, respectively. The LDOS displays midgap states at the defect positions as expected as well as defect states in the valence band~\\cite{Buchs_Ar}. Most importantly, clear quantized states with a number of maxima increasing with energy are observed between the defects in the conduction band, emphasizing the ability of SVs and DVs to confine carriers. For the asymmetric configuration DV-SV, one can distinguish faint curved stripe patterns oriented from top left to bottom right, indicating a larger scattering strength for DVs compared to SVs. This is consistent with observations in transport experiments~\\cite{Gomez05nm}. On the other hand, the patterns in the valence band strongly depend on the defect types. Discrete states can be distinguished for the DV-DV case, with m-2 being mixed with defect states. For the DV-SV case, clear curved stripe patterns oriented from bottom left to top right indicate again a stronger scattering strength for DV. Also, broader states are observed, indicating that the scattering strength of DVs and SVs is weaker in the valence band compared to the conduction band.\n\\\\\n\\indent More insight on the energy dependent scattering strength for each defect pair configuration can be obtained by extracting the wavevector $k_\\mathrm{n}(E_\\mathrm{n})$ for each resonant state. This data set is plotted in Fig.~\\ref{num_data}(d) for the conduction and valence bands together with the $(16,0)$ dispersion relations calculated from the third-nearest neighbor TB model and from the ab-initio calculation for the pristine nanotube. A first observation is the excellent agreement between TB and ab-initio results, further validating the method used in Figs.~\\ref{exp_data_Ar}(a)-(b) and ~\\ref{exp_data_N}(a). The vertical dashed lines indicate the limiting $k_\\mathrm{n,\\infty}=\\frac{\\pi \\cdot n}{L}$ values corresponding to the closed system (infinite hard walls potential) with $L=11.1$ nm being the defect-defect distance. In the conduction band, we find that $k_\\mathrm{n}(E_\\mathrm{n})=\\frac{\\pi \\cdot n}{L_\\mathrm{eff}(n)} < k_\\mathrm{n,\\infty}$, indicating that the effective lengths $L_\\mathrm{eff}(n)$ of the QD are larger than $L$ ($i.e.$ the resonant states wavefunctions are characterized by penetrating evanescent modes inside the defect scattering potential), as expected for an open system. The shortest $L_\\mathrm{eff}(n)$ are obtained for the DV-DV configuration with 12.1 nm (m1), 13.1 nm (m2) and 12.9 nm (m3), which we attribute to wider scattering potential profiles for DVs compared to SVs. 
In the valence band, we find that $k_\\mathrm{n}(E_\\mathrm{n})=\\frac{\\pi \\cdot n}{L_\\mathrm{eff}(n)} > k_\\mathrm{n,\\infty}$, with $L_\\mathrm{eff}(n)$ values between 7.9 nm (DV-DV, m-1) and 9.66 nm (DV-SV, m-2). We attribute this pronounced QD shortening to wider scattering potential profiles of both DVs and SVs in the valence band, probably due to mixing with wide spread defect states in the valence band.\n\\\\\n\\indent Ab-initio calculations for different defect pairs combinations containing at least one N ad-atom, $i.e.$ N-DV, N-SV and N-N, are presented in Fig.~\\ref{num_data}(e)-(h) for a $(17,0)$ SWNT, along with details on the defects geometries. Remarkably, clear QD states are generated for all three configurations, underlining the potential of N ad-atoms to confine carriers in semiconducting SWNTs and thus to generate intrananotube QDs. \n\\\\\n\\indent In order to demonstrate the scattering strengths of the different defects, we calculated the energy dependent conductance in addition to the LDOS for the different combinations of the QD defining scattering defects on the $(16,0)$ and $(17,0)$ SWNTs, see supplementary information. Generally we can observe strong conductance modulation of the order of 30-40\\% with regard to the pristine CNT for all three tested defects (double vacancies DV, single vacancies SV and chemisorbed C-N) with the DVs having the largest scattering strength in the CB and VB. \n\\\\\n\\indent Note that the choice of the zigzag SWNT chiralities in the two different ab-initio scenarios is motivated by the different effective masses of both chiralities ($m^{*}_{(17,0)}>m^{*}_{(16,0)}$) which is typical for chirality families $(3n-1,0)$ and $(3n-2,0)$~\\cite{ZZ_families}. Taking advantage of recent reports on SWNT chirality control~\\cite{chirality_control_EMPA,chirality_control_chinese,chirality_chemistry}, this property could be used in practice to design QDs with different level spacings for the same QD length. From an application point of view, however, QDs generated by DVs will have far superior stability at room temperature due to their high migration barrier above 5 eV ($\\sim$~1 eV for single vacancy)~\\cite{Kra06vm}. This value drops down by at least 2 eV for N ad-atoms depending on their chemisorption configuration~\\cite{Nitrogen_prb_07,Yma05nitr}.\n\\\\\n\\indent Our ab-initio simulations do not take into account any substrate effect. In the experimental case, the carriers can decay through the substrate, thus limiting their lifetime. This leads to state broadening, measured between about 60 meV up to 120 meV in QD I and II, while the quantized states widths in ab-initio simulations vary between about 5 meV and 45 meV. This suggests that a better contrast of the experimental quantized states, especially in the valence band, could be achieved by lowering the nanotubes-substrate interaction through $e.g.$ the insertion of atomically thin insulating NaCl films~\\cite{Ruffieux_Nature_2016}. This would allow to gain more insight on the electronic structure of the QDs as well as in the associated scattering physics at the confining defects~\\cite{Buchs_PRL}. 
\n\n\\section{Conclusions and outlook}\nIn summary, using low-temperature STM/STS measurements supported by an analytical model and ab-initio simulations, we have demonstrated that intrananotube quantum dots with confined electron and hole states characterized by energy level spacings well above thermal broadening at room temperature can be generated in semiconducting SWNTs by structural defects such as vacancies and di-vacancies, as well as nitrogen ad-atoms. These results, combined with recent progresses in type and spatial control in the formation of defects~\\cite{Robertson_2012,Yoon_2016,Laser_writing_2017} as well as chirality control~\\cite{tunable_QD_defects}, hold a high potential for applications in the design of SWNT based quantum devices. These include $e.g.$ electrically driven single-photon emitters operating at room temperature and telecom wavelength. In this context, the observation of quantum confinement effects in the emitted light of cut, sub-10 nm, semiconducting SWNTs~\\cite{Dai_2008} shall be seen as an additional motivation for investigating the optical properties of our \"QD with leads\" building-blocks. These would include $e.g.$ studying optical transitions selection rules for different types and configurations of defect pairs~\\cite{sel_rules_2006} associated with experimental studies such as photoluminescence~\\cite{Lefebvre06} combined to $g^{(2)}$ correlation measurements~\\cite{Hofmann_2013} in suspended SWNT devices as well as photocurrent imaging~\\cite{Buchs_Nat_comm} and spectroscopy~\\cite{Gabor_2009}.\n\n\\section*{Acknowledgements}\nThe authors thank Ethan Minot, Lee Aspitarte, Jhon Gonzalez, Andres Ayuela, Omjoti Dutta and Arkady Krasheninnikov for fruitful discussions.\nThe work of DB is supported by Spanish Ministerio de Econom\\'ia y Competitividad (MINECO) through the project FIS2014-55987-P and by the (LTC) QuantumChemPhys. LM acknowledges support from the BMBF-project WireControl (FKZ16ES0294) and computing time for the supercomputers JUROPA and JURECA at the J\\\"ulich Supercomputer Centre (JSC).\n\n\n\\clearpage\n\n\\section*{References}\n\n\n", "answers": ["Low temperature scanning tunneling microscopy and spectroscopy (STM/STS)."], "length": 4297, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "5b978c8d5792b07ad99da0fc2639c6046051e9c29825ad25"} {"input": "How many people attend the 233rd ACS national meeting?", "context": "The major actions taken by the board of directors and council during the national meeting in Chicago were reported in C&EN, April 30 (page 32).\nThe Society Committee on Budget & Finance met on Saturday, March 24, to review the society's 2006 financial performance. The society ended 2006 with a net contribution from operations of $12.2 million, on revenues of $424.0 million and expenses of $411.8 million. This was $7.8 million favorable to the approved budget.\nAfter including the results of the Member Insurance Program and new ventures, the society's overall net contribution for 2006 was $11.5 million, which was $7.4 million favorable to the approved budget. The favorable variance was primarily attributable to higher than budgeted electronic services revenue and investment income, as well as expense savings from lower than budgeted health care costs and reduced IT spending. 
In addition, the society ended the year in compliance with the board-established financial guidelines.\nThe Society Committee on Education (SOCED) received an update from President Catherine Hunt on the thematic programming featured in Chicago focusing on the sustainability of energy, food, and water. President-Elect Bruce Bursten solicited input from the committee pertaining to the central role of education in his agenda. SOCED received a presentation from the Membership Affairs Committee on its white paper on membership requirements.\nCommittee members strongly support the proposal to include undergraduates as members of the society, but they requested that financial arrangements be clearly spelled out in the petition to ensure that the highly successful Student Affiliates program remains intact. The committee discussed the Education Division programs that were reviewed in 2006 and those that will be reviewed in 2007, under the auspices of the Program Review Advisory Group. SOCED received an update from the Committee on Professional Training regarding the draft ACS guidelines for approval of bachelor's degree programs in chemistry.\nCommittee members discussed the report prepared by the Globalization Task Force, focusing on those sections relevant to education. The committee suggested initiatives related to the new ACS strategic plan, including a potential program that would engage retired chemists in the K-12 classroom. SOCED created a task force to consider the role of online, or \"virtual,\" simulations in the chemistry laboratory, recognizing the value of online/virtual experiences as a supplement to, but not a replacement for, hands-on laboratory experiments.\nThe committee ratified two interim actions taken since the Dec. 3, 2006, meeting: to remove the financial restriction of the ACS Petroleum Research Fund (ACS PRF) Supplement for Underrepresented Minority Research Programs (SUMR) and to contact nominators whose nominations for the Volunteer Service Award had expired and to invite them to reactivate their nomination packet for the 2008 Volunteer Service Award.\nActing under delegated authority, the committee voted to accept the recommendations of the ACS Petroleum Research Fund Advisory Board (February 2007 meeting) for funding grants totaling $5.2 million; voted to recommend to the board a screened list of six nominees (due to a two-way tie for fifth place) for the 2008 Priestley Medal; voted to recommend to the board a screened list of five nominees for the 2008 Award for Volunteer Service to ACS; on the recommendation of the ACS Committee on Frasch Foundation Grants, voted to recommend to the board that it recommend to the trustee (US Trust) of the Frasch Foundation 12 grants for research in agricultural chemistry for the period of 2007–12; voted to recommend to the ACS Board of Directors that a new national award be established, the \"ACS Award for Affordable Green Chemistry,\" sponsored by Rohm and Haas; and voted to recommend to the ACS Board of Directors that a new endowment be established, the \"Affordable Green Chemistry Endowment Fund,\" to support the award.\nThe committee also reviewed the final report from the Special Board Task Force on the Review of the ACS National Awards Program, chaired by Ronald Breslow; established a Canvassing & Selection Subcommittee; and reviewed a list of external awards for which ACS may want to nominate candidates. 
The committee agreed to include the list of significant external awards in the awards locator database that is being developed.\nThe committee was updated on efforts to reconcile ACS's technical divisions' desires to leverage national meeting content using the Internet with our journal editors' concerns about prior publication issues. A conference call on this issue was scheduled for April 21, 2007.\nThe committee received a presentation on the recent actions of the ACS Board of Directors International Strategy Group (ISG). The group's charge is to develop recommendations for a short- and long-term international strategy for the society.\nThe committee was updated on the status of the activities of the Board Oversight Group on Leadership Development (BOG). Potential solutions for the unexpectedly high cost of facilitator training and transitioning from the current Leaders Conference format to the newly designed curriculum were presented to the committee.\nThe committee reviewed plans for conducting the 2007 Membership Satisfaction Survey. Preliminary results are expected in May or June with a final report to be delivered to the board at the 2007 Boston national meeting.\nThe committee received a briefing on the status of the MORE Project: Multidisciplinary Opportunities though Resource Enhancement. Twenty-eight proposals were received, and a decision on which proposals to support will be made in early May.\nThe chair led a discussion on draft 2007 committee goals, and committee members offered several suggestions related to successfully meeting them. One suggestion was to modify a communications goal to make it more completely reflect the duties of the committee outlined in the board regulations. The chair and committee members will examine the suggestion and revisit the question after the board retreat where board committee duties will be examined.\nACS President Hunt discussed her 2007-08 Presidential Task Force on Enhancing Science & Technology, which is charged with developing advocacy best practices that can enhance ACS's attainment of its public policy priorities. The task force is composed of a diverse set of ACS members as well as former U.S. Representative and chairman of the House Science Committee, Sherwood Boehlert, who will cochair the task force.\n• Results of the 2007 Public Policy Priorities Survey, which resulted in a four-tiered ranking of ACS's 2007 public policies. The ranking will help focus staff resources in conducting outreach and advocacy on behalf of ACS members.\n• The hiring of a communications consulting firm for 2007 to assist ACS in implementing the initial phase of the ACS Strategic Communications Plan.\n• Creation of a pilot ACS state government affairs advocacy program. 
Committee members agreed to the creation of a pilot, and staff will propose an initial list of states, policy focus, and a budget to carry out the program.\nThe committee met in executive session on March 23 and in open session jointly with the Joint Board-Council Committee on Publications and the Division of Chemical Information on March 26.\nThe committee heard from Chemical Abstracts Service (CAS) management on a range of issues including a report on continuing database building efforts, product enhancements, and CAS's centennial celebration plans.\nThe Committee on Chemical Safety (CCS) provides advice on the handling of chemicals and seeks to ensure safe facilities, designs, and operations by calling attention to potential hazards and stimulating education in safe practices.\nCCS has several publications (many downloadable), including the flagship publication, \"Safety in Academic Chemistry Labs\" (SACL). Work has recently started on the translation of SACL into Arabic. This is in addition to the online Spanish version of SACL. Also online are the \"Student Lab Code of Conduct for Secondary Science Programs\" and a security vulnerability analysis checklist. A K-12 restricted hazardous substances list is under development. The third edition of the \"Chemical Safety Manual for Small Businesses\" will be ready soon.\nThe committee's Task Force on Laboratory Environment, Health & Safety is working on a new edition of \"Laboratory Waste Management.\" Task force members also commented on the recent Environmental Protection Agency Proposed Rule for Hazardous Waste in Academic Laboratories. Our Video Safety Resources Task Force is developing video resources to be distributed over the Web.\nCCS has been involved in collaborations for the updating of publications like \"Prudent Practices in the Laboratory\" and \"ACS Guidelines for the Teaching of High School Chemistry.\" Along with other ACS units, CCS is exploring participating in the EPA's School Chemicals Cleanout Campaign.\nThe Committee on Chemists with Disabilities (CWD) met at the 233rd ACS national meeting, Chicago, on Monday, March 26. Judy Summers-Gates reported on the Joint Subcommittee on Diversity meeting. This subcommittee is made up of representatives of the five committees that support people in chemistry (as opposed to a category of the profession): CWD, Committee on Minority Affairs, Committee on Technician Affairs, Women Chemists Committee, and Younger Chemists Committee, and its goal is to develop ways to coordinate the efforts of the five groups.\nThe CWD Ambassador Program that was announced at CWD's 25th anniversary celebration at the Washington, D.C., meeting was discussed. Zelda Wasserman reported on the status of the letter from CWD to the ACS Board regarding captioning of ACS video materials. Janelle Kasper-Wolf, of ACS staff, discussed adding new questions to the ACS annual employment salary survey to obtain information for the committee.\nAt the Chicago national meeting, the Committee on Community Activities (CCA) partnered with the ACS Education Division and the Office of the President to host \"Chemistry In Action—It's Easy Being Green\" at the Peggy Notebaert Nature Museum on Saturday, March 24. More than 250 children participated in the hands-on activities focused on recycling. 
ACS President Hunt presented a Salutes to Excellence plaque to the museum for its dedication to community outreach.\nThe Chemists Celebrate Earth Day celebration occurred in 120 local sections with 138 coordinators leading the efforts within their communities. This represents an increase of more than 30% in local section and coordinator participation from 2006.\nCCA was featured in C&EN's April 16th issue on page 53. A shortcut to CCA's homepage was created: chemistry.org/committees/cca.html.\nDuring the Boston national meeting, CCA and the Office of Community Activities will celebrate National Chemistry Week's 20th Anniversary and its theme, \"The Many Faces of Chemistry.\" A special outreach event is being planned for Sunday, Aug. 19. Hands-on activities will focus on health and wellness.\nThe Committee on Corporation Associates (CCA) advises and influences ACS to ensure that its products and services are of value to industrial members and their companies. CCA vice chair, Roslyn White (SC Johnson), provided an overview of recent interactions between Corporation Associates and the U.K.-based Society of Chemical Industry (SCI).\nCCA gave feedback to a recommendations report from the ACS Board Committee on Professional & Member Relations Task Force on Globalization. Presentations were also received from the ACS Green Chemistry Institute and SCI.\nStaff reported on the Department of Industry Member Programs' activities since the San Francisco meeting. The report covered the Regional Industrial Innovation Awards, the World Congress on Industrial Biotechnology, the Analytical Pavilion sponsored by C&EN, and the ACS/Pharma Leaders Meeting.\nThe Awards/Finance & Grants Subcommittee reported that CCA received two funding proposals that total $7,500. Funding was provided to the following: The Committee on Economic & Professional Affairs at $3,000 for the Chicago symposium on \"Benefits Trends for the Chemical Workforce\" and the Office of Graduate Education and the Department of Career Development & Management at $4,500 for a workshop on \"Preparing for Life after Graduate School,\" to be held in conjunction with the 39th Central Regional Meeting.\nThe subcommittee also requested that ACS staff provide CCA with an official annual statement of Corporation Associates' financial reserves as of Jan. 1 of each year.\nThe Programs Subcommittee reported on planned programming activities in 2007 and beyond between CCA and SCI. The subcommittee gave an update on a Boston symposium cosponsored by Corporation Associates and the Medicinal Chemistry Division featuring past ACS Heroes of Chemistry from the pharmaceutical industry.\nBy request of the subcommittee, representatives from Chemical Abstracts Service gave an overview on AnaVist—a tool with potential applications for CCA's efforts to provide member companies with industry-relevant information and reports. The subcommittee also requested that CCA earmark approximately $20,000 of Corporation Associates funds in 2008 for its activities.\nThe Educational Outreach Subcommittee reported on its decision to collaborate with the Graduate Student Symposium Programming Committee of the Chemical Education Division on a graduate student industry roundtable program in Boston.\nThe subcommittee requested $5,000 in support of this effort. 
The subcommittee also discussed a request for corporate executive support of an American Association of Physics Teachers initiative to promote undergraduate research.\nThe Committee on Environmental Improvement (CEI) continues to be focused on the sustainability of the chemical enterprise. In Chicago, the committee introduced a multiyear effort to make ACS meetings more sustainable. This effort is designed to take advantage of the large size and diverse programs of the society to lead in the sustainability arena by \"walking the talk.\"\nThe committee held a dialogue with representatives of the U.S. Environmental Protection Agency who are trying to \"green\" federal conferences and work with the travel and tourism industry to change practices and shrink the environmental footprint of meetings. The committee also used advertising and student volunteers to engage individual meeting participants in a campaign to increase recycling by asking \"Are you sustainable?\" Moving forward, CEI looks forward to working closely with the Committee on Meetings & Expositions to advance this agenda.\nCEI was also pleased to participate in the meeting theme on sustainability through the ACS presidential programming. CEI cohosted the Monday presidential luncheon to discuss sustainability issues with the Committee on Science and is leading the follow-up to that luncheon, which will include recommendations on advancing sustainability in the three focal areas of the meeting—energy, food, and water.\nThe committee also continued its dialogue with the Committee on Corporation Associates about a collaborative workshop. This activity, tentatively slated for the New Orleans meeting, will seek additional insights from chemical and allied products companies about public policy barriers that limit adoption of more sustainable products and practices as well as policy incentives that would lead to increased sustainability in the chemical enterprise.\nAt its Chicago meeting, the committee welcomed the president of the Jordanian Chemical Society and the past-president of the Arab Union of Chemists.\nThe committee was briefed on Pittcon 2007, where, with financial support from the Society of Analytical Chemists of Pittsburgh, ACS coordinated participation of a scientific delegation from Adriatic nations.\nThe committee heard reports on the 2007 Frontiers of Chemical Science III: Research & Education in the Middle East meeting; the 2007 Transatlantic Frontiers of Chemistry meeting, which was jointly sponsored by ACS, the German Chemical Society, and the Royal Society of Chemistry; planned workshops to engage U.S. and Chinese early-career scientists in chemical biology, supramolecular, and new materials chemistry; and ACS Discovery Corps U.S./Brazil Research Collaboration Project in Biomass Conversion to Biofuels, Biomaterials & Chemicals.\nThe committee discussed Latin American engagement opportunities created through Puerto Rico's involvement in three key chemical science events there: the 2009 ACS Southeast Regional Meeting, the 2008 Federation of Latin American Chemical Associations (FLAQ) meeting, and the proposed IUPAC 2011 Congress & General Assembly.\nThe committee heard reports on letter-writing efforts by the ACS president to government officials in Libya and Mexico expressing concerns about challenges to the scientific freedom and human rights of scientists there.\nThe Committee on Minority Affairs (CMA) approved new vision, mission, and values statements at the Chicago national meeting. 
The mission of CMA is to increase the participation of minority chemical scientists and influence policy on behalf of minorities in ACS and the chemical enterprise.\nAn aggressive new strategic plan was approved by CMA to guide its activities over the next three years. By the end of 2009, CMA will increase the number of ACS Scholars that graduate to 100 per year, add 100 new minorities to leadership positions in ACS, engage in several collaborations, and increase the number of minority members of ACS by 5,000. CMA will focus initially on increasing minorities in ACS leadership. In working toward this goal, CMA began work on two new leadership-development programs for minority chemists.\nCMA continues to support the work of the Joint Subcommittee on Diversity (JSD) in developing programs, products, and services to ensure full participation of all members in ACS. In Chicago, JSD premiered a diversity booth at the meeting exposition hall and cosponsored symposia.\nThe Committee on Patents & Related Matters (CPRM) discussed proposed legislative and regulatory changes to the U.S. patent system as well as open-access legislation and the potential effects such matters might have on industry and academia as well as on ACS.\nCPRM also continued its work on several new educational tools to assist and inform members on patent issues and other intellectual property matters important to a successful career in the chemical enterprise. Many of these tools are now available on the committee's expanded website, membership.acs.org/C/CPRM/.\nAt the March 2007 meeting, the Committee on Professional Training (CPT) reviewed 42 new and additional information reports from ACS-approved chemistry programs. CPT held conferences with four schools seeking approval, discussed three updates and five site visit reports, and approved three new schools. The total number of ACS-approved chemistry programs is now 642.\nThe committee released the full draft of the ACS guidelines for review and comment. Copies of the draft were distributed to the department chairs at all ACS-approved schools, the chairs of all ACS committees, and the chairs of all ACS technical divisions.\nSeveral CPT members met with the ACS technical divisions during the Chicago meeting to present an overview of the draft and obtain feedback. The draft guidelines document is available on the CPT website, and the committee invites any comments to be sent to cpt@acs.org.\nIn other business, the committee continued development of the two workshops with minority-serving institutions that will be held in 2007. The committee reviewed the editorial policies for the 2007 edition of the ACS Directory of Graduate Research, which is using a new protocol for retrieving research publication titles in an effort to improve the accuracy of the directory.\nC&EN finished 2006 with an exceptionally strong editorial package. The first months of 2007 are proving to be equally successful in fulfilling the magazine's mission of keeping its readers informed. On the advertising side, revenues in 2006 increased for the second year in a row, and the early months of 2007 show continuing positive signs. The most significant editorial achievement was the successful launch of the redesign of the print edition of C&EN with the Oct. 16, 2006, issue.\nThe Subcommittee on Copyright has successfully updated the Copyright Module on the ACS Publications website. 
The subcommittee is looking into the possibility of conducting copyright programs at future ACS national and regional meetings.\nThe final monitoring reports for Chemistry of Materials, Journal of Agricultural & Food Chemistry, and Molecular Pharmaceutics were presented and accepted by the committee. Journal of Chemical Information & Modeling, Organic Letters, Accounts of Chemical Research, and the Journal of Chemical Theory & Computation will be monitored next.\n3. Examining the scientific basis of public policies related to the chemical sciences and making recommendations to the appropriate ACS units.\nIn the first of these areas, ComSci partnered with President Hunt and the Committee on Environmental Improvement in planning and hosting a sustainability luncheon that featured roundtable discussions centering on a key sustainability question. At the Boston national meeting, ComSci will deliver a full-day program on the subject of \"Partnerships in Innovation & Competitiveness.\"\nRegarding the second thrust, ComSci will present two programs in Boston: a box lunch that will feature two speakers taking opposing sides on the subject of \"Genetic Screening & Diagnostic Testing: Do You Really Want to Know?\" and a symposium titled \"Creating & Sustaining International Research Collaborations.\"\nIn support of the last thrust, ComSci is planning two events for 2008: \"Balancing Security & Openness\" will gather data to determine if the recent emphasis on security is hindering scientific progress and \"Transitioning Chemical Science to Commercially Successful Products.\"\nThe Women Chemists Committee (WCC) hosted more than 70 attendees at its open meeting recently in Chicago, where representatives from Iota Sigma Pi, Women in Science & Engineering, the Association of Women in Science, and the Chicago local section helped WCC celebrate the committee's 80th anniversary.\nThe Women in Industry Breakfast was also highly successful with a new format of speed networking. More than 100 participants had the opportunity to practice their elevator speeches and make several professional connections. A related workshop will be offered by WCC in Boston.\nIn Chicago, WCC sponsored two symposia, \"Women Achieving Success: The ACS as a Platform in Leadership Development\" in honor of Madeleine Joullié's 80th birthday and the ACS Award for Encouraging Women into Careers in the Chemical Sciences: Symposium in Honor of Bojan H. Jennings.\nMore than 225 ACS meeting attendees were present for the biannual WCC Luncheon and heard the keynote speaker Laura Kiessling, 2007 Francis P. Garvan-John Olin Medal Recipient. Twelve women presented their research at this meeting with funding by the WCC/Eli Lilly Travel Grant Award. WCC members also spent time educating expo attendees on programs offered by the ACS Office of Diversity Programs at its new booth.\nIn Chicago, the Younger Chemists Committee (YCC) welcomed its new committee members with an information session centered on YCC's charter as well as on its strategic plan: to make ACS relevant to younger chemists, to involve younger chemists in all levels of the society, and to integrate younger chemists into the profession.\nIn January, YCC again hosted a Leadership Development Workshop during the ACS Leaders Conference. There were more than 80 applications for the 15 awards, which covered travel and registration for the conference. YCC plans to again fund the travel awards and provide leadership training for young chemists in 2008. 
YCC also solicited applications and selected a new graduate student representative on the Graduate Education Advisory Board.\nDuring the Chicago meeting, YCC programs included \"Starting a Successful Research Program at a Predominantly Undergraduate Institution,\" \"Career Experiences at the Interface of Chemistry & Biology,\" and \"Chemistry Pedagogy 101.\"\nIn addition to these programs, YCC cosponsored five programs with various committees and divisions. YCC continues to reach out to ACS committees and divisions and has initiated liaisonships with 11 technical divisions to encourage technical programming that highlights the contributions of younger chemists. Looking forward to Boston, YCC is planning symposia including \"The Many Faces of Chemistry: International Opportunities for Chemists\"; \"Being a Responsible Chemist: Ethics, Politics & Policy\"; and \"Changing Landscapes of the Bio-Pharma Industry.\"\nThe Committee on Committees (ConC) conducted its annual training session for new national committee chairs at the ACS Leaders Conference in January 2007. ConC's interactive session for committee chairs in Chicago served as an opportune follow-on and a forum for informative interchange among seasoned and new chairs.\nConC began developing its recommendations for the 2008 committee chair appointments for consideration by the president-elect and chair of the board. ConC continues to focus efforts to identify members with the skills and expertise specified by the committee chairs using the councilor preference form.\nThe form will be sent to councilors in May. ConC also seeks the names of noncouncilor members for consideration for service on council-related committees, especially those with no prior appointment.\nAs part of ongoing activities with the joint CPC-Board Governance Review Task Force, ConC has collected data on committee liaisons to other committees. This information will be distributed to committee chairs. The number of liaisons indicates that unofficial but strong communication channels exist within the ACS committee structure.\nOn Sunday evening, the Committee on Nominations & Elections (N&E) sponsored its fifth successful Town Hall Meeting for President-Elect Nominees. An estimated 200 people attended this session. This forum facilitated communication among the 2008 president-elect nominees, councilors, and other members. N&E will hold another Town Hall Meeting featuring the candidates for director-at-large at the fall meeting in Boston.\nNow that voting over the Internet has become an accepted procedure for ACS national elections, the ACS technical divisions and local sections have expressed strong interest in using this method for their elections. N&E has developed protocols for elections for local sections and divisions. This document will be forwarded to the appropriate committees for their review and distribution.\nN&E is responsible for reviewing annually the distribution of member populations within the six electoral districts to ensure that the districts have equitable representation. According to bylaw V, section 4(a), the member population of each electoral district must be within 10% of the result of dividing by six the number of members whose addresses lie within these districts. The committee is happy to report that the six electoral districts are in compliance.\nThe committee has developed a petition on election procedures for president-elect and district director. 
The proposed election mechanism provides for a preferential (ranked) ballot and an \"instant runoff.\" N&E continues to address the areas of campaigning and the timing of our national election process. Between the Chicago and Boston meetings, the committee plans to sponsor online forums for input from councilors and other interested members on these issues.\nIn response to member concerns regarding the collection of signatures for petition candidates, N&E reviewed the society's bylaws. The bylaws state that an endorsement is required, but do not stipulate the method of endorsement. N&E has determined that original or electronic signatures are acceptable and will establish appropriate procedures for receipt of electronic signatures.\nThe Committee on Constitution & Bylaws (C&B), acting for the council, issued new certified bylaws to the Corning Section, the Portland Section, the Division of Colloid & Surface Chemistry, and the Division of Chemical Education. The committee reviewed new proposed amendments for the Division of Medicinal Chemistry, the Columbus Section, the Detroit Section, and the Southern Arizona Section.\nThree petitions were presented to council for action at this meeting. Regarding the \"Petition on Election Procedures 2006,\" a motion to separate the petition was approved, and the petition was divided. Provisions affecting bylaw V, sec. 2d, bylaw V, sec. 3c, and bylaw V, sec. 4f, which deal with election procedures and the timing of run-off elections, were approved by council and will become effective following confirmation by the board of directors.\nThe second part of the petition regarding bylaw V, sec. 2c, and bylaw V, sec. 3b, which deal with signature requirements for petition candidates for president-elect and director-at-large, respectively, was recommitted to the Committee on Nominations & Elections, which has primary substantive responsibility for the petition.\nThe Committee on Nominations & Elections was asked to reconsider the signature requirements, procedures for acceptance of electronic signatures, and recommendations from the Governance Review Task Force on election procedures.\nThe second petition presented to council for action was the \"Petition on Rules for Nominating Members of N&E for National Offices.\" This petition was not approved by council. The third petition, the \"Petition on Multiyear Dues,\" was amended by incidental motion on the council floor, calling for the petition to become effective when technical components are instituted to track payments, but no later than Jan. 1, 2010. Council approved the incidental motion and then approved the petition.\nThe committee reviewed one petition for consideration, the \"Petition on Local Section Affiliations,\" which will be submitted to council for action at the fall 2007 meeting in Boston.\nThe committee met with representatives of the Committee on Membership Affairs and the Governance Review Task Force to continue discussions on proposals currently being formulated on membership requirements and student membership. In addition, the committee discussed election issues of concern to the Southern California Section.\nWe hope you enjoyed the presidential and division thematic program, \"Sustainability of Energy, Food & Water,\" in Chicago.
A small, dedicated group of volunteers and staff labored tirelessly to create and coordinate this programming; to them the Committee on Divisional Activities (DAC) offers sincere thanks.\nDAC has committed to transfer the process of choosing and organizing future national meeting themes to a body that represents all divisions. We made substantial progress in Chicago, where division, secretariat, and committee representatives convened to discuss national meeting program concepts. They proposed themes for the 2008 Philadelphia national meeting as well as a framework for a future national programming group.\nDivisions have successfully served their members fortunate enough to attend national meetings. To maximize benefits to division members, DAC encourages divisions to consider extending the reach of the content they deliver at national meetings through Internet-based distribution channels and will support worthy efforts in this direction via Innovative Program Grants.\nThe committee voted in Chicago to propose modifications to the division funding formula that will more greatly reward interdisciplinary programming. The new formula will also be simpler and more transparent to divisions. DAC will present the revised plan to council for action in Boston.\nThe Committee on Economic & Professional Affairs (CEPA), working with ACS staff in the Departments of Career Management & Development and Member Research & Technology, continues to update and implement its strategic plan to address the career needs of society members.\nSpecifically, the committee reviewed and revised existing workshops and materials to help ACS members get jobs. CEPA is developing new programs to address the needs of mid- and late-career chemists to ensure their continued competitiveness in the workplace and to ease their career transitions. New initiatives in these areas include the development of workshops, online training, surveys to assess member needs, suggested changes to public policies, and updates to professional and ethical workplace guidelines. As a result of discussions at the Public Policy Roundtable, which was held in San Francisco, a background paper is being developed on trends in health care issues.\nThe newly revised \"Chemical Professional's Code of Conduct\" was presented to council, which approved it. The Standards & Ethics Subcommittee is preparing a revision of the \"Academic Professional Guidelines\" to be presented to council for consideration in Boston.\nCEPA reviewed the Globalization Task Force Report. As our science diffuses around the globe, we want to make sure that our members are aware of the economic and professional challenges they will face and that they have the tools they need to succeed. Therefore, CEPA made a commitment to work with other committees, divisions, and ACS staff to develop programs and policies that position our membership to compete in the global workforce.\nCEPA heard and discussed a presentation on the proposal from the Membership Affairs Committee on broadening the requirements of membership. CEPA supports the spirit of this proposal and encourages further detailed studies to assess financial impacts on local sections and student affiliates chapters.\nThe Local Section Activities Committee (LSAC) recognized local sections celebrating significant anniversaries in 2007, including Savannah River (50 years), Northeast Tennessee (75 years), and the St. 
Louis and Syracuse local sections (both celebrating 100 years).\nLSAC hosted the local section leaders track in conjunction with the ACS Leaders Conference in Baltimore on Jan. 26–28. A total of 135 delegates from 124 local sections participated in the weekend leadership conference.\nLSAC also hosted a Local Section Summit on March 2–4 in Arlington, Va. The summit focused on practical operational issues that will support local sections' long-term success. Specific areas that were discussed include the development of a multiyear plan to expand or develop programming for local sections, opportunities to encourage innovation and experimentation within and among local sections, and capitalizing on existing opportunities to facilitate partnerships between local sections and other ACS groups.\nFollowing the San Francisco national meeting, LSAC launched a local section Science Café minigrant program. Fifty-five local sections accepted LSAC's invitation to host Science Cafés in 2007.\nA DVD entitled \"ACS Close to Home: Local Sections Connecting Chemistry & the Community\" was released earlier this year. The video provides a seven-minute overview of the many outreach and educational programs sponsored by local sections and the critical role they play in positively influencing the public's perception of chemistry and its practitioners. Copies of the DVD were sent to all local section officers.\nThe Committee on Meetings & Expositions (M&E) reported that the 233rd ACS national meeting hosted 14,520 attendees. This included 7,152 chemical scientists, 5,059 students, 1,283 exhibitors, 119 precollege teachers, 573 exposition visitors, and 453 guests. The exposition had 424 booths with 268 companies.\nThe 10 regional meetings held in 2006 set a new standard for excellence with attendance exceeding 8,000, a 30% increase in average meeting attendance compared to the 2005 meetings. A total of 4,717 abstracts were submitted. A region summit was held in February at which the final report of the ReACT study group was reviewed.\nThe practice of tracking the number of presenter no-shows continues. M&E will collaborate with the Committee on Divisional Activities to study options for addressing this problem. Suggestions will be presented at the Boston meeting for implementation in 2008.\nIt is the intent of M&E to pursue the goal of making our meetings \"greener.\" We will communicate with staff and governance units to identify actions for both the short and long term.\nThe American Institute of Chemical Engineers (AIChE) and ACS will hold their 2008 spring meetings simultaneously in New Orleans. An ad hoc working group consisting of members from M&E, DAC, and AIChE is actively exploring joint programming opportunities for this meeting.\nThe Committee on Membership Affairs (MAC) met in executive session on Saturday and Sunday in Chicago and reported that the ACS closed 2006 with 160,491 members, our highest year-end membership count since 2002. Of the 17,857 applications processed in 2006, more than 1,000 came from the Member-Get-a-Member campaign in which many councilors participated. The society's retention rate in 2006 remained strong at 92%. The committee also reported that recruitment for the first two months of 2007 netted 2,844 new applications—729 more than for the same time period last year.\nMAC continues to work with deliberate speed on the proposed new bylaw language for members, student members, and society affiliates (the three ways to connect to the society).
The committee received input from the Governance Review Task Force and its action teams, the Council Policy Committee, the board of directors, the Committee on Constitution & Bylaws, and several other committees between the San Francisco and Chicago meetings. These interactions have resulted in the current bylaw change recommendations.\nIn Chicago, representatives from MAC attended several committee meetings and all seven councilor caucuses to summarize the current proposal for membership changes, answer questions, and seek input. In addition, all committee chairs were invited to have their respective committees review these bylaw changes and respond to MAC—if possible—before council met on Wednesday. MAC received 11 responses: eight supported the proposed changes as is, and three supported the proposed language with specified changes or considerations.\nThe comprehensive petition will likely represent the most significant and voluminous change in the ACS bylaws that has occurred in decades, and MAC is proud to be among the leaders in its development and in efforts to get it right the first time. Hundreds of individuals have contributed to this major effort, since MAC began such discussions at the spring 2004 national meeting.\nThe Committee on Ethics met in Chicago and discussed the possibility of organizing and scheduling a committee retreat in the near future to enable the committee to move from the current stage of exploring the needs and interests of ACS members to setting priorities for the next few years.\nThe Project SEED program offers summer research opportunities for high school students from economically disadvantaged families. Since its inception in 1968, the program has had a significant impact on the lives of more than 8,400 students. At the selection meeting in March, the committee approved research projects for 340 SEED I students and 98 SEED II students for this summer in more than 100 institutions.\nThe 2006 annual assessment surveys from 300 students indicate that 78% of the Project SEED participants are planning to major in a chemistry-related science, and 66% aspire to continue to graduate education. This program is made possible by contributions from industry, academia, local sections, ACS friends and members, the ACS Petroleum Research Fund, and the Project SEED Endowment.\nThe committee formally submitted a request to ConC to amend the Project SEED acronym and the committee duties described in the Supplementary Information of the \"ACS Charter, Constitution, Bylaws & Regulations.\"\nIn Chicago, the committee's agenda focused on the ACS Strategic Plan and how Project SEED fits into it, the Program Review Advisory Group (PRAG) review of the Project SEED program, the committee's review of an online application form, and planning of the 40th anniversary celebration to be held at the Philadelphia meeting in the fall of 2008. The committee selected a task force to review the criteria for selection of the Project SEED ChemLuminary Award.\n3. Making ACS relevant to technicians.\nLast year, CTA, along with the Division of Chemical Technicians, the Committee on Economic & Professional Affairs, and ChemTechLinks, started the Equipping the 2015 Chemical Technology Workforce initiative. 
This year, the initiative awarded six $500 minigrants to activities and programs that support the educational and professional development of chemical technicians.\nWe are pleased to announce that the winners of the minigrants are the ACS Division of Environmental Chemistry; the Chemical Technician Program Chair for the 39th ACS Central Regional Meeting in Covington, Ky.; Delta College, University Center, Mich.; Grand Rapids Community College in Michigan; Mount San Antonio College, Walnut, Calif.; and Southwestern College in Chula Vista, Calif.\nThe winners are collaborating with industry, academia, and ACS local sections on such activities as chemical technology career fairs for high school students, discussion panels on employability skills for technicians, and technical programming at regional and national meetings on the vital role technicians have in the chemical enterprise.\nBecause of the enthusiastic response to the minigrants, Equipping the 2015 Chemical Technology Workforce will be supporting another round of minigrants to be distributed in the fall. Details will be available on the website. For more information, go to www.ChemTechLinks.org and click on \"Equipping the 2015 Chemical Technology Workforce.\"\nCTA has also joined with the Joint Subcommittee on Diversity, formerly known as the Collaboration of Committees Working Group. Because this group is focused on increasing diversity in ACS and the chemical enterprise, we believe that this is an opportunity to raise awareness of the value of technicians. CTA looks forward to collaborating on the promotion of traditionally underrepresented chemical professionals.\nIn 2007, CTA will be placing renewed focus on distribution of the ACS Chemical Technology Student Recognition Award. The award recognizes academic excellence in students preparing for careers as chemical technicians. For more information on the award, please visit the CTA website at chemistry.org/committees/cta.", "answers": ["There are 14,520 attendees, including 7,152 chemical scientists, 5,059 students, 1,283 exhibitors, 119 precollege teachers, 573 exposition visitors, and 453 guests."], "length": 6444, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "ddfed65e5495d0257b22a9537993c3453d8590cc94b8e0d6"} {"input": "Is the ISR necessary for transgene reactivation?", "context": "Current address: Division of Brain Sciences, Department of Medicine, Imperial College London, London, United Kingdom.\nIn a variety of species, reduced food intake, and in particular protein or amino acid (AA) restriction, extends lifespan and healthspan. However, the underlying epigenetic and/or transcriptional mechanisms are largely unknown, and dissection of specific pathways in cultured cells may contribute to filling this gap. We have previously shown that, in mammalian cells, deprivation of essential AAs (methionine/cysteine or tyrosine) leads to the transcriptional reactivation of integrated silenced transgenes, including plasmid and retroviral vectors and latent HIV-1 provirus, by a process involving epigenetic chromatin remodeling and histone acetylation. Here we show that the deprivation of methionine/cysteine also leads to the transcriptional upregulation of endogenous retroviruses, suggesting that essential AA starvation affects the expression not only of exogenous non-native DNA sequences, but also of endogenous anciently-integrated and silenced parasitic elements of the genome.
Moreover, we show that the transgene reactivation response is highly conserved in different mammalian cell types, and it is reproducible with deprivation of most essential AAs. The General Control Non-derepressible 2 (GCN2) kinase and the downstream integrated stress response represent the best candidates mediating this process; however, by pharmacological approaches, RNA interference and genomic editing, we demonstrate that they are not implicated. Instead, the response requires MEK/ERK and/or JNK activity and is reproduced by ribosomal inhibitors, suggesting that it is triggered by a novel nutrient-sensing and signaling pathway, initiated by translational block at the ribosome, and independent of mTOR and GCN2. Overall, these findings point to a general transcriptional response to essential AA deprivation, which affects the expression of non-native genomic sequences, with relevant implications for the epigenetic/transcriptional effects of AA restriction in health and disease.\nCopyright: © 2018 De Vito et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.\nData Availability: All relevant data are within the paper and its Supporting Information files. RNAseq data are available in the ArrayExpress database under the accession number E-MTAB-6452.\nFunding: This study was funded by the Ajinomoto Innovation Alliance Program, (AIAP; https://www.ajinomoto.com/en/rd/AIAP/index.html#aiap) (to M.V.S and D.G), which is a joint research initiative of Ajinomoto Co., Inc., Japan. One of the authors [M.B.] is an employee of Ajinomoto Co., and his specific roles are articulated in the ‘author contributions’ section. The commercial funder provided support in the form of salary for author [M.B.] and some of the necessary research materials (medium for cell culture), but did not have any additional role in the study design, data collection and analysis, or preparation of the manuscript, and the authors had unrestricted access to the data. Due to a confidentiality agreement, the commercial funder participated only in the decision to publish the data obtained during the study, without any restriction.\nCompeting interests: This study was funded by Ajinomoto Co., Inc., Japan and one of the authors [M.B.] is an employee of this commercial funder. No other employment or consultancy relationships exist with the commercial funder, and no patents, products in development, or marketed products result from this study. The authors declare that no competing interests exist and that the commercial affiliation of one of the authors does not alter the adherence of authors to all PLOS ONE policies on sharing data and materials.\nIn animals, excessive, insufficient, or imbalanced nutrient availability is known to strongly impact on phenotype and health, both short and long-term, and across generations [1, 2]. In particular, studies in yeast, animal models and humans have shown that reduced food intake, reducing either overall calories, or only sugars, proteins, or even single amino acids (AA), such as Methionine (Met), may extend lifespan and healthspan, and reduce the risk of cancer and other age-related diseases [3–9]. 
In addition, fasting or specific AA deprivation have shown potential therapeutic applications, owing to their ability to directly reduce the growth of some tumor types [10, 11], sensitize cancer cells to chemo- or immunotherapy [12, 13], and allow efficient hematopoietic stem cell engraftment . However, little is known about the specific processes and molecular mechanisms mediating the roles of nutrient restriction in human health and longevity.\nA properly balanced diet in metazoans contains optimal amounts of a subset of AA, which cannot be synthetized de novo and are therefore named essential amino acids (EAAs). In humans these include Met, Histidine (His), Isoleucine (Ile), Leucine (Leu), Lysine (Lys), Phenylalanine (Phe), Threonine (Thr), Tryptophan (Trp), and Valine (Val), while a few others are considered as semi-essential, such as Glutamine (Gln) and Tyrosine (Tyr) [15, 16]. Consistently, EAA deprivation triggers a cell-autonomous adaptive response, characterized by extensive metabolic and gene expression modifications, implementing biosynthetic, catabolic, and plasma membrane transport processes, aimed at reconstituting the full AA complement [17, 18]. The best known and conserved pathways responding to AA deprivation are triggered by mechanistic Target of Rapamycin Complex 1 (mTORC1) and General amino acid Control Non-derepressible 2 (GCN2) protein kinases [15, 19, 20]. Activation of mTORC1 requires in particular the presence of Gln, Arg and Leu, but also Met , which activate the kinase through sensors mainly acting upstream of Rag GTPases at lysosomal membranes . In turn, mTORC1 promotes cell growth, proliferation and anabolism upon activation, and translational attenuation and autophagy upon inhibition [19, 20].\nBy contrast, GCN2 is activated by deprivation of any individual EAA, by means of its histidyl-tRNA synthetase-related domain, which binds uncharged tRNAs accumulating during AA limitation [23, 24]. Upon activation, GCN2 phosphorylates and inhibits its only known downstream target, namely the eukaryotic Initiation Factor 2 α (eIF2α), thereby initiating the Integrated Stress Response (ISR). This leads to attenuation of general translation, and induction of a transcriptional/translational program, aimed at increasing stress resistance and restoring cell homeostasis, by upregulating a specific subset of genes, including Activating Transcription Factor 4 (ATF4) and C/EBP-Homologous Protein (CHOP) [25–27]. Thus, inhibition of mTORC1 and activation of GCN2 by AA restriction cooperate to attenuate general translation at the initiation step, increase catabolism and turnover, and enhance stress resistance to promote adaptation . However, how these processes eventually induce protective mechanisms against the alterations associated with aging, which include pervasive epigenetic and transcriptional changes [28, 29], remains largely unknown.\nWe previously reported the unexpected observation that prolonged deprivation of either Tyr, or of both Methionine and Cysteine (Met/Cys), triggers the selective and reversible reactivation of exogenous transcriptional units, including plasmids, retroviral vectors and proviruses, integrated into the genome and transcriptionally repressed by defensive mechanisms against non-native DNA sequences [30, 31]. 
This phenomenon was observed both in HeLa epithelial and ACH-2 lymphocytic human cells, and was independent of the transgene or provirus (Ocular Albinism type 1, OA1; Green Fluorescent Protein, GFP; Lysosomal-Associated Membrane Protein 1, LAMP1; Human Immunodeficiency Virus-1, HIV-1), or of the exogenous promoter driving their transcription, either viral (cytomegalovirus, CMV; Long Terminal Repeat, LTR) or human (Phospho-Glycerate Kinase 1, PGK1; Elongation Factor-1α, EF-1α) . Furthermore, this transgene reactivation response was not reproduced by serum starvation, activation of p38, or pharmacological inhibitors of mTOR (PP242 or rapamycin), sirtuins and DNA methylation. By contrast, it was induced by pan histone deacetylase (HDAC) inhibitors, and by selective inhibitors of class II HDACs . Consistently, we found that the mechanism responsible involves epigenetic modifications at the transgene promoter, including reduced nucleosome occupancy and increased histone acetylation, and is mediated in part by reduced expression of a class II HDAC, namely HDAC4 .\nThese findings indicate that AA deprivation induces a specific epigenetic and transcriptional response, affecting the expression of newly-integrated exogenous transgenes and proviruses, and suggesting that endogenous sequences sharing similar structural and functional features may represent a transcriptional target as well [30, 31]. In particular, transposable elements, such as LTR-retrotransposons (or endogenous retroviruses, ERVs), are genomic “parasites” anciently-integrated into the genome, and silenced by epigenetic mechanisms of mammalian cells against the spreading of mobile elements, eventually becoming \"endogenized\" during evolution [32, 33]. This raises the question of whether their expression is also sensitive to AA restriction. In addition, it remains unclear whether or not the transgene reactivation response is related to specific AA deprivations, and most importantly which is the AA sensing/signaling pathway involved, in particular whether the GCN2 kinase is implicated. Thus, here we used the reactivation of silenced transgenes in cultured cells, as a model to investigate a novel molecular pathway induced by imbalanced EAA starvation, implicated in the epigenetic/transcriptional regulation of exogenous non-native DNA sequences and possibly of other endogenous anciently-integrated genomic elements.\nHeLa human epithelial carcinoma, HepG2 human hepatocellular carcinoma and C2C12 mouse skeletal muscle cells were maintained in DMEM containing glutaMAX (Invitrogen) and supplemented with 10% FBS (Sigma), 100 U/ml penicillin G (Invitrogen), 100 mg/ml streptomycin (Invitrogen), at 37°C in a 5% CO2 humidified atmosphere. Cell lines carrying integrated and partially silenced transgenes were also maintained in 600–1000 μg/ml G418.\nThe C2C12 cell line was provided by ATCC. HeLa and HepG2 cells were obtained by Drs. F. Blasi and G. Tonon at San Raffaele Scientific Institute, Milan, Italy, respectively, and were authenticated by Short Tandem Repeat (STR) profiling, using the Cell ID System kit (Promega), according to the manufacturer’s instructions. Briefly, STR-based multiplex PCR was carried out in a final volume of 25 μL/reaction, including 5 μL Cell ID Enzyme Mix 5X, 2.5 μL Cell ID Primer Mix 10X and 3 ng of template DNA. The thermal cycling conditions were: 1 cycle at 96°C for 2 min, followed by 32 cycles at 94°C for 30 sec, 62°C for 90 sec, and 72°C for 90 sec, and 1 cycle at 60°C for 45 sec. 
The following STR loci were amplified: AMEL, CSF1PO, D13S317, D16S539, D21S11, D5S818, D7S820, TH01, TPOX, vWA. Fragment length analysis of STR-PCR products was performed by Eurofins Genomics, using standard procedures of capillary electrophoresis on the Applied Biosystems 3130 XL sequencing machine, and assessment of the STR profile was performed at the online STR matching analysis service provided at http://www.dsmz.de/fp/cgi-bin/str.html.\nStable cell clones, expressing myc-tagged human OA1 (GPR143) or GFP transcripts, were generated using pcDNA3.1/OA1myc-His or pcDNA3.1/EGFP vectors. Briefly, HeLa, HepG2 and C2C12 cells were transfected using FuGENE 6 (Roche) and selected with 800, 1000, and 650 μg/ml of G418 (Sigma), respectively, which was maintained thereafter to avoid loss of plasmid integration. G418-resistant clones were isolated and analyzed for protein expression by epifluorescence and/or immunoblotting.\nFull DMEM-based medium, carrying the entire AA complement, and media deprived of Met/Cys (both AAs), Met (only), Cys (only), Alanine (Ala), Thr, Gln, Val, Leu, Tyr, Trp, Lys and His were prepared using the Nutrition free DMEM (cat.#09077–05, from Nacalai Tesque, Inc., Kyoto, Japan), by adding Glucose, NaHCO3, and either all 20 AAs (for full medium) or 18–19 AAs only (for deprivations of two-one AAs). Single AAs, Glucose, and NaHCO3 were from Sigma. Further details and amounts utilized are indicated in S1 Table. All media were supplemented with 10% dialyzed FBS (Invitrogen), 100 U/ml penicillin G (Invitrogen), 100 mg/ml streptomycin (Invitrogen), and G418 as required. HBSS was from Invitrogen. Cells were seeded at 10–30% of confluency; cells to be starved for 48 h were plated 2–3 times more confluent compared to the control. The following day, cells were washed and cultured in the appropriate medium, with or without EAA, for 24–48 h.\nL-Histidinol (HisOH), PP242, Integrated Stress Response Inhibitor (ISRIB), SP600125, Cycloheximide (CHX) were from Sigma; Salubrinal was from Tocris Bioscience; U0126 was from Promega. Drugs were used at the following final concentrations: HisOH at 4–16 mM; PP242 at 1–3 μM; ISRIB at 100 nM; SP600125 at 20 μM in HepG2 cells and 50 μM in HeLa cells; Cycloheximide (CHX) at 50 μg/ml in HepG2 cells and 100 μg/ml in HeLa cells; Salubrinal at 75 μM; U0126 at 50 μM. Vehicle was used as mock control. Treatments with drugs to be tested for their ability to inhibit transgene reactivation (ISRIB, SP600125 and U0126) were initiated 1 h before the subsequent addition of L-Histidinol (ISRIB) or the subsequent depletion of Met/Cys (SP600125 and U0126).\nTotal RNA was purified using the RNeasy Mini kit (Qiagen), according to manufacturer's instructions. RNA concentration was determined by Nanodrop 8000 Spectrophotometer (Thermo Scientific). Equal amounts (1 μg) of RNA from HeLa, HepG2 and C2C12 cells were reverse transcribed using the SuperScript First-Strand Synthesis System for RT-PCR (Invitrogen) using oligo-dT as primers, and diluted to 5 ng/μl. The cDNA (2 μl) was amplified by real-time PCR using SYBR green Master Mix on a Light Cycler 480 (Roche), according to manufacturer's instructions. The thermal cycling conditions were: 1 cycle at 95°C for 5 min, followed by 40–45 cycles at 95° for 20 sec, 56° for 20 sec and 72° for 20 sec. The sequences, efficiencies and annealing temperatures of the primers are provided in S2 Table. Data were analyzed with Microsoft Excel using the relative expression ratio $E_{\mathrm{target}}^{\Delta Ct_{\mathrm{target}}(\mathrm{control}-\mathrm{sample})} / E_{\mathrm{reference}}^{\Delta Ct_{\mathrm{reference}}(\mathrm{control}-\mathrm{sample})}$, where E is the primer amplification efficiency and ΔCt is the difference in threshold cycle between control and sample.
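For readers who prefer a scripted version of the efficiency-corrected calculation above, a minimal Python sketch is given below; it is not the authors' Excel workbook, and the function name, efficiencies and Ct values in the example are hypothetical placeholders.

```python
# Minimal sketch (not the authors' original Excel sheet) of the efficiency-corrected
# relative quantification given above; efficiencies (E) and Ct values are placeholders.
def relative_expression(e_target, ct_target_control, ct_target_sample,
                        e_reference, ct_ref_control, ct_ref_sample):
    """E_target^dCt_target(control-sample) / E_reference^dCt_reference(control-sample)."""
    return (e_target ** (ct_target_control - ct_target_sample)
            / e_reference ** (ct_ref_control - ct_ref_sample))

# Hypothetical example: transgene (target) vs. ARPC2 (reference), full medium vs. starved.
fold_change = relative_expression(
    e_target=1.95, ct_target_control=28.4, ct_target_sample=25.1,
    e_reference=2.00, ct_ref_control=19.2, ct_ref_sample=19.5,
)
print(f"fold change vs. control: {fold_change:.2f}")
```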
Reference genes for normalizations were ARPC2 (actin-related protein 2/3 complex, subunit 2) for HeLa and HepG2 cells; and Actb (actin beta) for C2C12 cells, unless otherwise indicated.\nsiRNA (Mission esiRNA, 200 ng/μL; Sigma) against ATF4 and GCN2 were designed against the targeted sequences NM_001675 and NM_001013703, respectively. Cells seeded in 6-well plates were transfected with 1 μg of siRNAs and 5 μL of Lipofectamine 2000 (Invitrogen), following manufacturer’s instructions, at day 1 post-plating for ATF4 and at day 1 and 2 post-plating for GCN2. At day 2 (ATF4) or 3 (GCN2) post-plating, cells were washed and cultured in medium in the absence or presence of HisOH 4 mM for 6 h. siRNAs against RLuc (Sigma), targeting Renilla Luciferase, were used as negative control. For CRISPR/Cas9 experiments, we used the “all-in-one Cas9-reporter” vector, expressing GFP (Sigma), which is characterized by a single vector format including the Cas9 protein expression cassette and gRNA (guide RNA). GFP is co-expressed from the same mRNA as the Cas9 protein, enabling tracking of transfection efficiency and enrichment of transfected cells by fluorescence activated cell sorting (FACS). The human U6 promoter drives gRNA expression, and the CMV promoter drives Cas9 and GFP expression. The oligonucleotide sequences for the three gRNAs targeting GCN2 exon 1 or 6 are listed in S2 Table. We transfected HeLa and HepG2 cells with these plasmids individually (one plasmid one guide) and sorted the GFP-positive, transfected cells by FACS. Screening GCN2-KO clones was performed by western blotting. In the case of HepG2-OA1 cells, two rounds of selection were necessary to obtain three GCN2-KO clones by using a guide RNA against exon 1. Compared to the original HepG2-OA1 cell line and to the clone resulting from the first round of selection (185#27), the selected clones E23, F22 and F27 showed a very low amount—if any—of residual GCN2 protein (see results).\nGenomic DNA of HeLa and HepG2 cells was purified using DNeasy Blood and Tissue kit (Qiagen), according to the manufacturer’s instructions. DNA concentration was determined by Nanodrop 8000 Spectrophotometer (Thermo Scientific). PCR conditions for amplification of GCN2 exon 1 and 6 were as follows: 1 cycle at 94°C for 5 min, followed by 35 cycles at 94°C for 40 sec, 56°C for 40 sec, and 72°C for 40 sec; and a final extension step of 5 min at 72°C. The primer sequences are provided in S2 Table.\nFor OA1, western immunoblotting was carried out as described . For GCN2, cells were lysed in RIPA buffer, boiled at 95°C for 5 min and resolved on a 7.5% polyacrylamide gel; immunoblotting was then performed following standard procedures. Primary Abs were as follows: anti-human OA1, previously developed by our group in rabbits ; anti-GCN2 (Cell Signaling, Cat. #3302).\nStatistical analyses were performed using Microsoft Excel for Mac (version 15.32, Microsoft) for Student’s t-test; or GraphPad Prism (version 5.0d for Mac, GraphPad Software, Inc.) for one-way analysis of variance (ANOVA), followed by Dunnett’s or Tukey’s multiple comparisons post-tests. T-test was used when only two means, typically sample versus control, were compared, as specified in the figure legends. One way ANOVA was used for multiple comparisons, followed by either a Dunnett’s (to compare every mean to a control mean), or a Tukey’s (to compare every mean with every other mean) post-test, by setting the significance level at 0.05 (95% confidence intervals). 
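As a rough, open-source illustration of the comparisons described in this paragraph (performed in the study with Excel and GraphPad Prism), the sketch below uses SciPy and statsmodels; it assumes SciPy ≥ 1.11 for stats.dunnett, and the group values are placeholders rather than data from the study.

```python
# Minimal sketch of the statistical comparisons described above (the study used Excel
# and GraphPad Prism); requires SciPy >= 1.11 for stats.dunnett. Values are placeholders.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([1.0, 1.1, 0.9])   # e.g. full medium (fold change = 1)
starved = np.array([3.2, 2.8, 3.5])   # e.g. Met/Cys-deprived
treated = np.array([1.4, 1.2, 1.6])   # e.g. Met/Cys-deprived + inhibitor

# Two-sample comparison (sample vs. control), two-tailed, assuming equal variance.
t_res = stats.ttest_ind(starved, control, equal_var=True)

# One-way ANOVA, then Dunnett's post-test (every mean vs. the control mean).
anova = stats.f_oneway(control, starved, treated)
dunnett = stats.dunnett(starved, treated, control=control)

# Tukey's post-test (every mean vs. every other mean), significance level 0.05.
values = np.concatenate([control, starved, treated])
groups = ["control"] * 3 + ["starved"] * 3 + ["treated"] * 3
tukey = pairwise_tukeyhsd(values, groups, alpha=0.05)

print(t_res.pvalue, anova.pvalue, dunnett.pvalue, tukey.summary(), sep="\n")
```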
Both tests compare the difference between means to the amount of scatter, quantified using information from all the groups. Specifically, Prism computes the Tukey-Kramer test, allowing unequal sample sizes. P values in Figures generally refer to the comparison between a sample and the control (full medium/mock), and are indicated as follows: *P<0.05, **P<0.01, ***P<0.001. Comparisons not involving the control are similarly indicated, by a horizontal line at the top of the graphs, encompassing the two samples under analysis. Additional details regarding the specific experiments are reported in the Figure Legends.\nTo examine the expression behavior of genomic repeats upon AA starvation, we performed a transcriptomic analysis taking advantage of an intramural sequencing facility. HeLa-OA1 cells were cultured in normal medium (for 6-30-120 hours) or in the absence of Met/Cys (for 6-15-30-72-120 hours). Total RNA was prepared using Trizol (Sigma) to preserve transcripts of both small and long sizes (from Alu, of about 0.3 kb, to Long Interspersed Nuclear Elements, LINEs, and ERVs, up to 6–8 kb long), DNase treated to avoid contamination of genomic DNA, and processed for NGS sequencing by the Ovation RNA-Seq System V2 protocol on a HiSeq 2000 apparatus. Raw sequence data (10–20 M reads/sample) were aligned to the human genome (build hg19) with SOAPSplice. Read counts over repeated regions, defined by the RepeatMasker track from the UCSC genome browser, were obtained using the bedtools suite. Normalization factors and read dispersion (d) were estimated with edgeR, and variation of abundance over time was analyzed using the maSigPro package, fitting a negative binomial distribution (Θ = 1/d, Q = 0.01), with a cutoff on the stepwise regression fit of r² = 0.7. Read counts were transformed to RPKM for visualization purposes. The OA1 transgene and HDAC4, which are progressively up- and down-regulated during starvation, respectively, were used as internal controls.\nFor genomic repeat analysis, reads belonging to repetitive elements were classified according to RepeatMasker and assigned to repeat classes (total number in the genome = 21), families (total number in the genome = 56) and finally subfamilies (total number in the genome = 1396), each including a variable number of genomic loci (from a few hundred for endogenous retroviruses, up to several thousand for Alu). Repeat subfamilies were then clustered according to their expression pattern in starved vs control cells, by maSigPro using default parameters, and repeat classes or families that are significantly enriched in each cluster, compared to all genomic repeats, were identified by applying a Fisher Exact test (using scipy.stats, a statistical module of Python). Alternatively, differentially expressed repeat subfamilies were identified by averaging three time points of starvation (15-30-72 h) and controls. Repeats significantly up- or downregulated (104 and 77, respectively) were selected based on a P value <0.05 (unpaired two-tailed Student's t-test, assuming equal variance), and analyzed for their class enrichment by a Fisher Exact test as described above.\nFor gene set enrichment analysis of Met/Cys deprived vs control HeLa cells, differentially expressed genes were selected considering three time points of starvation (15-30-72 h) and controls, based on a P value <0.05 (unpaired two-tailed Student's t-test, assuming equal variance) and a fold change >2.
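A minimal sketch of the two selection/enrichment steps just described (t-test plus fold-change filtering of an expression matrix, and the Fisher exact test from scipy.stats for repeat-class enrichment) is given below; the matrix layout, function names and toy counts are assumptions for illustration only, not the authors' pipeline.

```python
# Minimal sketch of the selection and enrichment steps described above: (i) unpaired
# two-tailed t-test (equal variance) plus a fold-change filter on a features x replicates
# matrix, and (ii) a Fisher exact test (scipy.stats) for repeat-class enrichment.
# Matrix layout and the toy counts below are assumptions for illustration only.
import numpy as np
from scipy import stats

def select_differential(expr_starved, expr_control, p_cutoff=0.05, fc_cutoff=2.0):
    """Rows are features (genes or repeat subfamilies), columns are replicates/time points."""
    _, p = stats.ttest_ind(expr_starved, expr_control, axis=1, equal_var=True)
    fc = expr_starved.mean(axis=1) / expr_control.mean(axis=1)
    up = (p < p_cutoff) & (fc > fc_cutoff)
    down = (p < p_cutoff) & (fc < 1.0 / fc_cutoff)
    return up, down

def class_enrichment(n_class_in_set, n_set, n_class_total, n_total):
    """2x2 Fisher exact test: is a repeat class over-represented in the selected set?"""
    table = [[n_class_in_set, n_set - n_class_in_set],
             [n_class_total - n_class_in_set,
              (n_total - n_set) - (n_class_total - n_class_in_set)]]
    return stats.fisher_exact(table, alternative="greater")

# Toy example: LTR subfamilies among 104 upregulated repeats vs. 1396 annotated subfamilies.
print(class_enrichment(n_class_in_set=40, n_set=104, n_class_total=170, n_total=1396))
```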
This led to a total of 2033 differentially expressed genes, 996 upregulated and 1037 downregulated. The enrichment analysis was performed separately for up and down regulated genes, or with all differentially expressed genes together (both), using the KEGG database. The analysis was performed with correction for the background of all expressed genes (about 13600 genes showing an average expression over 3 starvation and 3 control samples of at least 5 counts) and by using default parameters (adjusted P value and q-value cut-off of <0.05 and 0.2, respectively). Differentially expressed genes were also selected considering all starvation time points, as with genomic repeats, by maSigPro using default parameters, and a fold change of at least 1.5, leading to similar enrichment results (not shown). RNAseq gene expression data are available in the ArrayExpress database under the accession number E-MTAB-6452.\nTo provide proof-of-principle that AA starvation may affect the expression of transposable elements, we performed an RNAseq analysis of the previously described HeLa-OA1 cells, carrying an integrated and partially silenced OA1 transgene . Since the reactivation of the transgene by starvation is a progressive phenomenon , we performed a time-course experiment, where each time point represents one biological sample, rather than a biological triplicate of a single time point. To this aim, cells were cultured either in normal medium, or in absence of Met/Cys for different time points (6-15-30-72-120 hours), resulting in the progressive upregulation of the OA1 transgene during starvation (Fig 1A and 1B), consistent with previously published results . The expression of genomic repeats was determined according to RepeatMasker annotation and classification into classes, families, and subfamilies. Repeat species were then subjected to differential expression and enrichment analyses in starved vs control conditions. Out of 1396 annotated repeat subfamilies, 172 species displayed a differential expression profile during starvation.\nFig 1. Exogenous transgene and endogenous retroviruses are upregulated in Met/Cys-deprived HeLa cells.\n(A,B) Exogenous integrated transgene (OA1) mRNA abundance in HeLa-OA1 cells, cultured in Met/Cys-deprived medium for the indicated time points, and analyzed by RNAseq (A), or RT-qPCR (B), compared to full medium. Data represent RPKM (A), or mean ± SD of 2 technical replicates, expressed as fold change vs. control (full medium at 6 h = 1) (B). (C) Clustering of 172 genomic repeat subfamilies, differentially expressed upon starvation, according to their expression profile. (D) Class distribution of repeat subfamilies belonging to differential expression clusters, compared to all genomic repeat subfamilies (first column). Class DNA includes DNA transposons; SINE includes Alu; LINE includes L1 an L2; LTR includes endogenous retroviruses and solitary LTRs; Satellite includes centromeric acrosomal and telomeric satellites; Others includes SVA, simple repeats, snRNA, and tRNAs. LTR-retroelements are significantly enriched among repeats that are upregulated upon starvation, while LINEs are significantly enriched among repeats that are downregulated. *P<0.05, ***P<0.001 (Fisher exact test).\nAs shown in Fig 1C, the clustering of differentially expressed repeats, according to their expression pattern, reveals profiles comparable to the behavior of the transgene in the same conditions, i.e. upregulation upon starvation and no change in regular medium (Cluster 1 and 2). 
In particular, Cluster 1 contains sequences that, similarly to the OA1 transgene, are progressively upregulated upon starvation (Fig 1A and 1C) , while Cluster 2 contains sequences that are upregulated at early time points. Interestingly, repeat families that are significantly enriched in these two clusters belong mostly to the group of LTR-retrotransposons, including ERV1, ERVK, ERVL, ERVL-MaLR and other LTR sequences (Fig 1D; S1A and S2A Figs). By contrast, DNA transposons (such as TcMar-Tigger) and L1 non-LTR retrotransposons are enriched among repeats that are downregulated during starvation, particularly at late time points (Clusters 3 and 4) (Fig 1D; S1A and S2A Figs). Consistent results were obtained by selecting significantly up- or downregulated genomic repeats (overall 181 species), based on their average expression out of three time points of starvation (15-30-72 h, when the transgene upregulation is more homogeneous) and controls, and on a P value <0.05 (S1B and S2B Figs). These findings suggest that EAA starvation induces genome-wide effects involving repetitive elements, and that—among major repeat classes—it upregulates in particular the expression of ERVs.\nIn addition, to obtain a general overview of main gene pathways changing their expression together with the transgene during AA starvation, we performed gene expression and enrichment analyses of regular genes, by considering three time points of starvation (15-30-72 h) and controls. Differentially expressed genes were selected based on a P value <0.05 and a fold change between means of at least 2, and analyzed with the EnrichR tool . As shown in Fig 2 and S1 File, enrichment analyses against the KEGG and Reactome databases reveals a predominance of downregulated pathways, namely ribosome and translation, proteasome, AA metabolism, oxidative phosphorylation and other pathways related to mitochondrial functions, which are affected in Huntington, Alzheimer and Parkinson diseases (http://www.genome.jp/kegg/pathway.html). In particular, a large fraction of ribosomal protein mRNAs is downregulated upon Met/Cys starvation (Fig 2A and 2C; S1 File), consistent with the notion that their genes–despite being scattered throughout the genome—are coordinately expressed in a variety of conditions . This reduced expression may depend on multiple pathways that control ribosome biogenesis in response to external stimuli, including the downregulation of Myc activity , the downregulation of mTORC1 [42, 44], or possibly the activation of the ISR, as described in yeast . By contrast, upregulated genes show a significant enrichment for transcription and gene expression (Fig 2B). Similar results were obtained by the Gene Ontology Biological Process (GO-BP) database (S1 File), overall indicating a general downregulation of translation and metabolism, and upregulation of transcription, during the time interval of Met/Cys starvation corresponding to the transgene upregulation.\nFig 2. Gene set enrichment analysis of Met/Cys-deprived HeLa cells.\nDifferentially expressed genes between three time points of starvation (15-30-72 h) and controls were selected based on a P value <0.05 and a fold change of at least 2, leading to a total of 996 upregulated, and 1037 downregulated genes. The enrichment analysis was performed separately for up and down regulated genes, using the EnrichR tool and the KEGG (A) and REACTOME (B, C) databases. 
Ranking is based on the combined score provided by EnrichR, and categories are displayed up to 20 items with an Adjusted P value <0.05. No significant categories were found with upregulated genes against the KEGG database. All data are shown in S1 File. The enrichment analysis using all differentially expressed genes together did not reveal any additional enriched process.\nTo characterize the pathway leading to the reactivation of silenced transgenes, we used HeLa-OA1 and HeLa-GFP cells, as described . In addition, to test cell types relevant for AA metabolism, such as liver and muscle, we generated clones of HepG2 human hepatoma and C2C12 mouse skeletal muscle cells, stably transfected with plasmids for OA1 and GFP transgenes, respectively (HepG2-OA1 and C2C12-GFP cells; endogenous OA1 is not expressed in any of these cell types). In all cases, the integrated transgenes are under the control of the CMV promoter in the context of a pcDNA3.1 plasmid, are partially silenced, and can be efficiently upregulated by HDAC inhibitors (trichostatin A, TSA; ref. and S3A, S3B and S4A Figs), indicating that their expression is controlled at least in part by epigenetic mechanisms, as previously described .\nTo establish whether the reactivation response results from the shortage of specific AAs only, such as Met/Cys, or it is triggered by any AA deprivations, we cultured HeLa-OA1, HeLa-GFP, HepG2-OA1 and C2C12-GFP cells for 24–48 hours with a battery of media deprived of EAAs or semi-EAAs, including Met/Cys, Thr, Gln, Val, Leu, Tyr, Trp, Lys, and His. As negative controls, cells were cultured in full medium, carrying the entire AA complement, and in a medium deprived of Ala, a non-essential AA. The expression of the transgene transcript was then evaluated by RT-qPCR. As shown in Fig 3, and in S3C and S4B Figs, most EAA-deficiencies induced reactivation of the OA1 or GFP transgenes in all four cell lines, with the notable exception of Trp deprivation, which consistently resulted in no or minimal reactivation of the transgenes. Indeed, despite some variability, Met/Cys deficiency, but also Thr, Val, Tyr, and His deprivation always gave an efficient response, while Leu, Gln and Lys elicited evident responses in some cases, but not in others. Depletion of Phe gave results comparable to Tyr deprivation, however it significantly altered multiple reference genes used for normalization and therefore was eventually omitted from the analysis (not shown). Finally, in the above experiments we used a combined Met/Cys deficiency, to avoid the potential sparing of Met by Cys and for consistency with our previous studies . Nevertheless, the analysis of single Met or Cys starvation, both at the protein and transcript levels, revealed an exclusive role of Met deprivation in transgene reactivation, consistent with the notion that Cys is not an EAA (S3D and S3E Fig).\nFig 3. EAA deprivation induces reactivation of silent transgenes in HeLa and HepG2 cells.\nRelative transgene (OA1) and CHOP mRNA abundance in HeLa-OA1 (A) and HepG2-OA1 (B) cells, cultured in various AA-deprived media for 48 h and 24 h, respectively, compared to full medium. Mean ± SEM of 3 independent experiments. Data are expressed as fold change vs. control (full medium = 1). *P<0.05, **P<0.01, ***P<0.001 (one way ANOVA, followed by Dunnett’s post-test vs. 
full medium).\nCollectively, these results indicate that transgene reactivation by EAA starvation is reproducible with most EAAs, shared by different cell types (epithelium, liver, and skeletal muscle), and conserved in different mammalian species (human, mouse).\nmTORC1 inhibition and GCN2 activation trigger the best-known signaling pathways responding to AA starvation . We previously showed that inhibition of mTORC1 is not sufficient to reproduce transgene reactivation in HeLa cells . By contrast, the involvement of GCN2 and the ISR, including the downstream effectors ATF4 and CHOP, has never been tested. In addition, this pathway has been typically assessed in transient assays, lasting for a few hours, which may not be comparable with the prolonged starvation conditions necessary to reactivate the transgene expression (at least 15–24 h). Thus, we tested whether CHOP expression was upregulated upon incubation of HeLa-OA1, HepG2-OA1 and C2C12-GFP cells in media deprived of different EAAs for 24–48 h.\nAs shown in Fig 3 and S4B Fig, we found that CHOP expression is increased in all EAA-starvation conditions, but not in the absence of Ala, in all tested cell lines. Similar, yet less pronounced, results were obtained with ATF4, consistent with the notion that activation of this transcription factor is mainly mediated by translational upregulation (not shown) [15, 26]. However, the upregulation of CHOP does not parallel quantitatively that of the transgene, neither appears sufficient to induce it. In fact, CHOP is highly upregulated even upon Trp starvation, which consistently results in no or minimal reactivation of the transgenes (compare CHOP with OA1 or GFP expression; Fig 3 and S4B Fig). Thus, while the ISR appears widely activated upon EAA starvation, the upregulation of its downstream effector CHOP only partly correlates with transgene reactivation and may not be sufficient to induce it.\nThe activation of the ISR upon AA starvation suggests that GCN2 may be involved in the transgene reactivation response. Therefore, we tested whether direct pharmacological activation of this kinase is sufficient to trigger the transgene reactivation similarly to starvation. In addition, we used pharmacological inhibitors of mTOR to corroborate previous negative results in HeLa cells in the other cell lines under study. To this aim, HeLa-OA1 or GFP, HepG2-OA1 and C2C12-GFP cells were cultured in the presence of different concentrations of PP242 (mTOR inhibitor) or L-Histidinol (GCN2 activator, inhibiting tRNAHis charging by histidyl-tRNA synthetase), either alone or in combination for 24 h, compared to Met/Cys-deprived and full medium. As shown in Fig 4 and S5 Fig, while inhibition of mTORC1 consistently leads to minor or no effects, in agreement with previous findings , treatment with L-Histidinol results in efficient reactivation of the transgene in HepG2-OA1 and C2C12-GFP cells, but not in HeLa cells.\nFig 4. mTOR inhibition and GCN2 activation differently affect transgene expression in HeLa and HepG2 cells.\nRelative transgene (OA1) and CHOP mRNA abundance in HeLa-OA1 (A) and HepG2-OA1 (B) cells, cultured in Met/Cys-deprived medium, or in the presence of PP242 (mTOR inhibitor; 1–3 μM) or L-Histidinol (HisOH, GCN2 activator; 4–16 mM), either alone or in combination for 24–48 h, compared to full medium. Mean ± SEM of 4 (A) or 3 (B) independent experiments. Data are expressed as fold change vs. control (full medium = 1). *P<0.05, **P<0.01, ***P<0.001 (one way ANOVA, followed by Dunnett’s post-test vs. 
full medium). PP-1 and PP-3, PP242 at 1 and 3 μM, respectively; HisOH-4 and HisOH-16, L-Histidinol at 4 and 16 mM, respectively.\nSpecifically, L-Histidinol is not effective in HeLa-OA1 and HeLa-GFP cells, either alone or in combination with PP242 (Fig 4A and S5A Fig), or by using different concentrations of the drug, with or without serum (not shown). In these cells, L-Histidinol appears also unable to trigger the ISR, as indicated by lack of CHOP upregulation, possibly due to their different sensitivity to the drug. These findings are consistent with previous reports, describing the use of L-Histidinol in HeLa cells in conditions of low His concentration in the culture medium , which would resemble AA starvation in our system and therefore may not be applicable. Thus, even though the amount of the amino alcohol was adapted to exceed 20 to 80 times that of the amino acid, as described , HeLa cells may be resistant or able to compensate.\nIn contrast, in other cell types, L-Histidinol has been utilized in regular DMEM, to mimic the AA response triggered by DMEM lacking His [48, 49]. Consistently, in HepG2-OA1 cells, L-Histidinol is sufficient to elicit extremely high levels of transgene reactivation, and its combination with PP242 results in additive or even synergistic effects, possibly due to an indirect effect of mTOR inhibition on GCN2 activity (Fig 4B) [50, 51]. Similarly, C2C12-GFP cells efficiently reactivate the transgene upon treatment with L-Histidinol, but not PP242 (S5B Fig). However, differently from HepG2-OA1 cells, simultaneous treatment of C2C12-GFP cells with L-Histidinol and PP242 does not lead to synergistic effects. Consistent with stimulation of the ISR, CHOP and to a minor extent ATF4 are upregulated by L-Histidinol in both cell lines, yet their expression levels show only an incomplete correlation with those of the transgene (Fig 4B, S5B Fig, and not shown).\nThe finding that GCN2 activation by L-Histidinol is sufficient to reactivate the transgenes in both HepG2-OA1 and C2C12-GFP cells pointed to this kinase, and to the downstream ISR, as the pathway possibly involved in the EAA starvation response. Thus, we investigated whether the ISR is sufficient to trigger upregulation of the OA1 transgene in HepG2-OA1 cells by pharmacological means. As CHOP expression does not correspond quantitatively and is not sufficient to induce transgene reactivation, we tested the role of the core upstream event of the ISR, namely the phosphorylation of eIF2α , which can be induced by pharmacological treatments, independent of GCN2 (Fig 5A). To this aim, we used Salubrinal, a specific phosphatase inhibitor that blocks both constitutive and ER stress-induced phosphatase complexes against eIF2α, thereby increasing its phosphorylation . We found that, while the ISR is activated upon Salubrinal treatment, as shown by increased CHOP expression, it does not induce OA1 transgene reactivation (Fig 5B).\nFig 5. The ISR is neither sufficient nor necessary to induce transgene reactivation in HepG2 cells.\n(A) Schematic representation of GCN2 activation by AA starvation, resulting in phosphorylation of eIF2a and initiation of the downstream ISR. In addition to GCN2, the ISR may be activated by other eIF2a kinases (PKR, HRI and PERK; not shown in the picture). (B) Relative transgene (OA1) and CHOP mRNA abundance in HepG2-OA1 cells treated for 24 h with Salubrinal (a drug that induces the ISR by inhibiting the dephosphorylation of eIF2α; 75 μM), compared to full medium. 
Mean ± range of two experiments. Data are expressed as fold change vs. control (DMEM = 1). *P<0.05 (paired two-tailed Student’s t-test vs. control). (C) Relative transgene (OA1) and CHOP mRNA abundance in HepG2-OA1 cells treated for 6 h with L-Histidinol (HisOH, GCN2 activator; 4 mM), in the absence or presence of ISRIB (a drug that bypasses the phosphorylation of eIF2α, inhibiting triggering of the ISR; 100 nM). Mean ± range of two experiments. Data are expressed as fold change vs. control (DMEM = 1). **P<0.01, ***P<0.001 (one way ANOVA, followed by Tukey’s post-test; P values refer to comparisons vs. control, unless otherwise indicated). (D) Relative transgene (OA1) and ATF4 mRNA abundance in HepG2-OA1 cells transfected with control (CTRL) or anti-ATF4 siRNAs, and incubated in the presence or absence of L-Histidinol (HisOH, GCN2 activator; 4 mM) for 6 h. Mean ± range of two experiments. Data are expressed as fold change vs. control (w/o HisOH = 1, top; control siRNA = 1, bottom). *P<0.05 (one way ANOVA, followed by Tukey’s post-test; P values refer to comparisons vs. control, unless otherwise indicated).\nTo test whether the ISR is necessary to trigger the transgene response to L-Histidinol, we used the chemical compound ISRIB, which inhibits the activation of the ISR, even in the presence of phosphorylated eIF2α, likely by boosting the activity of the guanine-nucleotide exchange factor (GEF) for eIF2α, namely eIF2B [53, 54]. HepG2-OA1 cells were stimulated with L-Histidinol, either in the presence or absence of ISRIB. As shown in Fig 5C, while the expression of CHOP is inhibited by ISRIB, as expected, the reactivation of the OA1 transgene is not affected. In addition, knockdown of the closest eIF2α downstream effector ATF4 by siRNAs does not interfere with the reactivation of the OA1 transgene by L-Histidinol (Fig 5D). Together, these data suggest that eIF2α phosphorylation and the downstream ISR pathway are neither sufficient nor necessary to induce transgene reactivation.\nTo definitively establish if GCN2 is necessary to trigger the transgene reactivation response to EAA starvation, we directly suppressed its expression by CRISPR/Cas9-mediated knock-out (KO). We generated three independent GCN2-KO clones from the parental HeLa-OA1 cell line, by using three different guide RNAs, two against exon 1 (clones 183#11 and 185#5), and one against exon 6 (clone 239#1) of the GCN2 gene. Genomic characterization confirmed the presence of mutations on both alleles of exon 1 of the GCN2 gene in clone 183#11, and on both alleles of exon 6 in clone 239#1; by contrast, clone 185#5 showed multiple alleles in exon 1, consistent with the presence of two cell populations, and was not characterized further at the genomic level (S6 Fig). None of these clones express GCN2 at the protein level, as shown by immunoblotting (Fig 6A). To test the GCN2-KO cells for their ability to respond to EAA starvation, parental HeLa-OA1 cells and the three GCN2-KO clones were cultured in media deprived of Met/Cys or Thr (corresponding to the most effective treatments in this cell line; see Fig 3A) for 24–48 h and transgene expression was assessed by RT-qPCR. We found that the reactivation of the OA1 transgene is neither abolished, nor reduced by KO of GCN2, thus excluding that this kinase is necessary for the response to EAA starvation in HeLa-OA1 cells (Fig 6B and 6C).\nFig 6. 
GCN2 knockout does not interfere with transgene reactivation in HeLa cells.\n(A) Immunoblotting of protein extracts from the HeLa-OA1 parental cell line and GCN2-KO clones 183#11, 185#5 and 239#1, immunodecorated with anti-GCN2 antibody. Arrow, GCN2 specific band. Ponceau staining was used as loading control. (B, C) Relative transgene (OA1) mRNA abundance in HeLa-OA1 cells and GCN2-KO clones, cultured in Met/Cys (B) or Thr (C) deprived medium for 24 h or 48 h, respectively, compared to full medium. Mean ± SD of 3 technical replicates from 1 experiment. Data are expressed as fold change vs. control (full medium = 1). Since independent clones may display variable reactivation responses (e.g. due to different levels of transgene expression in basal conditions), the results are not shown as means of the three clones, but as separate replicates.\nSimilarly, we generated GCN2-KO clones from the parental HepG2-OA1 cell line by the same strategy. By using a guide RNA against exon 1 of the GCN2 gene, we obtained three independent GCN2-KO clones, namely E23, F22 and F27. Genomic characterization confirmed the presence of mutations on both alleles of exon 1 of the GCN2 gene in clone F27 (S7 Fig) and all three clones showed a very low amount—if any—of residual GCN2 protein, compared to the original HepG2-OA1 cell line (Fig 7A). To assess the ability of GCN2-KO cells to reactivate the transgene upon starvation, we cultured parental HepG2-OA1 cells and the three GCN2-KO clones in media deprived of Met/Cys or His (corresponding to the most effective treatments in this cell line; see Fig 3B) for 24 h, and evaluated the transgene expression by RT-qPCR. As shown in Fig 7B and 7C, we found that the reactivation of the OA1 transgene is neither abolished, nor reduced by KO of GCN2, as in HeLa cells. To further confirm this result, we knocked-down GCN2 by RNA interference (RNAi), and incubated the cells with or without L-Histidinol for 6 h. As shown in Fig 8, treatment of HepG2-OA1 cells with L-Histidinol results in efficient transgene reactivation, even upon significant GCN2 downregulation, both at the mRNA and protein levels. Taken together, these data strongly support the conclusion that GCN2 is not necessary for transgene reactivation in response to EAA starvation, either in HeLa or in HepG2 cells.\nFig 7. GCN2 knockout does not interfere with transgene reactivation in HepG2 cells.\n(A) Immunoblotting of protein extracts from the HepG2-OA1 parental cell line and GCN2-KO clones 185#27, E23, F22, F27, immunodecorated with anti-GCN2 antibody. Clone 185#27 results from the first round of selection, and was used to generate clones E23, F22, F27. Arrow, GCN2 specific band. For GCN2 protein quantification, Ponceau staining was used as loading control and data are expressed as fold change vs. parental cell line (= 1). (B, C) Relative transgene (OA1) mRNA abundance in HepG2-OA1 cells and GCN2-KO clones, cultured in Met/Cys (B) or His (C) deprived medium for 24 h, compared to full medium. Mean ± SD of 3 technical replicates from 1 experiment.", "answers": ["No, it is not necessary."], "length": 6900, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "1d46294ee8fcc0a64828778b04198fcbe4f75841775e1205"} {"input": "What type of distribution do the tail distributions of price returns follow?", "context": "Paper Info\n\nTitle: Age and market capitalization drive large price variations of cryptocurrencies\nPublish Date: 23 Feb 2023\nAuthor List: \n\nFigure\n\nFigure 3. 
Illustration of different effects of age and market capitalization on power-law exponents of cryptocurrencies.(a) Posterior probability distributions of the linear coefficients associated with the effects of age [p(A)] and (b) the effects of market capitalization [p(C)] on power-law exponents related to large positive returns.Panels (c) and (d) show the analogous distributions for the association with power-law exponents related to large negative returns.In all panels, the different curves show the distributions for each of the top 20 cryptoassets by market capitalization.Cryptocurrencies significantly affected by age or market capitalization are highlighted in boldface, and the numbers between brackets show their positions in the market capitalization rank.\nFigure S5.There is more probability mass in the positive tail than in the negative tail of price returns.(a) Probability distributions of the lower cut-offs (r min ) obtained by applying the Clauset-Shalizi-Newman method to positive (blue) and negative (red) returns.The vertical dashed lines indicate the median values of r min for positive and negative returns.(b) Probability distributions of 90th percentiles (r 90 ) estimated from the power-law models adjusted to positive (blue) and negative (red) returns.The vertical dashed lines indicate the median values of r 90 for positive and negative returns.(c) Probability distributions of the fraction of weeks that r 90 estimated from positive returns (r + 90 ) is larger than r 90 estimated from negative returns (r − 90 ).This fraction is calculated only for weeks in which the power-law hypothesis is not rejected for both tails.The percentage of cryptoassets for which r + 90 > r − 90 is shown in the panels.The first column of panels depicts the results when considering data from all cryptocurrencies, while the second and third columns present the results for the top 2000 and top 200 cryptocurrencies by market capitalization, respectively.\nFigure S7.Robustness of the results of Fig. 2(b)-(d) against considering only cryptocurrencies with fraction of rejection f r < 0.1.Panels (a) and (b) show the same distributions of Fig. S4 but after filtering out all time series of cryptocurrencies with fraction of rejections f r ≥ 0.1.As in the case related to sampling issues, we observe that these distributions barely change when considering only cryptocurrencies with f r < 0.1.Indeed, the distributions in this figure are not significantly distinguishable from their counterparts in Fig. S4 (two-sample Kolmogorov-Smirnov test, p > 0.05).\n\nabstract\n\nCryptocurrencies are considered the latest innovation in finance with considerable impact across social, technological, and economic dimensions. This new class of financial assets has also motivated a myriad of scientific investigations focused on understanding their statistical properties, such as the distribution of price returns.\nHowever, research so far has only considered Bitcoin or at most a few cryptocurrencies, whilst ignoring that price returns might depend on cryptocurrency age or be influenced by market capitalization. 
Here, we therefore present a comprehensive investigation of large price variations for more than seven thousand digital currencies and explore whether price returns change with the coming-of-age and growth of the cryptocurrency market.\nWe find that tail distributions of price returns follow power-law functions over the entire history of the considered cryptocurrency portfolio, with typical exponents implying the absence of characteristic scales for price variations in about half of them. Moreover, these tail distributions are asymmetric as positive returns more often display smaller exponents, indicating that large positive price variations are more likely than negative ones.\nOur results further reveal that changes in the tail exponents are very often simultaneously related to cryptocurrency age and market capitalization or only to age, with only a minority of cryptoassets being affected just by market capitalization or neither of the two quantities. Lastly, we find that the trends in power-law exponents usually point to mixed directions, and that large price variations are likely to become less frequent only in about 28% of the cryptocurrencies as they age and grow in market capitalization.\nSince the creation of Bitcoin in 2008 , various different cryptoassets have been developed and are now considered to be at the cutting edge of innovation in finance . These digital financial assets are vastly diverse in design characteristics and intended purposes, ranging from peer-to-peer networks with underlying cash-like digital currencies (e.g.\nBitcoin) to general-purpose blockchains transacting in commodity-like digital assets (e.g. Ethereum), and even to cryptoassets that intend to replicate the price of conventional assets such as the US dollar or gold (e.g. Tether and Tether Gold) . With more than nine thousand cryptoassets as of 2022 , the total market value of cryptocurrencies has grown massively to a staggering $2 trillion peak in 2021 .\nDespite long-standing debates over the intrinsic value and legality of cryptoassets , or perhaps even precisely due to such controversies, it is undeniable that cryptocurrencies are increasingly attracting the attention of academics, investors, and central banks, around the world . Moreover, these digital assets have been at the forefront of sizable financial gains and losses in recent years , they have been recognized as the main drivers of the brand-new phenomena of cryptoart and NFTs , but also as facilitators of illegal activities, such as money laundering and dark trade .\nFinancial research dedicated Our results are based on daily price time series of 7111 cryptocurrencies that comprise a significant part of all currently available cryptoassets (see Methods for details). From these price series, we have estimated their logarithmic returns 2/16 Log-return, r ). The black horizontal arrow represents a given position of the expanding time window (at t = 2004 days) used to sample the return series over the entire history of Bitcoin.\nThis time window expands in weekly steps (seven time series observations), and for each position, we separate the positive (blue) from the negative (red) price returns. The gray line illustrates observations that will be included in future positions of the expanding time window (t > 2004). 
(b) Survival functions or the complementary cumulative distributions of positive (blue) and negative (red) price returns within the expanding time window for t = 2004 days and above the lower bound of the power-law regime estimated from the Clauset-Shalizi-Newman method .\nThe dashed lines show the adjusted power-law functions, p(r) ∼ r −α , with α = 4.5 for positive returns and α = 3.0 for negative returns. (c) Time series of the power-law exponents α t for the positive (blue) and negative (red) return distributions obtained by expanding the time window from the hundredth observation (t = 100) to the latest available price return of Bitcoin.\nThe circular markers represent the values for the window position at t = 2004 days and the dashed lines indicate the median of the power-law exponents ( α+ = 4.50 for positive returns and α− = 2.99 for negative returns). (d) Time series of the p-values related to the power-law hypothesis of positive (blue) and negative (red) price returns for every position of the expanding time window.\nThe dashed line indicates the threshold (p = 0.1) above which the power-law hypothesis cannot be rejected. For Bitcoin, the power-law hypothesis is never rejected for positive returns (fraction of rejection f r = 0) and rejected in only 4% of the expanding time window positions (fraction of rejection f r = 0.04).\nwhere x t represents the price of a given cryptocurrency at day t. All return time series in our analysis have at least 200 observations (see Supplementary Figure for the length distribution). Figure (a) illustrates Bitcoin's series of daily returns. To investigate whether and how returns have changed over the aging and growing processes of all cryptocurrencies, we sample all time series of log-returns using a time window that expands in weekly steps (seven time series observations), starting from the hundredth observation to the latest return observation.\nIn each step, we separate the positive from the negative return values and estimate their power-law behavior using the Clauset-Shalizi-Newman method . Figure (a) further illustrates this procedure, where the vertical dashed line represents a given position of the time window (t = 2004 days), the blue and red lines indicate positive and negative returns, respectively, and the gray lines show the return observations that will be included in the expanding time window in future steps.\nMoreover, Fig. (b) shows the corresponding survival functions (or complementary cumulative distributions) for the positive (blue) and negative (red) returns of Bitcoin within the time window highlighted in Fig. (a). These survival functions correspond to return values above the lower bound of the power-law regime (r min ) and dashed lines in Fig. (b) show the power-law functions adjusted to data, that is,\nwith α = 4.5 for the positive returns and α = 3.0 for the negative returns in this particular position of the time window (t = 2004 days). We have further verified the goodness of the power-law fits using the approach proposed by Clauset et al. (see also Preis et al. ). 
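The tail-fitting step just described can be sketched as follows with the powerlaw Python package (the Clauset-Shalizi-Newman implementation cited in the Methods); prices is a hypothetical 1-D array of daily closing prices for a single cryptocurrency, and in the full analysis the fit is repeated for every weekly position of the expanding time window. The goodness-of-fit verification itself is sketched further below.

import numpy as np
import powerlaw

def fit_tails(prices):
    # Daily log-returns r_t = ln(x_t / x_{t-1}).
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    results = {}
    for label, tail in (("positive", r[r > 0]), ("negative", -r[r < 0])):
        fit = powerlaw.Fit(tail)          # estimates r_min by minimizing the KS distance
        results[label] = {"alpha": fit.power_law.alpha,   # power-law exponent
                          "r_min": fit.power_law.xmin,    # lower bound of the power-law regime
                          "ks": fit.power_law.D}          # KS distance between data and model
    return results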
As detailed in the Methods section, this approach consists in generating several synthetic samples under the power-law hypothesis, adjusting these simulated samples, and estimating the fraction of times the Kolmogorov-Smirnov distance between the adjusted power-law and the synthetic samples is larger than the value calculated from the empirical data.\nThis fraction defines a p-value and allows us to reject or not the power-law hypothesis of the return distributions under a given confidence level. Following Refs. we consider the more conservative 90% confidence level (instead of the more lenient and commonly used 95% confidence level), rejecting the power-law hypothesis when p-value ≤ 0.1.\nFor the particular examples in Fig. (b), the p-values are respectively 1.00 and 0.17 for the positive and negative returns, and thus we cannot reject the power-law hypotheses. After sampling the entire price return series, we obtain time series for the power-law exponents (α t ) associated with positive and negative returns as well as the corresponding p-values time series for each step t of the expanding time window.\nThese time series allow us to reconstruct the aging process of the return distributions over the entire history of each cryptoasset and probe possible time-dependent patterns. Figures ) and 1(d) show the power-law exponents and p-values time series for the case of Bitcoin. The power-law hypothesis is never rejected for positive returns and rarely rejected for negative returns (about 4% of times).\nMoreover, the power-law exponents exhibit large fluctuations at the beginning of the time series and become more stable as Bitcoin matures as a financial asset (a similar tendency as reported by Begušić et al. ). The time evolution of these exponents further shows that the asymmetry between positive and negative returns observed in Fig. ) is not an incidental feature of a particular moment in Bitcoin's history.\nIndeed, the power-law exponent for positive returns is almost always larger than the exponent for negative returns, implying that large negative price returns have been more likely to occur than their positive counterparts over nearly the entire history of Bitcoin covered by our data. However, while the difference between positive and negative exponents has approached a constant value, both exponents exhibit an increasing trend, indicating that large price variations are becoming less frequent with the coming-of-age of Bitcoin.\nThe previous analysis motivates us to ask whether the entire cryptocurrency market behaves similarly to Bitcoin and what other common patterns digital currencies tend to follow. To start answering this question, we have considered the p-values series of all cryptocurrencies to verify if the power-law hypothesis holds in general.\nFigure (a) shows the percentage of cryptoassets rejecting the power-law hypothesis in at most a given fraction of the weekly positions of the expanding time window ( f r ). Remarkably, the hypothesis that large price movements (positive or negative) follow a power-law distribution is never rejected over the entire history of about 70% of all digital currencies in our dataset.\nThis analysis also shows that only ≈2% of cryptocurrencies reject the power-law hypothesis in more than half of the positions of the expanding time window ( f r ≥ 0.5). 
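The rejection decisions behind these fractions come from the goodness-of-fit procedure described above. A simplified sketch is given below: synthetic samples are drawn from the fitted power law by inverse-transform sampling and refitted, and the p-value is the fraction of synthetic KS distances exceeding the empirical one (the hypothesis is rejected when p ≤ 0.1). The synthetic sample size and the refit details are simplifying assumptions, not the exact protocol.

import numpy as np
import powerlaw

def powerlaw_pvalue(tail, n_synthetic=1000, seed=0):
    rng = np.random.default_rng(seed)
    emp = powerlaw.Fit(tail)
    alpha, r_min, d_emp = emp.power_law.alpha, emp.power_law.xmin, emp.power_law.D
    n = int((tail >= r_min).sum())        # observations inside the power-law regime
    exceed = 0
    for _ in range(n_synthetic):
        u = rng.random(n)
        synthetic = r_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))   # inverse-transform sampling
        d_syn = powerlaw.Fit(synthetic).power_law.D               # refit the synthetic sample
        if d_syn > d_emp:
            exceed += 1
    return exceed / n_synthetic           # reject the power-law hypothesis if <= 0.1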
For instance, considering a 10% threshold as a criterion ( f r ≤ 0.1), we find that about 85% of cryptocurrencies have return distributions adequately modeled by power laws.\nIncreasing this threshold to a more lenient 20% threshold ( f r ≤ 0.2), we find large price movements to be power-law distributed for about 91% of cryptocurrencies. These results thus provide strong evidence that cryptoassets, fairly generally, present large price movements quite well described by power-law distributions.\nMoreover, this conclusion is robust when starting the expanding window with a greater . Large price movements are power-law distributed over the entire history of most cryptocurrencies with median values typically smaller than those found for traditional assets. (a) Percentage of cryptoassets rejecting the power-law hypothesis for large positive (blue) or negative (red) price returns in at most a given fraction of the weekly positions of the expanding time window ( f r ) used to sample the return series.\nRemarkably, 68% of all 7111 digital currencies are compatible with the power-law hypothesis over their entire history, and about 91% of them reject the power-law hypothesis in less than 20% of the positions of the expanding time window ( f r ≤ 0.2). (b) Probability distributions obtained via kernel density estimation of the median values of the power-law exponents along the history of each digital currency.\nThe blue curve shows the distribution of the median exponents related to positive returns ( α+ ) and the red curve does the same for negative returns ( α− ). The medians of α+ and α− are indicated by vertical dashed lines. Panels (c) and (d) show the distributions of these median exponents when considering the top 2000 and the top 200 cryptocurrencies by market capitalization, respectively.\nWe observe that the distributions of α+ and α− tend to shift toward larger values when considering the largest cryptoassets. number of return observations (between 100 and 300 days) and filtering out cryptoassets with missing observations (Supplementary Figures ). Still, it is worth noticing the existence of a few cryptoassets (9 of them) with relatively small market capitalization (ranking below the top 1000) for which the power-law hypothesis is always rejected (Supplementary Table ).\nHaving verified that large price movements in the cryptocurrency market are generally well-described by powerlaw distributions, we now focus on the power-law exponents that typically characterize each cryptoasset. To do so, we select all exponent estimates over the entire history of each digital asset for which the power-law hypothesis is not rejected and calculate their median values for both the positive ( α+ ) and negative ( α− ) returns.\nThe dashed lines in Fig. ) show these median values for Bitcoin where α+ = 4.50 and α− = 2.99. It is worth noticing that the variance of large price movements σ 2 is finite only for α > 3, as the integral σ 2 ∼ ∞ r min r 2 p(r)dr diverges outside this interval. Thus, while the typical variance of large positive returns is finite for Bitcoin, negative returns are at the limit of not having a typical scale and are thus susceptible to much larger variations.\nFigure shows the probability distribution for the median power-law exponents of all cryptoassets grouped by large positive and negative returns. 
We note that the distribution of typical power-law exponents associated with large positive returns is shifted to smaller values when compared with the distribution of exponents related to large negative returns.\nThe medians of these typical exponents are respectively 2.78 and 3.11 for positive and negative returns. This result suggests that the asymmetry in large price movements we have observed for Bitcoin is an overall feature of the cryptocurrency market. By calculating the difference between the typical exponents related to positive and negative large returns (∆α = α+ − α− ) for each digital currency, we find that about 2/3 of cryptocurrencies have α+ < α− (see Supplementary Figure for the probability distribution of ∆α).\nThus, unlike Bitcoin, most cryptocurrencies have been more susceptible to large positive price variations than negative ones. While this asymmetry in the return distributions indicates that extremely large price variations tend to be positive, it does not necessarily imply positive price variations are more common for any threshold in the return values.\nThis happens because the fraction of events in each tail is also related to the lower bound of the power-law regime (r min ). However, we have found the distribution of r min to be similar among the positive and negative returns [Supplementary Figure ]. The distribution of high percentile scores (such as the 90th percentile) is also shifted to larger values for positive returns [Supplementary Figure ].\nMoreover, this asymmetry in high percentile scores related to positive and negative returns is systematic along the evolution of the power-law exponents [Supplementary Figure ]. These results thus indicate that there is indeed more probability mass in the positive tails than in the negative ones, a feature that likely reflects the current expansion of the cryptocurrency market as a whole.\nThe distributions in Fig. ) also show that large price variations do not have a finite variance for a significant part of cryptoassets, that is, α+ ≤ 3 for 62% of cryptocurrencies and α− ≤ 3 for 44% of cryptocurrencies. A significant part of the cryptocurrency market is thus prone to price variations with no typical scale.\nIntriguingly, we further note the existence of a minority group of cryptoassets with α+ ≤ 2 (7%) or α− ≤ 2 (3%). These cryptocurrencies, whose representative members are Counos X (CCXX, rank 216) with α − = 1.96 and α + = 1.84 and Chainbing (CBG, rank 236) with α + = 1.87, are even more susceptible to extreme price variations as one cannot even define the average value µ for large price returns, as the integral µ ∼ ∞ r min rp(r)dr diverges for α ≤ 2. We have also replicated the previous analysis when considering cryptocurrencies in the top 2000 and top 200 rankings of market capitalization (as of July 2022).\nFigures ) and 2(d) show the probability distribution for the median power-law exponents of these two groups. We observe that these distributions are more localized (particularly for the top 200) than the equivalent distributions for all cryptocurrencies. The fraction of cryptocurrencies with no typical scale for large price returns ( α+ ≤ 3 and α− ≤ 3) is significantly lower in these two groups compared to all cryptocurrencies.\nIn the top 2000 cryptocurrencies, 51% have α+ ≤ 3 and 26% have α− ≤ 3. These fractions are even smaller among the top 200 cryptocurrencies, with only 44% and 15% not presenting a typical scale for large positive and negative price returns, respectively. 
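For reference, the moment conditions invoked above can be written out explicitly. Following the document's convention of treating the tail above r_min as a pure power law, the tail mean and second moment read, in LaTeX:

\begin{align*}
  p(r) &= \frac{\alpha-1}{r_{\min}}\left(\frac{r}{r_{\min}}\right)^{-\alpha}, \qquad r \ge r_{\min},\\
  \mu &\sim \int_{r_{\min}}^{\infty} r\,p(r)\,\mathrm{d}r = \frac{\alpha-1}{\alpha-2}\,r_{\min}, \qquad \text{finite only for } \alpha > 2,\\
  \sigma^{2} &\sim \int_{r_{\min}}^{\infty} r^{2}\,p(r)\,\mathrm{d}r = \frac{\alpha-1}{\alpha-3}\,r_{\min}^{2}, \qquad \text{finite only for } \alpha > 3.
\end{align*}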
We further observe a decrease in the fraction of cryptoassets for which the average value for large price returns is not even finite, as only 2% and 1% of top 2000 cryptoassets have α+ ≤ 2 and α− ≤ 2. This reduction is more impressive among the top 200 cryptocurrencies as only the cryptoasset Fei USD (FEI, rank 78) has α+ = 1.97 and none is characterized by α− ≤ 2. The medians of α+ and α− also increase from 2.78 and 3.11 for all cryptocurrencies to 2.98 and 3.35 for the top 2000 and to 3.08 and 3.58 for the top 200 cryptocurrencies.\nConversely, the asymmetry between positive and negative large price returns does not differ much among the three groups, with the condition α+ < α− holding only for a slightly larger fraction of top 2000 (69.1%) and top 200 (70.6%) cryptoassets compared to all cryptocurrencies (66.4%). Moreover, all these patterns are robust when filtering out time series with sampling issues or when considering only cryptoassets that stay compatible with the power-law hypothesis in more than 90% of the positions of the expanding time window (Supplementary Figures ).\nWe also investigate whether the patterns related to the median of the power-law exponents differ among groups of cryptocurrencies with different designs and purposes. To do so, we group digital assets using the 50 most common tags in our dataset (e.g. \"bnb-chain\", \"defi\", and \"collectibles-nfts\") and estimate the probability distributions of the median exponents α+ and α− (Supplementary Figures ).\nThese results show that design and purpose affect the dynamics of large price variations in the cryptocurrency market as the medians of typical exponents range from 2.4 to 3.7 among the groups. The lowest values occur for cryptocurrencies tagged as \"doggone-doggerel\" (medians of α+ and α− are 2.38 and 2.83), \"memes\" (2.41 and 2.87), and \"stablecoin\" (2.65 and 2.79).\nDigital currencies belonging to the first two tags overlap a lot and have Dogecoin (DOGE, rank 9) and Shiba Inu (SHIB, rank 13) as the most important representatives. Cryptoassets with these tags usually have humorous characteristics (such as an Internet meme) and several have been considered as a form of pump-and-dump scheme , a type of financial fraud in which false statements artificially inflate asset prices so the scheme operators sell their overvalued cryptoassets.\nConversely, cryptoassets tagged as \"stablecoin\" represent a class of cryptocurrencies designed to have a fixed exchange rate to a reference asset (such as a national currency or precious metal) . While the price of stablecoins tends to stay around the target values, their price series are also marked by sharp variations, which in turn are responsible for their typically small power-law exponents.\nThis type of cryptoasset has been shown to be prone to failures , such as the recent examples of TerraUSD (UST) and Tron's USDD (USDD) that lost their pegs to the US Dollar producing large variations in their price series. The asymmetry between positive and negative large returns also emerges when grouping the cryptocurrencies using their tags.\nAll 50 tags have distributions of α+ shifted to smaller values when compared with the distributions of α− , with differences between their medians ranging from −0.74 (\"okex-blockdream-ventures-portfolio\") to −0.14 (\"stablecoin\"). 
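The per-tag comparison addressed next, i.e. whether the distributions of α+ and α− within a tag can be told apart, can be sketched with scipy's two-sample Kolmogorov-Smirnov test. The DataFrame medians and its columns ("tag", "alpha_pos", "alpha_neg") are hypothetical names for the per-cryptocurrency median exponents and their tags.

from scipy import stats

def indistinguishable_tags(medians, alpha=0.05):
    # medians: pandas DataFrame with one row per cryptocurrency.
    tags = []
    for tag, group in medians.groupby("tag"):
        stat, p = stats.ks_2samp(group["alpha_pos"], group["alpha_neg"])
        if p > alpha:                     # the two distributions cannot be distinguished
            tags.append(tag)
    return tags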
Indeed, only four ('stablecoin\", \"scrypt\", \"fantom-ecosystem\" and \"alameda-research-portfolio\") out of the fifty groupings have both distributions indistinguishable under a two-sample Kolmogorov-Smirnov test (p-value > 0.05).\nFocusing now on the evolution of the power-law exponents quantified by the time series α t for positive and negative returns, we ask whether these exponents present particular time trends. For Bitcoin [Fig. )], α t seems to increase with time for both positive and negative returns. At the same time, the results of Fig. also suggest that market capitalization affects these power-law exponents.\nTo verify these possibilities, we assume the power-law exponents (α t ) to be linearly associated with the cryptocurrency's age (y t , measured in years) and the logarithm of market capitalization (log c t ). As detailed in the Methods section, we frame this problem using a hierarchical Bayesian model.\nThis approach assumes that the linear coefficients associated with the effects of age (A) and market capitalization (C) of each digital currency are drawn from distributions with means µ A and µ C and standard deviations σ A and σ C , which are in turn distributed according to global distributions representing the overall impact of these quantities on the cryptocurrency market.\nThe Bayesian inference process consists of estimating the posterior probability distributions of the linear coefficients for each cryptocurrency as well as the posterior distributions of µ A , µ C , σ A , and σ C , allowing us to simultaneously probe asset-specific tendencies and overall market characteristics.\nMoreover, we restrict this analysis to the 2140 digital currencies having more than 50 observations of market capitalization concomitantly to the time series of the power-law exponents in order to have enough data points for detecting possible trends. When considering the overall market characteristics, we find that the 94% highest density intervals for µ A ([-0.01, 0.06] for positive and [-0.02, 0.03] for negative returns) and µ C ([-0.02, 0.03] for positive and [-0.001, 0.04] for negative returns) include the zero (see Supplementary Figure for their distributions).\nThus, there is no evidence of a unique overall pattern for the association between the power-law exponents and age or market capitalization followed by a significant part of the cryptocurrency market. Indeed, the 94% highest density intervals for σ A ([0.87, 0.93] for positive and [0.63, 0.70] for negative returns) and σ C ([0.57, 0.61] for positive and [0.49, 0.52] for negative returns) indicate that the cryptocurrency market is highly heterogeneous regarding the evolution of power-law exponents associated with large price variations (see Supplementary Figure for the distributions of σ A and σ C ). Figure illustrates these heterogeneous behaviors by plotting the posterior probability distributions for the linear coefficients associated with the effects of age (A) and market capitalization (C) for the top 20 digital assets, where cryptocurrencies which are significantly affected (that is, the 94% highest density intervals for A or C do not include the zero) by these quantities are highlighted in boldface.\nEven this small selection of digital currencies already presents a myriad of patterns. First, we observe that the power-law exponents of a few top 20 cryptocurrencies are neither correlated with age nor market capitalization. 
That is the case of Shiba Inu (SHIB, rank 13) and Dai (DAI, rank 11) for both positive and negative returns, UNUS SED LEO (LEO, rank 18) and Polkadot (DOT, rank 12) for the positive returns, and USDCoin (USDC, rank 4) and Solana (SOL, rank 9) for negative returns.\nThere are also cryptocurrencies with exponents positively or negatively correlated only with market capitalization. Examples include Tether (USDT, rank 3) and Dogecoin (DOGE, rank 10), for which the power-law exponents associated with positive returns increase with market capitalization, and Binance USD (BUSD, rank 6), for which power-law exponents associated with positive and negative returns decrease with market capitalization.\nWe also observe cryptocurrencies for which age and market capitalization simultaneously affect the power-law exponents. Polygon (MATIC, rank 14) is an example where the power-law exponents associated with positive returns tend to increase with age and decrease with market capitalization. Finally, there are also cryptocurrencies with power-law exponents only associated with age.\nThat is the case of Bitcoin (BTC, rank 1), Ethereum (ETH, rank 2), and Cardano (ADA, rank 8), for which the power-law exponents related to positive and negative returns increase with age, but also the case of Uniswap (UNI, rank 19), for which the exponents decrease with age. Figure systematically extends the observations made for the top 20 cryptoassets to all 2140 digital currencies for which we have modeled the changes in the power-law exponents as a function of age and market capitalization.\nFirst, we note that only 10% of cryptocurrencies have power-law exponents not significantly affected by age and market capitalization. The vast majority (90%) displays some relationship with these quantities. However, these associations are as varied as the ones we have observed for the top 20 cryptoassets.\nAbout 52% of cryptocurrencies have power-law exponents simultaneously affected by age and market capitalization. In this group, these quantities simultaneously impact the exponents related to positive and negative returns of 34% of cryptoassets, whereas the remainder is affected only in the positive tail (9%) or only in the negative tail (9%).\nMoving back in the hierarchy, we find that the power-law exponents of 32% of cryptocurrencies are affected only by age while a much minor fraction (6%) is affected only by market capitalization. Within the group only affected by age, we observe that the effects are slightly more frequent only on the exponents related to negative returns (12%), compared to cases where effects are restricted only to positive returns (10%) or simultaneously affect both tails (10%).\nFinally, within the minor group only affected by market capitalization, we note that associations more frequently involve only exponents related to negative returns (3%) compared to the other two cases (2% only positive returns and 1% for both positive and negative returns). Beyond the previous discussion about whether positive or negative returns are simultaneously or individually affected by age and market capitalization, we have also categorized the direction of the trend imposed by these two quantities on the power-law exponents.\nBlue rectangles in Fig. represent the fraction of relationships for which increasing age or market capitalization (or both) is associated with a raise in the power-law exponents. 
About 28% of all cryptocurrencies exhibit this pattern in which large price variations are expected to occur less frequently as they grow and age.\nConversely, the red rectangles in Fig. depict the fraction of relationships for which increasing age or market capitalization (or both) is associated with a reduction in the power-law exponents. This case comprises about 25% of all cryptocurrencies for which large price variations are likely to become more frequent as they grow in market capitalization and age.\nStill, the majority of associations represented by green rectangles refer to the case where the effects of age and market capitalization point in different directions (e.g. exponents increasing with age while decreasing with market capitalization). About 36% of cryptocurrencies fit this condition which in turn contributes to consolidating the cumbersome hierarchical structure of patterns displayed by cryptocurrencies regarding the dynamics of large price variations.\nThis complex picture is not much different when considering only cryptocurrencies in the top 200 market capitalization rank (Supplementary Figure ). However, we do observe an increased prevalence of patterns characterized by exponents that rise with age and market capitalization (37%), suggesting that large price variations are becoming less frequent among the top 200 cryptocurrencies than in the overall market.\n). Each of the previous three levels is further classified regarding whether both positive and negative returns are simultaneously affected or whether the effect involves only positive or only negative returns. Finally, the former levels are classified regarding whether the power-law exponents increase, decrease or have a mixed trend with the predictive variables.\nOverall, 36% of the associations are classified as mixed trends (green rectangles), 28% are increasing trends (blue rectangles), and 26% are decreasing trends (red rectangles). We have studied the distributions of large price variations of a significant part of the digital assets that currently comprise the entirety of the cryptocurrency market.\nUnlike previous work, we have estimated these distributions for entire historical price records of each digital currency, and we have identified the patterns under which the return distributions change as cryptoassets age and grow in market capitalization. Similarly to conventional financial assets , our findings show that the return distributions of the vast majority of cryptoassets have tails that are described well by power-law functions along their entire history.\nThe typical power-law exponents of cryptocurrencies (α ∼ 3) are, however, significantly smaller than those reported for conventional assets (α ∼ 4) . This feature corroborates the widespread belief that cryptoassets are indeed considerably more risky for investments than stocks or other more traditional financial assets.\nIndeed, we have found that about half of the cryptocurrencies in our analysis do not have a characteristic scale for price variations, and are thus prone to much higher price variations than those typically observed in stock markets. 
On the upside, we have also identified an asymmetry in the power-law exponents for positive and negative returns in about 2/3 of all considered cryptocurrencies, such that these exponents are smaller for positive than they are for negative returns.\nThis means that sizable positive price variations have generally been more likely to occur than equally sizable negative price variations, which in turn may also reflect the recent overall expansion of the cryptocurrency market. Using a hierarchical Bayesian linear model, we have also simultaneously investigated the overall market characteristics and asset-specific tendencies regarding the effects of age and market capitalization on the power-law exponents.\nWe have found that the cryptocurrency market is highly heterogeneous regarding the trends exhibited by each cryptocurrency; however, only a small fraction of cryptocurrencies (10%) have power-law exponents neither correlated with age nor market capitalization. These associations have been mostly ignored by the current literature and are probably related to the still-early developmental stage of the cryptocurrency market as a whole.\nOverall, 36% of cryptocurrencies present trends that do not systematically contribute to increasing or decreasing their power-law exponents as they age and grow in market capitalization. On the other hand, for 26% of cryptocurrencies, aging and growing market capitalization are both associated with a reduction in their power-law exponents, thus contributing to the rise in the frequency of large price variations in their dynamics.\nOnly about 28% of cryptocurrencies present trends in which the power-law exponents increase with age and market capitalization, favoring thus large price variations to become less likely. These results somehow juxtapose with findings about the increasing informational efficiency of the cryptocurrency market .\nIn fact, if on the one hand the cryptocurrency market is becoming more informationally efficient, then on the other our findings indicate that there is no clear trend toward decreasing the risks of sizable variations in the prices of most considered cryptoassets. In other words, risk and efficiency thus appear to be moving towards different directions in the cryptocurrency market.\nTo conclude, we hope that our findings will contribute significantly to the better understanding of the dynamics of large price variations in the cryptocurrency market as a whole, and not just for a small subset of selected digital assets, which is especially relevant due to the diminishing concentration of market capitalization among the top digital currencies, and also because of the considerable impact these new assets may have in our increasingly digital economy.\nOur results are based on time series of the daily closing prices (in USD) for all cryptoassets listed on CoinMar-ketCap (coinmarketcap.com) as of 25 July 2022 [see Supplementary Figure (a) for a visualization of the increasing number cryptoassets listed on CoinMarketCap since 2013]. These time series were automatically gathered using the cryptoCMD Python package and other information such as the tags associated with each cryptoasset were obtained via the CoinMarketCap API .\nIn addition, we have also obtained the daily market capitalization time series (in USD) from all cryptoassets which had this information available at the time. Earliest records available from CoinMarketCap date from 29 April 2013 and the latest records used in our analysis correspond to 25 July 2022. 
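A sketch of this data-gathering step with the cryptoCMD package mentioned above. The CmcScraper interface shown here follows the package's documented usage as best recalled, so treat the exact class and method names as assumptions to be checked against the package documentation.

import numpy as np
from cryptocmd import CmcScraper   # cryptoCMD package; interface assumed, check its docs

def daily_log_returns(symbol="BTC"):
    scraper = CmcScraper(symbol)            # full daily history from CoinMarketCap
    df = scraper.get_dataframe()            # expected to contain "Date" and "Close" columns
    df = df.sort_values("Date")
    prices = df["Close"].to_numpy(dtype=float)
    return np.diff(np.log(prices))          # r_t = ln(x_t / x_{t-1})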
Out of 9943 cryptocurrencies, we have restricted our analysis to the 7111 with at least 200 price-return observations.\nThe median length of these time series is 446 observations [see the distribution of series lengths in Supplementary Figure ]. We have estimated the power-law behavior of the return distributions by applying the Clauset-Shalizi-Newman method to the return time series r_t. In particular, we have sampled each of these time series using an expanding time window that starts at the hundredth observation and grows in weekly steps (seven data points per step).\nFor each position of the expanding time window, we have separated the positive returns from the negative ones and applied the Clauset-Shalizi-Newman method to each set. This approach consists of obtaining the maximum likelihood estimate for the power-law exponent, α = 1 + n / (∑_{t=1}^{n} ln(r_t / r_min)), where r_min is the lower bound of the power-law regime and n is the number of (positive or negative) return observations in the power-law regime for a given position of the expanding time window.\nThe value of r_min is estimated from the data by minimizing the Kolmogorov-Smirnov statistic between the empirical distribution and the power-law model. The Clauset-Shalizi-Newman method yields an unbiased and consistent estimator, in the sense that as the sample grows indefinitely, the estimated power-law exponent converges in distribution to the actual value.\nMoreover, we have used the implementation available in the powerlaw Python package. In addition to obtaining the power-law exponents, we have also verified the adequacy of the power-law hypothesis using the procedure originally proposed by Clauset et al. as adapted by Preis et al. This procedure consists of generating synthetic samples under the power-law hypothesis with the same properties as the empirical data under analysis (that is, the same length and parameters α and r_min), fitting the simulated data with the power-law model via the Clauset-Shalizi-Newman method, and calculating the Kolmogorov-Smirnov statistic (κ_syn) between the distributions obtained from the simulated samples and the fitted power-law model.\nNext, the values of κ_syn are compared to the Kolmogorov-Smirnov statistic calculated between the empirical data and the power-law model (κ). Finally, a p-value is defined as the fraction of times for which κ_syn > κ. We have used one thousand synthetic samples for each position of the expanding time window and the more conservative 90% confidence level (instead of the more lenient and commonly used 95% confidence level), such that the power-law hypothesis is rejected whenever p-value ≤ 0.1.\nWe have estimated the effects of age and market capitalization on the power-law exponents associated with the positive or negative returns of a given cryptocurrency using the linear model where α_t represents the power-law exponent, log c_t is the logarithm of the market capitalization, and y_t is the age (in years) of the cryptocurrency at the t-th observation.\nMoreover, K is the intercept of the association, while C and A are linear coefficients quantifying the effects of market capitalization and age, respectively. Finally, N(µ, σ) stands for the normal distribution with mean µ and standard deviation σ, such that the parameter ε accounts for the unobserved determinants in the dynamics of the power-law exponents.\nWe have framed this problem using the hierarchical Bayesian approach, such that each power-law exponent α_t is nested within a cryptocurrency whose model parameters are treated as random variables, normally distributed with parameters that are themselves random variables. Mathematically, for each cryptocurrency, we have K ∼ N(µ_K, σ_K), C ∼ N(µ_C, σ_C), and A ∼ N(µ_A, σ_A), where µ_K, σ_K, µ_C, σ_C, µ_A, and σ_A are hyperparameters. These hyperparameters are assumed to be distributed according to distributions that quantify the overall impact of age and market capitalization on the cryptocurrency market as a whole. We have performed this Bayesian regression for exponents related to positive and negative returns separately, and used noninformative prior and hyperprior distributions in order not to bias the posterior estimation.\nSpecifically, we have considered uniform and inverse-gamma hyperpriors, together with ε ∼ U(0, 10²), where U(a, b) stands for the uniform distribution on the interval [a, b] and Inv-Γ(θ, γ) represents the inverse gamma distribution with shape and scale parameters θ and γ, respectively. For the numerical implementation, we have relied on the PyMC Python package and sampled the posterior distributions via the gradient-based Hamiltonian Monte Carlo No-U-Turn sampler (NUTS).\nWe have run four parallel chains with 2500 iterations each (1000 burn-in samples) to allow good mixing, and estimated the Gelman-Rubin convergence statistic (R-hat) to ensure the convergence of the sampling approach (R-hat was always close to one). In addition, we have also verified that models describing the power-law exponents as a function of only age (C → 0 in Eq. 3) or only market capitalization (A → 0 in Eq. 3) yield significantly worse descriptions of our data, as quantified by the Widely Applicable Information Criterion (WAIC) and the Pareto Smoothed Importance Sampling Leave-One-Out cross-validation (PSIS-LOO) (see Supplementary Table ).
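As a rough, self-contained sketch of the estimation step described above, the first snippet below fits the positive-return tail with the powerlaw package over an expanding window; the start at the hundredth observation and the weekly step of seven points follow the text, while the function name, the minimum-sample guard, and the handling of the input as a NumPy array are our own choices. The negative tail could be handled the same way, presumably by passing the magnitudes of the negative returns.

```python
import numpy as np
import powerlaw

def expanding_tail_exponents(returns, start=100, step=7, min_points=50):
    """Clauset-Shalizi-Newman fits of the positive-return tail for each expanding window."""
    estimates = []
    for end in range(start, len(returns) + 1, step):
        window = returns[:end]
        positive = window[window > 0]          # keep only positive returns for this tail
        if len(positive) < min_points:         # skip windows with too few tail observations
            continue
        fit = powerlaw.Fit(positive)           # estimates alpha and the lower bound r_min (xmin)
        estimates.append((end, fit.power_law.alpha, fit.power_law.xmin))
    return estimates
```

For the hierarchical regression, the second snippet sketches how the structure described above might be written in PyMC. The nesting of per-coin coefficients under market-level hyperparameters, the U(0, 10²) prior on ε, the uniform and inverse-gamma hyperpriors, and the four chains with 1000 tuning plus 1500 retained draws follow the text; the specific bounds and shape/scale values are placeholders, not the paper's exact choices.

```python
import pymc as pm

def build_hierarchical_model(alpha_obs, log_cap, age, coin_idx, n_coins):
    """Hierarchical regression of power-law exponents on log market capitalization and age.

    coin_idx is an integer array mapping each observation to its cryptocurrency."""
    with pm.Model() as model:
        # Market-level hyperpriors (placeholder bounds and shapes, noninformative in spirit).
        mu_K = pm.Uniform("mu_K", lower=-10, upper=10)
        mu_C = pm.Uniform("mu_C", lower=-10, upper=10)
        mu_A = pm.Uniform("mu_A", lower=-10, upper=10)
        sigma_K = pm.InverseGamma("sigma_K", alpha=1, beta=1)
        sigma_C = pm.InverseGamma("sigma_C", alpha=1, beta=1)
        sigma_A = pm.InverseGamma("sigma_A", alpha=1, beta=1)

        # Per-cryptocurrency intercepts and slopes nested under the market level.
        K = pm.Normal("K", mu=mu_K, sigma=sigma_K, shape=n_coins)
        C = pm.Normal("C", mu=mu_C, sigma=sigma_C, shape=n_coins)
        A = pm.Normal("A", mu=mu_A, sigma=sigma_A, shape=n_coins)

        eps = pm.Uniform("eps", lower=0, upper=100)   # U(0, 10^2) as stated in the text

        mu = K[coin_idx] + C[coin_idx] * log_cap + A[coin_idx] * age
        pm.Normal("alpha", mu=mu, sigma=eps, observed=alpha_obs)

        # Four chains, 1000 tuning (burn-in) and 1500 retained draws each; NUTS is the default sampler.
        trace = pm.sample(draws=1500, tune=1000, chains=4)
    return model, trace
```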
[Figure caption fragment:] […] r_90 estimated from positive returns (r⁺_90) is larger than r_90 estimated from negative returns (r⁻_90).\nThis fraction is calculated only for weeks in which the power-law hypothesis is not rejected for both tails. The percentage of cryptoassets for which r⁺_90 > r⁻_90 is shown in the panels. The first column of panels depicts the results when considering data from all cryptocurrencies, while the second and third columns present the results for the top 2000 and top 200 cryptocurrencies by market capitalization, respectively.\n[Figure caption fragment:] Sampling issues refer to missing data and problems caused by prices of cryptoassets decreasing to zero. We note that these distributions barely change when considering only cryptocurrencies without any sampling issue. Indeed, the distributions in this figure are not significantly distinguishable from their counterparts in Fig. (two-sample Kolmogorov-Smirnov test, p > 0.05).\n[Figure caption fragment:] Each of the previous three levels is further classified regarding whether both positive and negative returns are simultaneously affected or whether the effect involves only positive or only negative returns.
Finally, the former levels are classified regarding whether the power-law exponents increase, decrease or have a mixed trend with the predictive variables.\nOverall, 35% of the associations are classified as mixed trends (green rectangles), 37% are increasing trends (blue rectangles), and 18% are decreasing trends (red rectangles).", "answers": ["Power-law functions."], "length": 6766, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "b16c38bf627891a3bf60ee57f9f3c2f5730f4ea3a0f44b0e"} {"input": "What is the main topic of the text?", "context": "Ann's Mega Dub: 12/19/10 - 12/26/10\nGot o have a penis to be an expert\nThursday on NPR's Fresh Air, Terry Gross wanted to talk film and music. Since women don't know a thing about either and aren't interested in either, Terry had to find men who were 'experts.'This is C.I.'s \" Iraq snapshot Friday, December 24, 2010. Chaos and violence continue, Nouri's incomplete Cabinet continues to receive criticism, a father offers an 'excuse' for killing his own daughter, and more.Marci Stone (US Headlines Examiner) reports, \"Friday afternoon, Santa is currently in Baghdad, Iraq and on his next stop is Moscow, Russia, according to the 2010 NORAD Santa Tracker. The North American Aerospace Defense Command (NORAD) has been tracking Santa as he makes his annual journey throughout the world.\" Gerald Skoning (Palm Beach Post) quotes Santa saying, \"We send our special wishes for peace and goodwill to all. That includes the people of Iraq, Afghanistan, Iran and North Korea.\" Please note that this is Santa's seventh trip to Iraq since the start of the Iraq War and, as usual, his journey was known in advance. No waiting until he hit the ground to announce he was going to Iraq -- the way George The Bully Boy Bush had to and the way US President Barack Obama still has to. In the lead up to Santa's yearly visit, many 'authorities' in Iraq began insisting that Christmas couldn't be celebrated publicly, that even Santa was banned. Gabriel Gatehouse (BBC News) quotes Shemmi Hanna stating, \"I wasn't hurt but I wish that I had been killed. I wish I had become a martyr for this church, but God kept me alive for my daughters.\" Shemmi Hanna was in Our Lady of Salvation Church in Baghdad when it was assaulted October 31st and she lost her husband, her son, her daughter-in-law and her infant grandson in the attack. The October 31st attack marks the latest wave of violence targeting Iraqi Christians. The violence has led many to flee to northern Iraq (KRG) or to other countries. Zvi Bar'el (Haaretz) notes, \"This week the Iraqi legislature discussed the Christians' situation and passed a resolution in principle to help families who fled. However, the parliament does not know where the Christians are, how many are still in Iraq, in their homes, and how many have found asylum in Iraqi Kurdistan.\" John Leland (New York Times) reports:The congregants on Friday night were fewer than 100, in a sanctuary built for four or five times as many. But they were determined. This year, even more than in the past, Iraqi's dwindling Christian minority had reasons to stay home for Christmas. \"Yes, we are threatened, but we will not stop praying,\" the Rev. Meyassr al-Qaspotros told the Christmas Eve crowd at the Sacred Church of Jesus, a Chaldean Catholic church. 
\"We do not want to leave the country because we will leave an empty space.\" Raheem Salman (Los Angeles Times) reports, \"Rimon Metti's family will go to Christian services on Christmas Day, but his relatives will be praying for their own survival and wondering whether this is their last holiday season in Baghdad. If they had any grounds for optimism about the future of their faith in Iraq, it vanished this year amid repeated attacks on fellow believers.\" Shahsank Bengali (McClatchy Newspapers) adds, \"Nearly two months after a shocking assault by Islamist militants, Our Lady of Salvation Catholic Church will commemorate Christmas quietly, with daytime mass and prayers for the dead, under security fit more for a prison than a house of worship. It is the same at Christian churches across Baghdad and northern Iraq, where what's left of one of the world's oldest Christian communities prepares to mark perhaps the most somber Christmas since the start of the Iraq war.\"Meanwhile Taylor Luck (Jordan Times) reports on Iraqi refugees in Jordan:Although the calendar will say December 25, for Theresa, Saturday will not be Christmas. There will be no cinnamon klecha cooling on the dining room table, no outdoor ceramic nativity scene, no readings of hymns with relatives. The 63-year-old Iraqi woman has even refused to put up Christmas lights in the crowded two-room Amman hotel apartment she has called home since fleeing Baghdad last month.\"There is no holiday spirit. All we have is fear,\" she said.This holiday will instead mark another year without news from her 46-year-old son, who was kidnapped outside Baghdad in late 2006.From Turkey, Sebnem Arsu (New York Times -- link has text and video) notes the increase in Iraq refugees to the country since October 31st and quotes Father Emlek stating, \"I've never seen as many people coming here as I have in the last few weeks. They also go to Lebanon, Jordan and Syria but it seems that Turkey is the most popular despite the fact that they do not speak the language.\" Jeff Karoub (AP) reports on the small number of Iraqi refugees who have made it to the US and how some of them \"struggle with insomnia, depression and anxiety.\"One group in Iraq who can openly celebrate Christmas are US service members who elect to. Barbara Surk (AP) reports that tomorrow Chief Warrant Officer Archie Morgan will celebrate his fourth Christmas in Iraq and Captain Diana Crane is celebrating her second Christmas in Iraq: \"Crane was among several dozen troops attending a Christmas Eve mass in a chapel in Camp Victory, an American military base just outside Baghdad.\" Marc Hansen (Des Moines Reigster) speaks with six service members from Iowa who are stationed in Iraq. Sgt 1st Class Dennis Crosser tells Hansen, \"I certainly understand from reading the paper what's going on in Afghanistan and the attention definitely needs to be on the troops there. But everyone serving here in Operation New Dawn appreciates a little bit of attention as we finish this up.\"Today Jiang Yu, China's Foreign Minister, issued the following statement, \"We welcome and congratulate Iraq on forming a new government. We hope that the Iraqi Government unite all its people, stabilize the security situation, accelerate economic reconstruction and make new progress in building its country.\" James Cogan (WSWS) reports:US State Department official Philip Crowley declared on Wednesday that Washington had not \"dictated the terms of the government\". 
In reality, constant American pressure was applied to Maliki, Allawi, Kurdish leaders and other prominent Iraqi politicians throughout the entire nine-month process to form a cabinet. The US intervention included numerous personal phone calls and visits to Baghdad by both President Barack Obama and Vice President Joe Biden.The key objective of the Obama administration has been to ensure that the next Iraqi government will \"request\" a long-term military partnership with the US when the current Status of Forces Agreement (SOFA) expires at the end of 2011. The SOFA is the legal basis upon which some 50,000 American troops remain in Iraq, operating from large strategic air bases such as Balad and Tallil and Al Asad. US imperialism spent billions of dollars establishing these advanced bases as part of its wider strategic plans and has no intention of abandoning them.Cogan's only the second person to include the SOFA in his report. Some are impressed with the 'feat' of taking nearly ten months to form a government, stringing the country along for ten months while no decisions could go through. The editorial board of the Washington Post, for example, was full of praise yesterday. Today they're joined by Iran's Ambassador to Iraq, Hassan Danaiifar. The Tehran Times reports that Danaiifar was full of praise today hailing the \"positive and final step which ended the 10-month political limbo in Iraq.\" However, Danaiifar was less pie-in-the-sky than the Post editorial board because he can foresee future problems as evidenced by his statement, \"We may witness the emergence of some problems after one and half of a year -- for example, some ministers may be impeached.\" Of course, there are already many clouds on the horizon, even if Iranian diplomats and Post editorial boards can't suss them out. For example, Ben Bendig (Epoch Times) noted the objection of Iraq's female politicians to Nouri al-Maliki's decision to nominate only one woman (so far) to his Cabinet: \"Some 50 female lawmakers went to the country's top leadership, the United Nations and the Arab League to voice their concern and desire for increased representation.\" BNO notes that protest and also that a group of Iraqi MPs are alleging that Iraqiya bought seats in the Cabinet via money exchanged in Jordan. UPI adds, \"Maliki, a Shiite who has a long history of working with Tehran, has named himself acting minister of defense, interior and national security, three most powerful and sensitive posts in the government he is stitching together. Although Maliki appears to be bending over backward to accommodate rivals among Iraq's Shiite majority as well as minority Sunnis and Kurds in his administration in a spirit of reconciliation, he is unlikely to relinquish those ministries that dominate the security sector.\" DPA reports, \"Sheikh Abdel-Mahdi al-Karbalaei, a confident of influential Shiite spiritual leader Ayatollah Ali al-Sistani, said that the new cabinet is 'below the standards' Iraqi citizens had hoped for and suggested it could prove to be weaker than the previous government.\" Ranj Alaaldin (Guardian) also spots clouds on the horizon:Lasting peace and stability depends on resolving outstanding disputes with the Kurds on oil, revenue-sharing, security and the disputed territories (Kirkuk in particular). 
The Kurds, rather than exploiting their kingmaker position to take a stronger proportion of ministries in Baghdad (they are taking just one major portfolio – the foreign ministry), are instead banking on guarantees from Maliki to implement their list of 19 demands that includes resolving the above disputes in their favour.They may have been naive, though. With their historical and federalist partners, the Islamic supreme council of Iraq in decline, the Kurds may be isolated in the new government – a government dominated by the nationalistic and centrist characteristics of the INM, the Sadrists and indeed State of Law.Maliki may, therefore, turn out to be unable to grant concessions even if he wanted to and could use Osama Nujayfi, the new ultra-nationalist speaker of parliament and Kurdish foe, to absorb the Kurdish criticism and insulate himself from any attacks.AP reports that Iraqi police sought out a 19-year-old woman because of rumors that she was working with al Qaida in Mesopotamia only to be greeted with the news that her father allegedly killed her and the father showed the police where he buried the woman . . . last month. The story begs for more than it offers. The most obvious observation is: what does it say that a woman's allegedly killed by her father and no one says a word for over a month? After that, it should probably be noted that there are many men in Iraq killing women who, no doubt, would love to also be able to pin the blame on al Qaida. In other violence, Reuters notes a house bombing in Haswa which claimed the life of Mohammed al-Karrafi, \"his wife, two sons and a nephew\" -- as well as injuring four more people, and a Samarra roadside bombing which claimed the lives of 2 police officers. DPA notes it was two homes bombed in Haswa and that the Samarra roadside bombing also injured four Iraqi soldiers. Jomana Karadsheh (CNN) reports, \"Another policeman was wounded in Baghdad Friday night when a roadside bomb detonated by a police patrol, an Interior Ministry official told CNN.\"And we'll close with this from Peace Mom Cindy Sheehan's latest Al Jazeera column:The recent repeal of the US military policy of \"Don't ask, don't tell\" is far from being the human rights advancement some are touting it to be. I find it intellectually dishonest, in fact, illogical on any level to associate human rights with any military, let alone one that is currently dehumanising two populations as well as numerous other victims of it's clandestine \"security\" policies.Placing this major contention aside, the enactment of the bill might be an institutional step forward in the fight for \"equality\"; however institutions rarely reflect reality.Do we really think that the US congress vote to repeal the act and Obama signing the bill is going to stop the current systemic harassment of gays in the military?While I am a staunch advocate for equality of marriage and same-sex partnership, I cannot - as a peace activist - rejoice in the fact that now homosexuals can openly serve next to heterosexuals in one of the least socially responsible organisations that currently exists on earth: The US military.It is an organisation tainted with a history of intolerance towards anyone who isn't a Caucasian male from the Mid-West. Even then I'm sure plenty fitting that description have faced the terror and torment enshrined into an institution that transforms the pride and enthusiasm of youth into a narrow zeal for dominating power relations.And we'll close with this from Francis A. 
Boyle's \"2011: Prospects for Humanity?\" (Global Research):Historically, this latest eruption of American militarism at the start of the 21st Century is akin to that of America opening the 20th Century by means of the U.S.-instigated Spanish-American War in 1898. Then the Republican administration of President William McKinley stole their colonial empire from Spain in Cuba, Puerto Rico, Guam, and the Philippines; inflicted a near genocidal war against the Filipino people; while at the same time illegally annexing the Kingdom of Hawaii and subjecting the Native Hawaiian people (who call themselves the Kanaka Maoli) to near genocidal conditions. Additionally, McKinley's military and colonial expansion into the Pacific was also designed to secure America's economic exploitation of China pursuant to the euphemistic rubric of the \"open door\" policy. But over the next four decades America's aggressive presence, policies, and practices in the \"Pacific\" would ineluctably pave the way for Japan's attack at Pearl Harbor on Dec. 7, 194l, and thus America's precipitation into the ongoing Second World War. Today a century later the serial imperial aggressions launched and menaced by the Republican Bush Jr. administration and now the Democratic Obama administration are threatening to set off World War III. By shamelessly exploiting the terrible tragedy of 11 September 2001, the Bush Jr. administration set forth to steal a hydrocarbon empire from the Muslim states and peoples living in Central Asia and the Persian Gulf under the bogus pretexts of (1) fighting a war against international terrorism; and/or (2) eliminating weapons of mass destruction; and/or (3) the promotion of democracy; and/or (4) self-styled \"humanitarian intervention.\" Only this time the geopolitical stakes are infinitely greater than they were a century ago: control and domination of two-thirds of the world's hydrocarbon resources and thus the very fundament and energizer of the global economic system – oil and gas. The Bush Jr./ Obama administrations have already targeted the remaining hydrocarbon reserves of Africa, Latin America, and Southeast Asia for further conquest or domination, together with the strategic choke-points at sea and on land required for their transportation. In this regard, the Bush Jr. administration announced the establishment of the U.S. Pentagon's Africa Command (AFRICOM) in order to better control, dominate, and exploit both the natural resources and the variegated peoples of the continent of Africa, the very cradle of our human species. This current bout of U.S. imperialism is what Hans Morgenthau denominated \"unlimited imperialism\" in his seminal work Politics Among Nations (4th ed. 1968, at 52-53): The outstanding historic examples of unlimited imperialism are the expansionist policies of Alexander the Great, Rome, the Arabs in the seventh and eighth centuries, Napoleon I, and Hitler. They all have in common an urge toward expansion which knows no rational limits, feeds on its own successes and, if not stopped by a superior force, will go on to the confines of the political world. This urge will not be satisfied so long as there remains anywhere a possible object of domination--a politically organized group of men which by its very independence challenges the conqueror's lust for power. 
It is, as we shall see, exactly the lack of moderation, the aspiration to conquer all that lends itself to conquest, characteristic of unlimited imperialism, which in the past has been the undoing of the imperialistic policies of this kind…. On 10 November 1979 I visited with Hans Morgenthau at his home in Manhattan. It proved to be our last conversation before he died on 19 July 1980. Given his weakened physical but not mental condition and his serious heart problem, at the end of our necessarily abbreviated one-hour meeting I purposefully asked him what he thought about the future of international relations. iraqbbc newsgabriel gatehousethe new york timesjohn lelandhaaretzzvi bar'elthe jordan timestaylor luckthe associated pressjeff karoubthe los angeles timesraheem salmancnnjomana karadsheh\nTerry thinks she's a man\nYesterday on NPR's Fresh Air the hour went to a male TV critic. It's always a man with Terry. Always. And somebody tell her that a snotty, snooty TV critic really doesn't make for good programming.This is C.I.'s \"Iraq snapshot:\" Thursday, December 23, 2010. Chaos and violence continue, Iraqi women make clear their displeasure over the Cabinet make up, Daniel Ellsberg and Veterans for Peace get some recognition, and more. Last Thursday a protest held outside the White House. One of the organizers was Veterans for Peace and Pentagon Papers whistle blower Daniel Ellsberg participated and spoke. Juana Bordas (Washington Post) advocates for both of them to be named persons of the year: Veterans for Peace and Daniel Ellsberg should be this year's person of the year because of their courage and bravery to stand up for all of us who believe that \"war is not the answer.\" Moreover in a time of economic recession, the war machine is bankrupting our country. As John Amidon, a Marine Corps veteran from Albany asked at the White House protest, \"How is the war economy working for you?\"While unemployment rates hover near 10 percent, there is no doubt that the U.S. economy and quality of life is faltering. Worldwide we are 14th in education, 37th in the World Health Organization's ranking on medical systems, and 23rd in the U.N. Environmental Sustainability Index on being most livable and greenest benefits. There is one place we take the undeniable world lead. The US military spending accounts for a whopping 46.5 percent of world military spending--the next ten countries combined come in at only 20.7 percent. Linda Pershing (Truthout) reports, \"Responding to a call from the leaders of Stop These Wars(1) - a new coalition of Veterans for Peace and other activists - participants came together in a large-scale performance of civil resistance. A group of veterans under the leadership of Veterans for Peace members Tarak Kauff, Will Covert and Elaine Brower, mother of a Marine who has served three tours of duty in Iraq, sponsored the event with the explicit purpose of putting their bodies on the line. Many participants were Vietnam War veterans; others ranged from Iraq and Afghanistan war veterans in their 20s and 30s to World War II vets in their 80s and older. They were predominately white; men outnumbered women by at least three to one. After a short rally in Lafayette Park, they formed a single-file procession, walking across Pennsylvania Avenue to the solemn beat of a drum. 
As they reached the police barricade (erected to prevent them from chaining themselves to the gate, a plan they announced on their web site), the activists stood shoulder to shoulder, their bodies forming a human link across the 'picture postcard' tableau in front of the White House.\" Maria Chutchian (Arlington Advocate) quotes, participant Nate Goldshlag (Vietnam veteran) stating, \"\"There was a silent, single file march around Lafayette Park to a drum beat. Then we went in front of the White House,. There were barricades set up in front of white house fence. So when we got there, we jumped over barricades and were able to get right next to the White House fence.\" Participant Linda LeTendre (Daily Gazette) reports: At the end of the rally, before the silent, solemn procession to the White House fence, in honor of those killed in Iraq and Afghan wars of lies and deceptions, the VFP played taps and folded an American flag that had been left behind at a recent funeral for the veteran of one of those wars. Two attendees in full dress uniform held and folded the flag. I had the image of all of the people who stood along the roads and bridges when the bodies of the two local men, Benjamin Osborn and David Miller, were returned to the Capital District. I thought if all of those people were here now or spoke out against war these two fine young men might still be with us.I was blessed enough to be held in custody with one of those in uniform; a wonderful young man who had to move from his hometown in Georgia because no one understood why as a veteran he was against these wars. Even his family did not understand. (He remains in my prayers.)Our plan was to attach ourselves to the White House fence until President Obama came out and talked to us or until we were arrested and dragged away. I don't have to tell you how it ended.Mr. Ellsberg was one of 139 people arrested at that action. We've noted the protest in pretty much every snapshot since last Thursday. If something else comes out that's worth noting on the protest, we'll include it. We will not include people who don't have their facts and it's really sad when they link to, for example, Guardian articles and the links don't even back them up. It's real sad, for example, when they're trashing Hillary (big strong men that they are) and ripping her apart and yet Barack? \"Obama's inaccurate statements\"??? What the hell is that? You're inferring he lied, say so. Don't be such a little chicken s**t. It's especially embarrasing when you're grandstanding on 'truth.' Especially when you're the little s**t that clogged up the public e-mail account here in the summer of 2008 whining that you were holding Barack to a standard, then admitting that you weren't, then whining that if you did people would be mean to you. Oh, that's sooooooo sad. Someone might say something bad about you. The horror. You must suffer more than all the people in Iraq and Afghanistan combined. While the action took place in DC, actions also took place in other cities. We've already noted NYC's action this week, Doug Kaufmann (Party for Socialism & Liberation) reports on the Los Angeles action: Despite heavy rain, over 100 people gathered in Los Angeles on the corner of Hollywood and Highland to demand an end to the U.S. wars on Afghanistan and Iraq. 
People came from as far as Riverside to protest, braving what Southern California media outlets have dubbed the \"storm of the decade.\" The demonstration, initiated and led by the ANSWER Coalition, broke the routine of holiday shopping and garnered support from activists and even passers by, who joined in chanting \"Money for jobs and education -- not for war and occupation!\" and \"Occupation is a crime -- Iraq, Afghanistan, Palestine!\" Protesters held banners reading, \"U.S./NATO Out of Afghanistan!\" and \"Yes to jobs, housing and education -- no to war, racism and occupation!\"Speakers at the demonstration included representatives of Korean Americans for Peace, ANSWER Coalition, KmB Pro-People Youth, Veterans for Peace, Party for Socialism and Liberation and National Lawyers Guild. Tuesday, Nouri al-Maliki managed to put away the political stalemate thanks to a lot of Scotch -- tape to hold the deal together and booze to keep your eyes so crossed you don't question how someone can claim to have formed a Cabinet when they've left over ten positions to be filled at a later date. One group speaking out is women. Bushra Juhi and Qassmi Abdul-Zahra (AP) report, \"Iraq's female lawmakers are furious that only one member of the country's new Cabinet is a woman and are demanding better representation in a government that otherwise has been praised by the international community for bringing together the country's religious sects and political parties.\" As noted Tuesday, though represenation in Parliament is addressed in Iraq's Constitution, there is nothing to address women serving in the Cabinet. Aseel Kami (Reuters) notes one of the most damning aspects of Nouri's chosen men -- a man is heaing the Ministry of Women's Affairs. Iraqiya's spokesperson Maysoon Damluji states, \"There are really good women who could do wel . . . they cannot be neglected and marginalized.\" Al-Amal's Hanaa Edwar states, \"They call it a national (power) sharing government. So where is the sharing? Do they want to take us back to the era of the harem? Do they want to take us back to the dark ages, when women were used only for pleasure.\" Deborah Amos (NPR's All Things Considered) reports that a struggle is going on between secular impulses and fundamentalist ones. Gallery owner Qasim Sabti states, \"We know it's fighting between the religious foolish man and the civilization man. We know we are fighting like Gandhi, and this is a new language in Iraqi life. We have no guns. We do not believe in this kind of fighting.\" Deborah Amos is the author of Eclipse of the Sunnis: Power, Exile, and Upheaval in the Middle East. Meanwhile Nizar Latif (The National) reports that distrust is a common reaction to the new government in Baghdad and quotes high school teacher Hussein Abed Mohammad stating, \"Promises were made that trustworthy, competent people would be ministers this time around, but it looks as if everything has just been divided out according to sectarian itnerests. No attention has been paid to forming a functioning government, it is just a political settlement of vested interests. I'm sure al Maliki will have the same problems in his next four years as he had in the last four years.\" Days away from the ten months mark, Nouri managed to finally end the stalemate. 
Some try to make sense of it and that must have been some office party that the editorial board of the Washington Post is still coming down from judging by \"A good year in Iraq.\" First up, meet the new Iraqi Body Count -- an organization that provides cover for the war and allows supporters of the illegal war to point to it and insist/slur \"Things aren't so bad!\" Sure enough, the editorial board of the Post does just that noting the laughable \"civilian deaths\" count at iCasualities. As we noted -- long, long before we walked away from that crap ass website, they're not doing a civilian count. They're noting how many deaths Reuters reports.", "answers": ["The main topic of the text is Iraq's politics and current situation."], "length": 4468, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "40a85211345155859f3d2f7268da928f516a65ac6f8c6e08"} {"input": "What is the main advantage of a horizontal business model for mobile devices?", "context": "The future of mobile CPUs, part 1: Today’s fork in the road | Ars Technica\n2013 may be a big year for the evolution of smartphones and tablets.\nMobile computing's rise from niche market to the mainstream is among the most significant technological trends in our lifetimes. And to a large extent, it's been driven by the bounty of Moore’s Law—the rule that transistor density doubles every 24 months. Initially, most mobile devices relied on highly specialized hardware to meet stringent power and size budgets. But with so many transistors available, devices inevitably grew general-purpose capabilities. Most likely, that wasn't even the real motivation. The initial desire was probably to reduce costs by creating a more flexible software ecosystem with better re-use and faster time to market. As such, the first smartphones were very much a novelty, and it took many years before the world realized the potential of such devices. Apple played a major role by creating innovative smartphones that consumers craved and quickly adopted.\nTo some extent, this is where we still stand today. Smartphones are still (relatively) expensive and primarily interesting to the developed world. But over the next 10 years, this too will change. As Moore’s Law rolls on, the cost of a low-end smartphone will decline. At some point, the incremental cost will be quite minimal and many feature phones of today will be supplanted by smartphones. A $650 unsubsidized phone is well beyond the reach of most of the world compared to a $20 feature phone, but a $30 to $40 smartphone would naturally be very popular.\nIn this grand progression, 2013 will certainly be a significant milestone for mobile devices, smartphones and beyond. It's likely to be the first year in which tablets out-ship notebooks in the US. And in the coming years, this will lead to a confluence of high-end tablets and ultra-mobile notebooks as the world figures out how these devices co-exist, blend, hybridize, and/or merge.\nAgainst this backdrop, in this two-part series, we'll explore the major trends and evolution for mobile SoCs. More importantly, we'll look to where the major vendors are likely going in the next several years.\nTablet and phone divergence\nWhile phones and tablets are mobile devices that often share a great deal of software, it's becoming increasingly clear the two are very different products. These two markets have started to diverge and will continue doing so over time.\nFrom a technical perspective, smartphones are far more compact and power constrained. 
Smartphone SoCs are limited to around 1W, both by batteries and by thermal dissipation. The raison d’etre of a smartphone is connectivity, so a cellular modem is an absolute necessity. For the cost sensitive-models that make up the vast majority of the market, the modem is integrated into the SoC itself. High-end designs favor discrete modems with a greater power budget instead. The main smartphone OSes today are iOS and Android, though Windows is beginning to make an appearance (perhaps with Linux or BlackBerry on the horizon). Just as importantly, phone vendors like HTC must pass government certification and win the approval of carriers. There is very much a walled-garden aspect, where carriers control which devices can be attached to their networks, and in some cases devices can only be sold through a certain carrier. The business model places consumers quite far removed from the actual hardware.\nIn contrast, tablets are far more akin to the PC both technically and economically. The power budget for tablet SoCs is much greater, up to 4W for a passively cooled device and as high as 7-8W for systems with fans. This alone means there is a much wider range of tablet designs than smartphones. Moreover, the default connectivity for tablets is Wi-Fi rather than a cellular modem. The vast majority of tablets do not have cellular modems, and even fewer customers actually purchase a wireless data plan. As a result, cellular modems are almost always optional discrete components of the platform. The software ecosystem is relatively similar, with Microsoft, Apple, and Google OSes available. Because tablets eschew cellular modems, the time to market is faster, and they are much more commonly sold directly to consumers rather than through carriers. In terms of usage models, tablets are much more PC-like, with reasonable-sized screens that make games and media more attractive.\nLooking forward, these distinctions will likely become more pronounced. Many tablets today use high-end smartphone SoCs, but the difference in power targets and expected performance is quite large. As the markets grow in volume, SoCs will inevitably bifurcate to focus on one market or the other. Even today, Apple is doing so, with the A6 for phones and the larger A6X for tablets. Other vendors may need to wait a few years to have the requisite volume, but eventually the two markets will be clearly separate.\nHorizontal business model evolution\nAnother aspect of the mobile device market that is currently in flux and likely to change in the coming years is the business model for the chip and system vendors. Currently, Apple is the only company truly pursuing a vertically integrated model, where all phones and tablets are based on Apple’s own SoC designs and iOS. The tight integration between hardware and software has been a huge boon for Apple, and it has yielded superb products.\nSamsung is one of the few others companies that takes a vertically integrated approach to phones and tablets, although in truth its strategy seems to be ambivalent on that point. Unlike Apple, Samsung’s SoCs are readily available to third parties, and some Samsung devices, such as the S7562 Galaxy S Duos, use SoCs from competitors. More recently though, there has been a trend of Samsung devices using Samsung SoCs, at least for the premier products. For the moment, Samsung’s approach is best characterized as a hybrid, particularly as the company lacks a bespoke OS.\nThe rest of the major SoC vendors (e.g., Intel, Qualcomm, Nvidia, TI, Mediatek, etc.) 
have stayed pretty far away from actual mobile devices. These companies tend to focus on horizontal business models that avoid competing with customers or suppliers.\nIn the long term, mobile devices are likely to evolve similarly to the PC and favor a horizontal business model. The real advantage is one of flexibility; as costs drop and the market expands, it will be increasingly necessary for vendors like HTC to offer a wide range of phones based on radically different SoCs. While a vertically integrated company like Apple can focus and maintain leadership in a specific (and highly lucrative) niche, it would be very difficult to expand in many growing areas of the market. The differences between an iPhone 6 and a $20 feature phone are tremendous and would be very difficult for a single company to bridge.\nHowever, SoC vendors will attempt to reap the benefits of vertical integration by providing complete reference platforms to OEMs. Conceptually, this is a form of \"optional\" system integration, where the phone vendor or carrier can get the entire platform from the SoC supplier. This has the principal advantages of reducing time to market while also providing a baseline quality and experience for consumers. Currently, this approach has mostly been tested in emerging markets, but it's likely to become more common over time. There is a crucial distinction between reference platforms and vertical integration. Namely, OEMs can always choose to customize a platform to differentiate, and the SoC vendor avoids dealing with consumers directly. Typically, most of the customization is in terms of software on top of a base operating system.\nQuote:Moreover, that will make the transition to a 10nm node even more difficult, as the foundries will have to move from 20nm interconnects to 10nm interconnects and skip a generation.The advances in technology lately allowing components on such a small scale to even be envisioned, much less planned for, are truly amazing.\nOff topic: show\nI present the first generation 'non-technical' rock:\nI don't think your horizontal market development theory is supported by facts. Samsung and Apple are more vertically oriented than their competition, for starters. I know this article is narrowly focused on the hardware, but MS and Intel getting into hardware, Amazon getting into hardware, Google buying Moto, this is all vertical integration. How can you support the idea that this trend will be reversed with no real justification? I'm sure mobile chips will continue to specialize, but I don't think this means what you think it means. Automobile companies started making their own engines and with rare exceptions, never went back to being more horizontal. Same with retail and their store brands. Same with cloud companies and their servers. Same with mobile companies and their OSs. The horizontal market of PCs created by long-lasting standards and loose hegemony is the exception, not the norm.\nWhy wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?\nI'm not so sure about several things:1- Moore's law's relevance. Moore's Law is about ICs. ICs are not as big a part of mobile computers as they are of desktops, even of laptops: screens, batteries, radios are a huge part of tablets' and phones' costs, as opposed to the bare SoC + RAM.2- The tablet vs phone dichotomy. 
For some reason (probably price insensitivity due to subsidies), Phones have a tendency to be more powerful than Tablets, ie phone SoCs are more than good enough for tablets. Since the OS and peripherals are the same, it makes more sense to design and build just one type of SoC, and just disable the phone-modem part of it (even the other radios are still required: BT, Wifi, GPS...), same as Intel disable cache and cores for their entry-level CPUs. Once you're fabbing a SoC, it makes more sense to make more of the same than to setup a separate run of a cut-down SoC on an older process, unless volumes are huge. We might still be getting previous-generation, well amortized SoCs in cheaper tablets, though.3- On the contrary, I see a tablet and phone convergence (the ugly phablet). I'm patiently waiting for the new 6\"+ phones to replace my Nook Color and Galaxy Note 1 with a single device.4- The advantage of diversity ? Software is becoming ever more important than hardware. Multiplying SoCs means multiplying product development costs, making support and updates more difficult... Again, unless volumes are huge, OEMs are probaly better off going the way of the car industry and using modular \"platforms\" housed in different chassis with various screen sizes, keyboards, radios, digitizers...I'm wondering why the \"single device\" trend does not figure in your analysis. Is it stillborn ? Does it have no impact nor dependency on/with SoCs ?\nSamsung has its own bespoke OS: Bada and it is used on an extensive line of devices. I think there are numbers somewhere that it outsold Windows Phone 7 for a time.\ngypsumfantastic wrote:Why wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?First mover advantage.\nSoC? System on a Chip I guess?\nYou're way off on the Moore's Law/cost of smartphones point. The processors used in today's high-end smartphones are already cheap, around $25. And there are less expensive options if you want a lower end product. In fact, the hardware in the whole smartphone is relatively cheap. Analyst's estimate the Z10's materials cost around $160, the iPhone 5 around $140. They're using expensive glass and metals, then there's the battery, memory, etc. which means the processor is a small factor of the cost.And then there's the jump from $140 in materials to the unsubsidized costs. The reason these phones cost $650 is because of the high margins these companies are able to get and the high cost of hardware design and/or software development. But the point is that making the processors 4 times better/cheaper isn't going to change the economics of the smartphone. What will change the economics is commoditized designs and software and cheaper materials all around. Then you'll have a $50 smartphone that's decent.\nLast edited by ggeezz on Wed Feb 13, 2013 9:17 am\nbigterp wrote:SoC? System on a Chip I guess?Yup.\ngypsumfantastic wrote:Why wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?Money and momentum, the x86 market is a huge money maker for Intel so it is able to recoup its huge investments for advanced foundries.\nQuote:Currently, the only products using 3D integration are FPGAs from Xilinx,Doesn't Sony use it in the PS Vita? I thought I read somewhere that they had the CPU, main memory (2 dies) and video memory, so 4 dies in total, sitting on top of each other all on the same chip.\nrenoX wrote:gypsumfantastic wrote:Why wouldn't the foundries be able close the process gap with Intel? 
Is it a matter of money? Scale?Money and momentum, the x86 market is a huge money maker for Intel so it is able to recoup its huge investments for advanced foundries.Exactly and I would clarify that it's all about margins, the difference between what it costs to make a chip and what it sells for. The margins for desktop and server processors is huge because a) the whole product is expensive so $200 to $1000 for the chip is acceptable, and b) Intel has huge advantages in that space and little competition.So Intel can afford to do the R&D to stay ahead of the curve and keep their position. When your smartphone chip sells for $25 you can't do the R&D to leapfrog a company that sells Xeons for $1000 and Core i5's for $200.\nI am happy to see Kanter here at Ars, I like his writing and he maintains Real World Tech, where Linus Torvalds often shows up to comment on CPU arch and other interesting topics.\nggeezz wrote:renoX wrote:gypsumfantastic wrote:Why wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?Money and momentum, the x86 market is a huge money maker for Intel so it is able to recoup its huge investments for advanced foundries.Exactly and I would clarify that it's all about margins, the difference between what it costs to make a chip and what it sells for. The margins for desktop and server processors is huge because a) the whole product is expensive so $200 to $1000 for the chip is acceptable, and b) Intel has huge advantages in that space and little competition.So Intel can afford to do the R&D to stay ahead of the curve and keep their position. When your smartphone chip sells for $25 you can't do the R&D to leapfrog a company that sells Xeons for $1000 and Core i5's for $200.Spot on.Intel are able to piggyback other development efforts off the highly lucrative mainstream x86 market which generates the huge sums of money to fund their amazing fab technology.The question for the future is how the economics will stack up when overall device costs fall significantly and there is a big downward pressure on SoC prices. In that situation, can Intel still justify bringing their A-game to a market where products are essentially commoditised and you have processors selling for a only a few dollars each?The lesson from their previous involvement in the DRAM market is that they probably won't want to be involved because there isn't enough money to be made to justify manufacturing phone SoCs on a cutting edge, or near cutting edge process. In that scenario, Intel may not totally abandon the market but they might just stick to manufacturing SoCs on nodes that are a step or two behind the state of the art.\nsolomonrex wrote:I don't think your horizontal market development theory is supported by facts. Samsung and Apple are more vertically oriented than their competition, for starters. I know this article is narrowly focused on the hardware, but MS and Intel getting into hardware, Amazon getting into hardware, Google buying Moto, this is all vertical integration. How can you support the idea that this trend will be reversed with no real justification? I'm sure mobile chips will continue to specialize, but I don't think this means what you think it means. Automobile companies started making their own engines and with rare exceptions, never went back to being more horizontal. Same with retail and their store brands. Same with cloud companies and their servers. Same with mobile companies and their OSs. 
The horizontal market of PCs created by long-lasting standards and loose hegemony is the exception, not the norm.Yea, each year Amazon, MS, Apple and Google look more and more the same.\nIntel cannot abandon the phone/tablet market. Desktop/laptop sales are stagnating/decreasing and phones/tablets are on the rise. This trend is only going to increase going forward.But you're right, they're going to have use their fabs that are a step or two behind the cutting the edge. But they're going to have to up their game in the tablet space to even be able to do that.\ngypsumfantastic wrote:Why wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?Intel's called Chipzilla for a reason up\nLagrange wrote:The question for the future is how the economics will stack up when overall device costs fall significantly and there is a big downward pressure on SoC prices. In that situation, can Intel still justify bringing their A-game to a market where products are essentially commoditised and you have processors selling for a only a few dollars each?The lesson from their previous involvement in the DRAM market is that they probably won't want to be involved because there isn't enough money to be made to justify manufacturing phone SoCs on a cutting edge, or near cutting edge process. In that scenario, Intel may not totally abandon the market but they might just stick to manufacturing SoCs on nodes that are a step or two behind the state of the art.I think the processing is a bigger advantage than many realize. If Intel can stay ahead in process design - which this article seems to indicate - they should have a major advantage. All else being equal a 14nm chip should be significantly faster and more efficient than the same chip at 22nm. Add in the fact that yields increase geometrically - you can fit a lot more 14nm chips on a given wafer size vs 22nm (or 32nm for the other manufacturers.) and you have a very appealing proposition. And then add in the fact that Intel actually has a pretty good graphics stack and IP. It's not a sure thing by any means, but I suspect ARM may have just prodded a sleeping giant.edit: Also worth noting, Intel, TSMC, and Samsung are the only manufacturers who are building out 450nm wafers. This will increase yields dramatically. Of course Samsung and TSMC will build ARM out, but it definitely puts quite a bit of pressure on all other manufacturers. As the article mentions Intel and Samsung are the only ones who control production top to bottom, and Samsung must share some of the benefits with ARM.\nAs someone who has worked in the semiconductor industry for longer than contemporary fanboys have existed, I'm getting a bit fed-up seeing many of these articles which rewrite history with analyses distorted by the lens of the contemporary Apple or Samsung fanboy.The mobile industry moved to horizontal integration a long time ago. Better indicators than these odd contemporary obsessions with the relatively non-innovative Samsung and Apple where when Motorola spun out its semiconductor division as Freescale, Nokia stopped making it's own custom designs with TI and ST, and Ericsson spun out it's Ericsson Mobile Platforms division and formed ST-Ericsson with ST.The true innovation in the mobile market was done a decade or more ago mainly by Moto/Nokia/Ericsson/TI/Qualcomm, and Samsung and Apple had little to do with it. Under the hood most stuff has been on a simple linear evolutionary path by the SoC providers since then. 
The phone manufacturers have then mainly been simply sticking these off-the-shelf SoCs (and their included low-level software stacks) in a box, a job made all the easier with the SoC manufacturer collaboration providing the bulk of the work for the AOSP.\nLast edited by paul5ra on Wed Feb 13, 2013 11:06 am\nintroiboad wrote:I am happy to see Kanter here at Ars, I like his writing and he maintains Real World Tech, where Linus Torvalds often shows up to comment on CPU arch and other interesting topics.Indeed. Most tech writing in this area is atrocious. This piece is one of the few well informed articles I've read in a long time.\nggeezz wrote:Intel cannot abandon the phone/tablet market. Desktop/laptop sales are stagnating/decreasing and phones/tablets are on the rise. This trend is only going to increase going forward.But you're right, they're going to have use their fabs that are a step or two behind the cutting the edge. But they're going to have to up their game in the tablet space to even be able to do that.The word you're looking for is Haswell, as far as I know.\nMabsark\nggeezz wrote:Intel cannot abandon the phone/tablet market. Desktop/laptop sales are stagnating/decreasing and phones/tablets are on the rise. This trend is only going to increase going forward.But you're right, they're going to have use their fabs that are a step or two behind the cutting the edge. But they're going to have to up their game in the tablet space to even be able to do that.Actually, that trend will not simply keep increasing going forward. The reason desktop/laptop sales are stagnating/decreasing is due to the fact that most people already have one and therefore don't need to buy another one. The exact same thing will happen with tablets as well. Sales are increasing now because people without tablets are buying them. When most people already own a tablet, they won't be buying a new one every year and therefore sales will stagnate/decrease.The PC market is saturated and in a couple of years, the tablet market will be saturated too. Basically, in order to increase sales in a saturated market, you need to increase the population growth or decrease the longevity of the product.\ngypsumfantastic wrote:Why wouldn't the foundries be able close the process gap with Intel? Is it a matter of money? Scale?Probably a mix of a lot of things. One big thing was during this recession, Intel was the ONLY fab company that didn't scale back their R&D. That alone gave Intel a large advantage.Intel has almost always been ahead. One of the reasons could be that Intel works with much higher margins than many of the commodity companies like Samsung and TSMC.Outside of the P4 flop and some of the monopolistic abuses, Intel has typically been selling to high end customers that are willing to pay a premium for \"the best\".Intel has a large benefit of having a relatively \"good name\" when it comes to CPUs, so they can effectively charge a brand-name premium.I'm sure there are other reasons, and probably better reasons, but these are the main ones that I think of.\nMabsark wrote:Actually, that trend will not simply keep increasing going forward. The reason desktop/laptop sales are stagnating/decreasing is due to the fact that most people already have one and therefore don't need to buy another one. The exact same thing will happen with tablets as well. Sales are increasing now because people without tablets are buying them. 
When most people already own a tablet, they won't be buying a new one every year and therefore sales will stagnate/decrease.The PC market is saturated and in a couple of years, the tablet market will be saturated too. Basically, in order to increase sales in a saturated market, you need to increase the population growth or decrease the longevity of the product.That's true as long as most people are still buying both a tablet and a laptop when each needs to be replaced. I think the assumption is that, as you say, the tablet market will saturate, with people just replacing existing ones, but the desktop/laptop market could decrease much farther than that, if most people stop replacing them at all. I'm not sure of the likelihood of that, but I think that's where this idea comes from.\nggeezz wrote:Intel cannot abandon the phone/tablet market. Desktop/laptop sales are stagnating/decreasing and phones/tablets are on the rise. This trend is only going to increase going forward.But you're right, they're going to have use their fabs that are a step or two behind the cutting the edge. But they're going to have to up their game in the tablet space to even be able to do that.The upcoming Haswell chip is showing to consume 1/3 the power of IvyBridge at peak, consumes 1/20th the power at idle, all the while maintaining Identical or better performance.This chip should actually compete with ARM CPUs on both power/performance and idle.I am expecting a large war.\nApple once again is dictating the performance in the mobile industry. Nice to see others being able to keep the pace, as well.\npaul5ra wrote:As someone who has worked in the semiconductor industry for longer than contemporary fanboys have existed, I'm getting a bit fed-up seeing many of these articles which rewrite history with analyses distorted by the lens of the contemporary Apple or Samsung fanboy.The mobile industry moved to horizontal integration a long time ago. Better indicators than these odd contemporary obsessions with the relatively non-innovative Samsung and Apple where when Motorola spun out its semiconductor division as Freescale, Nokia stopped making it's own custom designs with TI and ST, and Ericsson spun out it's Ericsson Mobile Platforms division and formed ST-Ericsson with ST.The true innovation in the mobile market was done a decade or more ago mainly by Moto/Nokia/Ericsson/TI/Qualcomm, and Samsung and Apple had little to do with it. Under the hood most stuff has been on a simple evolutionary path by the SoC providers since then.Yeah, and most of the innovation in the automobile industry came about before Henry Ford came into the business. Doesn't change the fact that cars would probably have been an asterisk in the history books under \"toys for rich people\" if it weren't for him.The same applies to to mobile computing for Apple, Samsung, et al.\nSheldonRoss wrote:Lagrange wrote:The question for the future is how the economics will stack up when overall device costs fall significantly and there is a big downward pressure on SoC prices. In that situation, can Intel still justify bringing their A-game to a market where products are essentially commoditised and you have processors selling for a only a few dollars each?The lesson from their previous involvement in the DRAM market is that they probably won't want to be involved because there isn't enough money to be made to justify manufacturing phone SoCs on a cutting edge, or near cutting edge process. 
In that scenario, Intel may not totally abandon the market but they might just stick to manufacturing SoCs on nodes that are a step or two behind the state of the art.I think the processing is a bigger advantage than many realize. If Intel can stay ahead in process design - which this article seems to indicate - they should have a major advantage. All else being equal a 14nm chip should be significantly faster and more efficient than the same chip at 22nm. Add in the fact that yields increase geometrically - you can fit a lot more 14nm chips on a given wafer size vs 22nm (or 32nm for the other manufacturers.) and you have a very appealing proposition. And then add in the fact that Intel actually has a pretty good graphics stack and IP. My point was that Intel might have a one or two process advantage over the rest of the industry at the cutting edge but that doesn't mean that they can afford to manufacture on those processes for very low margin parts. If the SoC market becomes increasingly commoditised, there isn't going to be the money to justify making them in a state of the art fab.Remember that one of the big selling points of Itanium was that it would make use of process advantages that were effectively paid for by the mainstream x86 market. That didn't quite work out in practice and Itanium processors were often well behind Xeons in process technology.\npaul5ra wrote:As someone who has worked in the semiconductor industry for longer than contemporary fanboys have existed, I'm getting a bit fed-up seeing many of these articles which rewrite history with analyses distorted by the lens of the contemporary Apple or Samsung fanboy.The mobile industry moved to horizontal integration a long time ago. Better indicators than these odd contemporary obsessions with the relatively non-innovative Samsung and Apple where when Motorola spun out its semiconductor division as Freescale, Nokia stopped making it's own custom designs with TI and ST, and Ericsson spun out it's Ericsson Mobile Platforms division and formed ST-Ericsson with ST.The true innovation in the mobile market was done a decade or more ago mainly by Moto/Nokia/Ericsson/TI/Qualcomm, and Samsung and Apple had little to do with it. Under the hood most stuff has been on a simple linear evolutionary path by the SoC providers since then. The phone manufacturers have then mainly been simply sticking these off-the-shelf SoCs (and their included low-level software stacks) in a box, a job made all the easier with the SoC manufacturer collaboration providing the bulk of the work for the AOSP.Just goes to show that people who have worked in an industry for a long time don't always understand what that industry is doing.You haven't been working in it long enough to seem to know that it was Acorn and Apple that invented the mobile ARM CPU in the first place. All those companies you've mentioned have just been working off Acorn and Apple's pioneering work. Now, Apple is back at it again, very successfully, and all the companies you mentioned that produce chips with ARM IP in them are licensing them from the company Acorn and Apple formed—ARM.\nLast edited by melgross on Wed Feb 13, 2013 11:13 am\nMark Havel wrote:ggeezz wrote:Intel cannot abandon the phone/tablet market. Desktop/laptop sales are stagnating/decreasing and phones/tablets are on the rise. This trend is only going to increase going forward.But you're right, they're going to have use their fabs that are a step or two behind the cutting the edge. 
But they're going to have to up their game in the tablet space to even be able to do that.The word you're looking for is Haswell, as far as I know.If tablets move into the $100-200 range, is there going to be room for Haswell?So long as there is a higher-end tablet market, then Haswell will be able to shine, but it's going to be a much more powerful and costly part than the sort of ARM based hardware that often runs tablets. If we see a race to the bottom where price is the dominant motivator behind purchases, then a high performance SoC will struggle to make its mark.\nmelgross wrote:paul5ra wrote:As someone who has worked in the semiconductor industry for longer than contemporary fanboys have existed, I'm getting a bit fed-up seeing many of these articles which rewrite history with analyses distorted by the lens of the contemporary Apple or Samsung fanboy.The mobile industry moved to horizontal integration a long time ago. Better indicators than these odd contemporary obsessions with the relatively non-innovative Samsung and Apple where when Motorola spun out its semiconductor division as Freescale, Nokia stopped making it's own custom designs with TI and ST, and Ericsson spun out it's Ericsson Mobile Platforms division and formed ST-Ericsson with ST.The true innovation in the mobile market was done a decade or more ago mainly by Moto/Nokia/Ericsson/TI/Qualcomm, and Samsung and Apple had little to do with it. Under the hood most stuff has been on a simple linear evolutionary path by the SoC providers since then. The phone manufacturers have then mainly been simply sticking these off-the-shelf SoCs (and their included low-level software stacks) in a box, a job made all the easier with the SoC manufacturer collaboration providing the bulk of the work for the AOSP.Just goes to show that people who have worked in an industry for a long time don't always understand what that industry is doing.You haven't been working in it long enough to seem to know that it was Acorn and Apple that invented the mobile ARM CPU in the first place. All those companies you've mentioned have just been working off Acorn and Apple's pioneering work. Now, Apple is back at it again, very successfully, and all the companies you mentioned that produce chips with ARM IP in them are licensing them from the company Acorn and Apple formed—ARM.Of course I realise ARM IP has indeed been a major driving factor too (though only one if several architectures before ARM became dominant), though I see ARM's influence on the mobile industry as having nothing to do with modern day Apple and only one small piece of the puzzle. My point is that the hard electrical engineering, mathematics, DSP, semiconductor physics/chemistry, RF engineering, analogue design, CAD etc. that make modern telecommunications possible has very little to do with the fashion companies who consumers (and unfortunately much of the tech media) associate with it and give the credit (though in this respect Samsung does deserve a bit more credit for their work on NAND flash and displays). The industry simply would not exist TODAY without the overwhelming horizontal integration that already dominates.\nQuote:In the long term, mobile devices are likely to evolve similarly to the PC and favor a horizontal business model. The real advantage is one of flexibility; as costs drop and the market expands, it will be increasingly necessary for vendors like HTC to offer a wide range of phones based on radically different SoCs. 
You don't mention in the article that each SoC necessarily requires a bit of parallel dev work unlike the PC. In the PC world there is a standard BIOS and HW architecture that allows for pluggable designs. On a highly integrated SoC this is untrue. HTC suffers because it has to support radically different SoCs, their drivers and boot loaders, etc. Quote:While a vertically integrated company like Apple can focus and maintain leadership in a specific (and highly lucrative) niche, it would be very difficult to expand in many growing areas of the market. The differences between an iPhone 6 and a $20 feature phone are tremendous and would be very difficult for a single company to bridge.It's only difficult because Apple chooses to ignore that market, not because they can't. If they can release a $99 Apple TV, they can surely cobble together a $20 feature phone if they chose to eschew 8GB of NAND, BT, WiFi, a specialized dock connector, LTE, and their specialized processors. In other words, build the equivalent of an iPod shuffle with a horrible screen and no OS to speak of.\npaul5ra wrote:melgross wrote:paul5ra wrote:As someone who has worked in the semiconductor industry for longer than contemporary fanboys have existed, I'm getting a bit fed-up seeing many of these articles which rewrite history with analyses distorted by the lens of the contemporary Apple or Samsung fanboy.The mobile industry moved to horizontal integration a long time ago. Better indicators than these odd contemporary obsessions with the relatively non-innovative Samsung and Apple where when Motorola spun out its semiconductor division as Freescale, Nokia stopped making it's own custom designs with TI and ST, and Ericsson spun out it's Ericsson Mobile Platforms division and formed ST-Ericsson with ST.The true innovation in the mobile market was done a decade or more ago mainly by Moto/Nokia/Ericsson/TI/Qualcomm, and Samsung and Apple had little to do with it. Under the hood most stuff has been on a simple linear evolutionary path by the SoC providers since then. The phone manufacturers have then mainly been simply sticking these off-the-shelf SoCs (and their included low-level software stacks) in a box, a job made all the easier with the SoC manufacturer collaboration providing the bulk of the work for the AOSP.Just goes to show that people who have worked in an industry for a long time don't always understand what that industry is doing.You haven't been working in it long enough to seem to know that it was Acorn and Apple that invented the mobile ARM CPU in the first place. All those companies you've mentioned have just been working off Acorn and Apple's pioneering work. Now, Apple is back at it again, very successfully, and all the companies you mentioned that produce chips with ARM IP in them are licensing them from the company Acorn and Apple formed—ARM.Of course I realise ARM IP has indeed been a major driving factor too (though only one if several architectures before ARM became dominant), though I see ARM's influence on the mobile industry as having nothing to do with modern day Apple and only one piece of the puzzle. My point is that the hard electrical engineering, mathematics, DSP, semiconductor physics/chemistry, RF engineering, analogue design,etc. 
that make modern telecommunications possible has very little to do with the fashion companies who consumers (and unfortunately much of the tech media) associate with it and give the credit (though in this respect Samsung does deserve a bit more credit for their work on NAND flash and displays). The industry simply would not exist TODAY without the overwhelming horizontal integration that already dominates.Yes, the efforts of these companies getting cellular communications standardized were immense. And the technology matured. And then they didn't do much with it. It took some youngin's to look at the problem fresh and add the UI that makes today's smartphones work. As we have all seen, the moment your technology has matured is the moment you are screwed, because someone else now has the opportunity to look at it as a black box and make something new. Each of those manufacturers knew that smartphones would eventually be awesome, but none of them had the UI and software design to make a truly breakout product. Imagine if Motorola had been smart enough to buy the Android guys instead of Google. Instead, Google bought a bunch of patents on that cellular black box to try to defend its platform.And when you think about it, which consumes more man years of engineering effort per year at this point.... iterating that cellular black box or developing the OS, services and apps that power today's smartphones?\nIntel had better decide that they are competing in this space \"for real\", or they are screwed. They've already let the Atom languish for five years, during which ARM has completely caught up in performance.Just like Tim Cook said, if you don't cannibalize your own markets someone else will do it for you.Whether Intel will embrace that concept in time remains to be seen. Personally, I hope they don't; if Intel transforms into a chipless Fab company (like TSMC) everyone benefits.\nI still think Samsung has the advantage long term because they have both the SOC and the memory products. As mentioned in the article, TSV's (Through Silicon Via's) are going to be quite a disruption. Today, people normally stack an LPDDR2 package on top of their SOC package (POP or Package On Package). Within the LPDDR2 package, you could have a stack of DRAM die, typically with wire bonding connecting the die within the package.Once you move to TSV's, you can have a LOT more connections between the SOC and its DRAM's. While this is being standardized through JEDEC (http://www.jedec.org/category/technolog ... a/3d-ics-0), Samsung has all the pieces in house to do whatever they want. You could see a 512 bit or higher bus from the SOC to the memory. The trick is that the memory and the SOC need to line up with each other when you stack them. This gives Samsung an inherent advantage.This isn't just going to impact mobile either. Take a look at that JEDEC link. It also lists High Bandwidth Memory (HBM). This is a 1024 bit bus that provides 128GBytes/s to 256GBytes/s of bandwidth to a stack of up to 8 DRAM's. Here is your processor that includes 8-16 cores and 4GBytes of really, really fast DRAM... No DIMMs required. How many of them do you want in your server rack?If I was Intel or Apple, I would be thinking seriously about making some investments in Micron to guarantee they make some compelling DRAM's to integrate with their SOC's and processors...
otherwise Samsung is going to blow them out of the water on bandwidth.\nGreat_Scott wrote:Intel had better decide that they are competing in this space \"for real\", or they are screwed. They've already let the Atom languish for five years, during which ARM has completely caught up in performance.Just like Tim Cook said, if you don't cannibalize your own markets someone else will do it for you.Whether Intel will embrace that concept in time remains to be seen. Personally, I hope they don't; if Intel transforms into a chipless Fab company (like TSMC) everyone benefits.It's true that Atom has stood still for too long, but honestly it's pretty amazing how Atom is still competitive with current ARM chips. The Z2760 is even 32nm vs 28nm of the latest Krait and A15 chips.But that's all changing with Atom moving to the tick tock schedule this year. It wouldn't even surprise me to see Apple move to Intel chips for IOS.And I don't see how Intel moving to a chipless Fab company would help everyone. It certainly wouldn't help Intel.\nMabsark wrote:ggeezz wrote:Intel cannot abandon the phone/tablet market. Desktop/laptop sales are stagnating/decreasing and phones/tablets are on the rise. This trend is only going to increase going forward.But you're right, they're going to have use their fabs that are a step or two behind the cutting the edge. But they're going to have to up their game in the tablet space to even be able to do that.Actually, that trend will not simply keep increasing going forward. The reason desktop/laptop sales are stagnating/decreasing is due to the fact that most people already have one and therefore don't need to buy another one. The exact same thing will happen with tablets as well. Sales are increasing now because people without tablets are buying them. When most people already own a tablet, they won't be buying a new one every year and therefore sales will stagnate/decrease.The PC market is saturated and in a couple of years, the tablet market will be saturated too. Basically, in order to increase sales in a saturated market, you need to increase the population growth or decrease the longevity of the product.Yes and no. I'm not sure the tablet market will saturate in a \"couple of years.\" It may be more like 5 years. But that's a quibble.Here's the real issue. Right now Apple wants you to own an iPhone AND iPad AND Macbook AND iWatch AND Apple TV. Microsoft, OTOH, is making the Surface so that you could ditch your laptop and just use a Surface. Not everyone, but some people.If 5 years from now, we're in a world where a significant number of people use a Surface-type device instead of a laptop, then the PC market is going to contract significantly. Maybe some of the tablet-like devices will use moderately expensive Intel chips, but some of them are going to use cheaper chips.\nGravyGraphics wrote:I still think Samsung has the advantage long term because they have both the SOC and the memory products. As mentioned in the article, TSV's (Through Silicon Via's) are going to be quite a disruption. Today, people normally stack an LPDDR2 package on top of their SOC package (POP or Package On Package). Within the LPDDR2 package, you could have a stack of DRAM die typically with wire bonding connecting the die within the package.Once you more to TSV's, you can have a LOT more connections between the SOC and its DRAM's. While this is being standardized through JEDEC (http://www.jedec.org/category/technolog ... a/3d-ics-0), Samsung has all the pieces in house to do whatever they want. 
You could see a 512 bit or higher bus from the SOC to the memory. The trick is that the memory and the SOC need to line up with each other when you stack them. This gives Samsung an inherent advantage.This isn't just going to impact mobile either. Take a look at that JEDEC link. It also lists High Bandwidth Memory (HBM). This is a 1024 bit bus that provides 128GBytes/s to 256GBytes/s of bandwidth to a stack of up to 8 DRAM's. Here is your processor that includes 8-16 cores and 4GBytes of really, really, fast DRAM... No DIMMs required. How many of them do you want in your server rack?If I was Intel or Apple, I would be thinking seriously about making some investments in Micron to guarantee they make some compelling DRAM's to integrate with their SOC's and processors... otherwise Samsung is going to blow them out of the water on bandwidth.Why not AMD? Last I checked they still made memory...and processors/GPUs.", "answers": ["Flexibility."], "length": 7565, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "04b588ff2dea15f4a9c4fdbaabc55aaad1ba3446114d6741"} {"input": "What are the three synthetic types of vitamin K?", "context": "Vitamin K - Wikipedia\n(Redirected from Vitamin k)\nThis article needs more medical references for verification or relies too heavily on primary sources. Please review the contents of the article and add the appropriate references if you can. Unsourced or poorly sourced material may be challenged and removed. (November 2015)\nThis article is about the family of vitamers. For vitamin K1 the form usually used as a supplement, see Phytomenadione.\nVitamin K structures. MK-4 and MK-7 are both subtypes of K2.\nVitamin K deficiency, Warfarin overdose\nVitamin K is a group of structurally similar, fat-soluble vitamins the human body requires for complete synthesis of certain proteins that are prerequisites for blood coagulation and which the body also needs for controlling binding of calcium in bones and other tissues. The vitamin K-related modification of the proteins allows them to bind calcium ions, which they cannot do otherwise. Without vitamin K, blood coagulation is seriously impaired, and uncontrolled bleeding occurs. Low levels of vitamin K also weaken bones and promote calcification of arteries and other soft tissues[citation needed].\nChemically, the vitamin K family comprises 2-methyl-1,4-naphthoquinone (3-) derivatives. Vitamin K includes two natural vitamers: vitamin K1 and vitamin K2.[1] Vitamin K2, in turn, consists of a number of related chemical subtypes, with differing lengths of carbon side chains made of isoprenoid groups of atoms.\nVitamin K1, also known as phylloquinone, is made by plants, and is found in highest amounts in green leafy vegetables because it is directly involved in photosynthesis. It may be thought of as the plant form of vitamin K. It is active as a vitamin in animals and performs the classic functions of vitamin K, including its activity in the production of blood-clotting proteins. Animals may also convert it to vitamin K2.\nBacteria in the gut flora can also convert K1 into vitamin K2. In addition, bacteria typically lengthen the isoprenoid side chain of vitamin K2 to produce a range of vitamin K2 forms, most notably the MK-7 to MK-11 homologues of vitamin K2. All forms of K2 other than MK-4 can only be produced by bacteria, which use these forms in anaerobic respiration. 
The MK-7 and other bacterially derived forms of vitamin K2 exhibit vitamin K activity in animals, but MK-7's extra utility over MK-4, if any, is unclear and is a matter of investigation.\nThree synthetic types of vitamin K are known: vitamins K3, K4, and K5. Although the natural K1 and all K2 homologues and synthetic K4 and K5 have proven nontoxic, the synthetic form K3 (menadione) has shown toxicity.[2]\n1.2 Cardiovascular health\n1.4 Coumarin poisoning\n4.1 Conversion of vitamin K1 to vitamin K2\n4.2 Vitamin K2\n6 Absorption and dietary need\n7 Dietary reference intake\n10 Biochemistry\n10.1 Function in animals\n10.2 Gamma-carboxyglutamate proteins\n10.3 Methods of assessment\n10.4 Function in bacteria\n11 Injection in newborns\n11.3 Controversy\nA review of 2014 concluded that there is positive evidence that monotherapy using MK-4, one of the forms of Vitamin K2, reduces fracture incidence in post-menopausal women with osteoporosis, and suggested further research on the combined use of MK-4 with bisphosphonates. In contrast, an earlier review article of 2013 concluded that there is no good evidence that vitamin K supplementation helps prevent osteoporosis or fractures in postmenopausal women.[3]\nA Cochrane systematic review of 2006 suggested that supplementation with Vitamin K1 and with MK4 reduces bone loss; in particular, a strong effect of MK-4 on incident fractures among Japanese patients was emphasized.[4]\nA review article of 2016 suggested to consider, as one of several measures for bone health, increasing the intake of foods rich in vitamins K1 and K2.[5]\nCardiovascular health[edit]\nAdequate intake of vitamin K is associated with the inhibition of arterial calcification and stiffening,[6] but there have been few interventional studies and no good evidence that vitamin K supplementation is of any benefit in the primary prevention of cardiovascular disease.[7]\nOne 10-year population study, the Rotterdam Study, did show a clear and significant inverse relationship between the highest intake levels of menaquinone (mainly MK-4 from eggs and meat, and MK-8 and MK-9 from cheese) and cardiovascular disease and all-cause mortality in older men and women.[8]\nVitamin K has been promoted in supplement form with claims it can slow tumor growth; there is however no good medical evidence that supports such claims.[9]\nCoumarin poisoning[edit]\nVitamin K is part of the suggested treatment regime for poisoning by rodenticide (coumarin poisoning).[10]\nAlthough allergic reaction from supplementation is possible, no known toxicity is associated with high doses of the phylloquinone (vitamin K1) or menaquinone (vitamin K2) forms of vitamin K, so no tolerable upper intake level (UL) has been set.[11]\nBlood clotting (coagulation) studies in humans using 45 mg per day of vitamin K2 (as MK-4)[12] and even up to 135 mg per day (45 mg three times daily) of K2 (as MK-4),[13] showed no increase in blood clot risk. Even doses in rats as high as 250 mg/kg, body weight did not alter the tendency for blood-clot formation to occur.[14]\nUnlike the safe natural forms of vitamin K1 and vitamin K2 and their various isomers, a synthetic form of vitamin K, vitamin K3 (menadione), is demonstrably toxic at high levels. The U.S. 
FDA has banned this form from over-the-counter sale in the United States because large doses have been shown to cause allergic reactions, hemolytic anemia, and cytotoxicity in liver cells.[2]\nPhylloquinone (K1)[15][16] or menaquinone (K2) are capable of reversing the anticoagulant activity of the anticoagulant warfarin (tradename Coumadin). Warfarin works by blocking recycling of vitamin K, so that the body and tissues have lower levels of active vitamin K, and thus a deficiency of vitamin K.\nSupplemental vitamin K (for which oral dosing is often more active than injectable dosing in human adults) reverses the vitamin K deficiency caused by warfarin, and therefore reduces the intended anticoagulant action of warfarin and related drugs.[17] Sometimes small amounts of vitamin K are given orally to patients taking warfarin so that the action of the drug is more predictable.[17] The proper anticoagulant action of the drug is a function of vitamin K intake and drug dose, and due to differing absorption must be individualized for each patient.[citation needed] The action of warfarin and vitamin K both require two to five days after dosing to have maximum effect, and neither warfarin or vitamin K shows much effect in the first 24 hours after they are given.[18]\nThe newer anticoagulants dabigatran and rivaroxaban have different mechanisms of action that do not interact with vitamin K, and may be taken with supplemental vitamin K.[19][20]\nVitamin K2 (menaquinone). In menaquinone, the side chain is composed of a varying number of isoprenoid residues. The most common number of these residues is four, since animal enzymes normally produce menaquinone-4 from plant phylloquinone.\nA sample of phytomenadione for injection, also called phylloquinone\nThe three synthetic forms of vitamin K are vitamins K3 (menadione), K4, and K5, which are used in many areas, including the pet food industry (vitamin K3) and to inhibit fungal growth (vitamin K5).[21]\nConversion of vitamin K1 to vitamin K2[edit]\nVitamin K1 (phylloquinone) – both forms of the vitamin contain a functional naphthoquinone ring and an aliphatic side chain. Phylloquinone has a phytyl side chain.\nThe MK-4 form of vitamin K2 is produced by conversion of vitamin K1 in the testes, pancreas, and arterial walls.[22] While major questions still surround the biochemical pathway for this transformation, the conversion is not dependent on gut bacteria, as it occurs in germ-free rats[23][24] and in parenterally-administered K1 in rats.[25][26] In fact, tissues that accumulate high amounts of MK-4 have a remarkable capacity to convert up to 90% of the available K1 into MK-4.[27][28] There is evidence that the conversion proceeds by removal of the phytyl tail of K1 to produce menadione as an intermediate, which is then condensed with an activated geranylgeranyl moiety (see also prenylation) to produce vitamin K2 in the MK-4 (menatetrione) form.[29]\nVitamin K2[edit]\nMain article: Vitamin K2\nVitamin K2 (menaquinone) includes several subtypes. The two subtypes most studied are menaquinone-4 (menatetrenone, MK-4) and menaquinone-7 (MK-7).\nVitamin K1, the precursor of most vitamin K in nature, is a stereoisomer of phylloquinone, an important chemical in green plants, where it functions as an electron acceptor in photosystem I during photosynthesis. 
For this reason, vitamin K1 is found in large quantities in the photosynthetic tissues of plants (green leaves, and dark green leafy vegetables such as romaine lettuce, kale and spinach), but it occurs in far smaller quantities in other plant tissues (roots, fruits, etc.). Iceberg lettuce contains relatively little. The function of phylloquinone in plants appears to have no resemblance to its later metabolic and biochemical function (as \"vitamin K\") in animals, where it performs a completely different biochemical reaction.\nVitamin K (in animals) is involved in the carboxylation of certain glutamate residues in proteins to form gamma-carboxyglutamate (Gla) residues. The modified residues are often (but not always) situated within specific protein domains called Gla domains. Gla residues are usually involved in binding calcium, and are essential for the biological activity of all known Gla proteins.[30]\nAt this time[update], 17 human proteins with Gla domains have been discovered, and they play key roles in the regulation of three physiological processes:\nBlood coagulation: prothrombin (factor II), factors VII, IX, and X, and proteins C, S, and Z[31]\nBone metabolism: osteocalcin, also called bone Gla protein (BGP), matrix Gla protein (MGP),[32] periostin,[33] and the recently discovered Gla-rich protein (GRP).[34][35]\nVascular biology: growth arrest-specific protein 6 (Gas6)[36]\nUnknown function: proline-rich γ-carboxyglutamyl proteins (PRGPs) 1 and 2, and transmembrane γ-carboxy glutamyl proteins (TMGs) 3 and 4.[37]\nLike other lipid-soluble vitamins (A, D and E), vitamin K is stored in the fatty tissue of the human body.\nAbsorption and dietary need[edit]\nPrevious theory held that dietary deficiency is extremely rare unless the small intestine was heavily damaged, resulting in malabsorption of the molecule. Another at-risk group for deficiency were those subject to decreased production of K2 by normal intestinal microbiota, as seen in broad spectrum antibiotic use.[38] Taking broad-spectrum antibiotics can reduce vitamin K production in the gut by nearly 74% in people compared with those not taking these antibiotics.[39] Diets low in vitamin K also decrease the body's vitamin K concentration.[40] Those with chronic kidney disease are at risk for vitamin K deficiency, as well as vitamin D deficiency, and particularly those with the apoE4 genotype.[41] Additionally, in the elderly there is a reduction in vitamin K2 production.[42]\nThe National Academy of Medicine (NAM) updated an estimate of what constitutes an adequate intake (AI) for vitamin K in 2001. The NAM does not distinguish between K1 and K2 – both are counted as vitamin K. At that time there was not sufficient evidence to set the more rigorous estimated average requirement (EAR) or recommended dietary allowance (RDA) given for most of the essential vitamins and minerals. The current daily AIs for vitamin K for adult women and men are 90 μg and 120 μg respectively. The AI for pregnancy and lactation is 90 μg. For infants up to 12 months the AI is 2–2.5 μg, and for children aged 1 to 18 years the AI increases with age from 30 to 75 μg. As for safety, the FNB also sets tolerable upper intake levels (known as ULs) for vitamins and minerals when evidence is sufficient. In the case of vitamin K no UL is set, as evidence for adverse effects is not sufficient. 
Collectively EARs, RDAs, AIs and ULs are referred to as dietary reference intakes.[43] The European Food Safety Authority reviewed the same safety question and did not set an UL.[44]\nFor U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percentage of daily value (%DV). For vitamin K labeling purposes the daily value was 80 μg, but as of May 2016 it has been revised upwards to 120 μg. A table of the pre-change adult daily values is provided at reference daily intake. Food and supplement companies have until 28 July 2018 to comply with the change.\nSee also: Vitamin K2 § Dietary sources\nK1 (μg)[45]\nKale, cooked\nCollards, cooked\nCollards, raw\nSwiss chard, cooked\nSwiss chard, raw\nTurnip greens, raw\nRomaine lettuce, raw\nTable from \"Important information to know when you are taking: Warfarin (Coumadin) and Vitamin K\", Clinical Center, National Institutes of Health Drug Nutrient Interaction Task Force.[46]\nVitamin K1 is found chiefly in leafy green vegetables such as dandelion greens (which contain 778.4 μg per 100 g, or 741% of the recommended daily amount), spinach, swiss chard, lettuce and Brassica vegetables (such as cabbage, kale, cauliflower, broccoli, and brussels sprouts) and often the absorption is greater when accompanied by fats such as butter or oils; some fruits, such as avocados, kiwifruit and grapes, are also high in vitamin K. By way of reference, two tablespoons of parsley contains 153% of the recommended daily amount of vitamin K.[47] Some vegetable oils, notably soybean oil, contain vitamin K, but at levels that would require relatively large calorie consumption to meet the USDA-recommended levels.[48] colonic bacteria synthesize a significant portion of humans' vitamin K needs; newborns often receive a vitamin K shot at birth to tide them over until their colons become colonized at five to seven days of age from the consumption of breast milk.\nThe tight binding of vitamin K1 to thylakoid membranes in chloroplasts makes it less bioavailable. For example, cooked spinach has a 5% bioavailability of phylloquinone, however, fat added to it increases bioavailability to 13% due to the increased solubility of vitamin K in fat.[49]\nMain article: Vitamin K deficiency\nAverage diets are usually not lacking in vitamin K, and primary deficiency is rare in healthy adults. Newborn infants are at an increased risk of deficiency. Other populations with an increased prevalence of vitamin K deficiency include those who suffer from liver damage or disease (e.g. alcoholics), cystic fibrosis, or inflammatory bowel diseases, or have recently had abdominal surgeries. Secondary vitamin K deficiency can occur in people with bulimia, those on stringent diets, and those taking anticoagulants. Other drugs associated with vitamin K deficiency include salicylates, barbiturates, and cefamandole, although the mechanisms are still unknown. Vitamin K1 deficiency can result in coagulopathy, a bleeding disorder.[50]Symptoms of K1 deficiency include anemia, bruising, nosebleeds and bleeding of the gums in both sexes, and heavy menstrual bleeding in women.\nOsteoporosis[51][52] and coronary heart disease[53][54] are strongly associated with lower levels of K2 (menaquinone). 
Vitamin K2 (as menaquinones MK-4 through MK-10) intake level is inversely related to severe aortic calcification and all-cause mortality.[8]\nFunction in animals[edit]\nMechanism of action of vitamin K1.\nThe function of vitamin K2 in the animal cell is to add a carboxylic acid functional group to a glutamate (Glu) amino acid residue in a protein, to form a gamma-carboxyglutamate (Gla) residue. This is a somewhat uncommon posttranslational modification of the protein, which is then known as a \"Gla protein\". The presence of two −COOH (carboxylic acid) groups on the same carbon in the gamma-carboxyglutamate residue allows it to chelate calcium ions. The binding of calcium ions in this way very often triggers the function or binding of Gla-protein enzymes, such as the so-called vitamin K-dependent clotting factors discussed below.\nWithin the cell, vitamin K undergoes electron reduction to a reduced form called vitamin K hydroquinone, catalyzed by the enzyme vitamin K epoxide reductase (VKOR).[55] Another enzyme then oxidizes vitamin K hydroquinone to allow carboxylation of Glu to Gla; this enzyme is called gamma-glutamyl carboxylase[56][57] or the vitamin K-dependent carboxylase. The carboxylation reaction only proceeds if the carboxylase enzyme is able to oxidize vitamin K hydroquinone to vitamin K epoxide at the same time. The carboxylation and epoxidation reactions are said to be coupled. Vitamin K epoxide is then reconverted to vitamin K by VKOR. The reduction and subsequent reoxidation of vitamin K coupled with carboxylation of Glu is called the vitamin K cycle.[58] Humans are rarely deficient in vitamin K1 because, in part, vitamin K1 is continuously recycled in cells.[59]\nWarfarin and other 4-hydroxycoumarins block the action of VKOR.[60] This results in decreased concentrations of vitamin K and vitamin K hydroquinone in tissues, such that the carboxylation reaction catalyzed by the glutamyl carboxylase is inefficient. This results in the production of clotting factors with inadequate Gla. Without Gla on the amino termini of these factors, they no longer bind stably to the blood vessel endothelium and cannot activate clotting to allow formation of a clot during tissue injury. As it is impossible to predict what dose of warfarin will give the desired degree of clotting suppression, warfarin treatment must be carefully monitored to avoid overdose.\nGamma-carboxyglutamate proteins[edit]\nMain article: Gla domain\nThe following human Gla-containing proteins (\"Gla proteins\") have been characterized to the level of primary structure: blood coagulation factors II (prothrombin), VII, IX, and X, anticoagulant proteins C and S, and the factor X-targeting protein Z. The bone Gla protein osteocalcin, the calcification-inhibiting matrix Gla protein (MGP), the cell growth regulating growth arrest specific gene 6 protein (Gas6), and the four transmembrane Gla proteins (TMGPs), the function of which is at present unknown. Gas6 can function as a growth factor to activate the Axl receptor tyrosine kinase and stimulate cell proliferation or prevent apoptosis in some cells. In all cases in which their function was known, the presence of the Gla residues in these proteins turned out to be essential for functional activity.\nGla proteins are known to occur in a wide variety of vertebrates: mammals, birds, reptiles, and fish. The venom of a number of Australian snakes acts by activating the human blood-clotting system. 
In some cases, activation is accomplished by snake Gla-containing enzymes that bind to the endothelium of human blood vessels and catalyze the conversion of procoagulant clotting factors into activated ones, leading to unwanted and potentially deadly clotting.\nAnother interesting class of invertebrate Gla-containing proteins is synthesized by the fish-hunting snail Conus geographus.[61] These snails produce a venom containing hundreds of neuroactive peptides, or conotoxins, which is sufficiently toxic to kill an adult human. Several of the conotoxins contain two to five Gla residues.[62]\nMethods of assessment[edit]\nVitamin K status can be assessed by:\nThe prothrombin time (PT) test measures the time required for blood to clot. A blood sample is mixed with citric acid and put in a fibrometer; delayed clot formation indicates a deficiency. This test is insensitive to mild deficiency, as the values do not change until the concentration of prothrombin in the blood has declined by at least 50%.[63]\nUndercarboxylated prothrombin (PIVKA-II); in a study of 53 newborns, found \"PT (prothrombin time) is a less sensitive marker than PIVKA II\",[64] and as indicated above, PT is unable to detect subclinical deficiencies that can be detected with PIVKA-II testing.\nPlasma phylloquinone was found to be positively correlated with phylloquinone intake in elderly British women, but not men,[65] but an article by Schurgers et al. reported no correlation between FFQ[further explanation needed] and plasma phylloquinone.[66]\nUrinary γ-carboxyglutamic acid responds to changes in dietary vitamin K intake. Several days are required before any change can be observed. In a study by Booth et al., increases of phylloquinone intakes from 100 μg to between 377 and 417 μg for five days did not induce a significant change. Response may be age-specific.[67]\nUndercarboxylated osteocalcin (UcOc) levels have been inversely correlated with stores of vitamin K[68] and bone strength in developing rat tibiae. Another study following 78 post-menopausal Korean women found a supplement regimen of vitamins K and D, and calcium, but not a regimen of vitamin D and calcium, was inversely correlated with reduced UcOc levels.[69]\nFunction in bacteria[edit]\nMany bacteria, such as Escherichia coli found in the large intestine, can synthesize vitamin K2 (menaquinone-7 or MK-7, up to MK-11),[70] but not vitamin K1 (phylloquinone). In these bacteria, menaquinone transfers two electrons between two different small molecules, during oxygen-independent metabolic energy production processes (anaerobic respiration).[71] For example, a small molecule with an excess of electrons (also called an electron donor) such as lactate, formate, or NADH, with the help of an enzyme, passes two electrons to menaquinone. The menaquinone, with the help of another enzyme, then transfers these two electrons to a suitable oxidant, such fumarate or nitrate (also called an electron acceptor). Adding two electrons to fumarate or nitrate converts the molecule to succinate or nitrite plus water, respectively.\nSome of these reactions generate a cellular energy source, ATP, in a manner similar to eukaryotic cell aerobic respiration, except the final electron acceptor is not molecular oxygen, but fumarate or nitrate. In aerobic respiration, the final oxidant is molecular oxygen (O2), which accepts four electrons from an electron donor such as NADH to be converted to water. E. 
coli, as facultative anaerobes, can carry out both aerobic respiration and menaquinone-mediated anaerobic respiration.\nInjection in newborns[edit]\nThe blood clotting factors of newborn babies are roughly 30–60% that of adult values; this may be due to the reduced synthesis of precursor proteins and the sterility of their guts. Human milk contains 1–4 μg/L of vitamin K1, while formula-derived milk can contain up to 100 μg/L in supplemented formulas. Vitamin K2 concentrations in human milk appear to be much lower than those of vitamin K1. Occurrence of vitamin K deficiency bleeding in the first week of the infant's life is estimated at 0.25–1.7%, with a prevalence of 2–10 cases per 100,000 births.[72] Premature babies have even lower levels of the vitamin, so they are at a higher risk from this deficiency.\nBleeding in infants due to vitamin K deficiency can be severe, leading to hospitalization, blood transfusions, brain damage, and death. Supplementation can prevent most cases of vitamin K deficiency bleeding in the newborn. Intramuscular administration is more effective in preventing late vitamin K deficiency bleeding than oral administration.[73][74]\nAs a result of the occurrences of vitamin K deficiency bleeding, the Committee on Nutrition of the American Academy of Pediatrics has recommended 0.5–1 mg of vitamin K1 be administered to all newborns shortly after birth.[74]\nIn the UK vitamin K supplementation is recommended for all newborns within the first 24 hours.[75] This is usually given as a single intramuscular injection of 1 mg shortly after birth but as a second-line option can be given by three oral doses over the first month.[76]\nControversy arose in the early 1990s regarding this practice, when two studies suggested a relationship between parenteral administration of vitamin K and childhood cancer,[77] however, poor methods and small sample sizes led to the discrediting of these studies, and a review of the evidence published in 2000 by Ross and Davies found no link between the two.[78] Doctors reported emerging concerns in 2013,[79] after treating children for serious bleeding problems. They cited lack-of newborn vitamin K administration, as the reason that the problems occurred, and recommended that breastfed babies could have an increased risk unless they receive a preventative dose.\nIn the early 1930s, Danish scientist Henrik Dam investigated the role of cholesterol by feeding chickens a cholesterol-depleted diet.[80] He initially replicated experiments reported by scientists at the Ontario Agricultural College (OAC).[81] McFarlane, Graham and Richardson, working on the chick feed program at OAC, had used chloroform to remove all fat from chick chow. They noticed that chicks fed only fat-depleted chow developed hemorrhages and started bleeding from tag sites.[82] Dam found that these defects could not be restored by adding purified cholesterol to the diet. It appeared that – together with the cholesterol – a second compound had been extracted from the food, and this compound was called the coagulation vitamin. The new vitamin received the letter K because the initial discoveries were reported in a German journal, in which it was designated as Koagulationsvitamin. Edward Adelbert Doisy of Saint Louis University did much of the research that led to the discovery of the structure and chemical nature of vitamin K.[83] Dam and Doisy shared the 1943 Nobel Prize for medicine for their work on vitamin K (K1 and K2) published in 1939. 
Several laboratories synthesized the compound(s) in 1939.[84]\nFor several decades, the vitamin K-deficient chick model was the only method of quantifying vitamin K in various foods: the chicks were made vitamin K-deficient and subsequently fed with known amounts of vitamin K-containing food. The extent to which blood coagulation was restored by the diet was taken as a measure for its vitamin K content. Three groups of physicians independently found this: Biochemical Institute, University of Copenhagen (Dam and Johannes Glavind), University of Iowa Department of Pathology (Emory Warner, Kenneth Brinkhous, and Harry Pratt Smith), and the Mayo Clinic (Hugh Butt, Albert Snell, and Arnold Osterberg).[85]\nThe first published report of successful treatment with vitamin K of life-threatening hemorrhage in a jaundiced patient with prothrombin deficiency was made in 1938 by Smith, Warner, and Brinkhous.[86]\nThe precise function of vitamin K was not discovered until 1974, when three laboratories (Stenflo et al.,[87] Nelsestuen et al.,[88] and Magnusson et al.[89]) isolated the vitamin K-dependent coagulation factor prothrombin (factor II) from cows that received a high dose of a vitamin K antagonist, warfarin. It was shown that, while warfarin-treated cows had a form of prothrombin that contained 10 glutamate (Glu) amino acid residues near the amino terminus of this protein, the normal (untreated) cows contained 10 unusual residues that were chemically identified as γ-carboxyglutamate (Gla). The extra carboxyl group in Gla made clear that vitamin K plays a role in a carboxylation reaction during which Glu is converted into Gla.\nThe biochemistry of how vitamin K is used to convert Glu to Gla has been elucidated over the past thirty years in academic laboratories throughout the world.\n^ \"Vitamin K Overview\". University of Maryland Medical Center. ^ a b Higdon, Jane (Feb 2008). \"Vitamin K\". Linus Pauling Institute, Oregon State University. Retrieved 12 Apr 2008. ^ Hamidi, M. S.; Gajic-Veljanoski, O.; Cheung, A. M. (2013). \"Vitamin K and bone health\". Journal of Clinical Densitometry (Review). 16 (4): 409–413. doi:10.1016/j.jocd.2013.08.017. PMID 24090644. ^ Cockayne, S.; Adamson, J.; Lanham-New, S.; Shearer, M. J.; Gilbody, S; Torgerson, D. J. (Jun 2006). \"Vitamin K and the prevention of fractures: systematic review and meta-analysis of randomized controlled trials\". Archives of Internal Medicine (Review). 166 (12): 1256–1261. doi:10.1001/archinte.166.12.1256. PMID 16801507. ^ O'Keefe, J. H.; Bergman, N.; Carrera Bastos, P.; Fontes Villalba, M.; Di Nicolantonio, J. J.; Cordain, L. (2016). \"Nutritional strategies for skeletal and cardiovascular health: hard bones, soft arteries, rather than vice versa\". Open Heart (Review). 3 (1): e000325. doi:10.1136/openhrt-2015-000325. PMC 4809188. PMID 27042317. ^ Maresz, K. (Feb 2015). \"Proper Calcium Use: Vitamin K2 as a Promoter of Bone and Cardiovascular Health\". Integrative Medicine (Review). 14 (1): 34–39. PMC 4566462. PMID 26770129. ^ Hartley, L.; Clar, C.; Ghannam, O.; Flowers, N.; Stranges, S.; Rees, K. (Sep 2015). \"Vitamin K for the primary prevention of cardiovascular disease\". The Cochrane Database of Systematic Reviews (Systematic review). 9 (9): CD011148. doi:10.1002/14651858.CD011148.pub2. PMID 26389791. ^ a b Geleijnse, J. M.; Vermeer, C.; Grobbee, D. E.; Schurgers, L. J.; Knapen, M. H.; van der Meer, I. M.; Hofman, A.; Witteman, J. C. (Nov 2004). 
\"Dietary intake of menaquinone is associated with a reduced risk of coronary heart disease: the Rotterdam Study\". Journal of Nutrition. 134 (11): 3100–3105. PMID 15514282. ^ Ades, T. B., ed. (2009). \"Vitamin K\". American Cancer Society Complete Guide to Complementary and Alternative Cancer Therapies (2nd ed.). American Cancer Society. pp. 558–563. ISBN 978-0-944235-71-3. ^ Lung, D. (Dec 2015). Tarabar, A., ed. \"Rodenticide Toxicity Treatment & Management\". Medscape. WebMD. ^ Rasmussen, S. E.; Andersen, N. L.; Dragsted, L. O.; Larsen, J. C. (Mar 2006). \"A safe strategy for addition of vitamins and minerals to foods\". European Journal of Nutrition. 45 (3): 123–135. doi:10.1007/s00394-005-0580-9. PMID 16200467. ^ Ushiroyama, T.; Ikeda, A.; Ueki, M (Mar 2002). \"Effect of continuous combined therapy with vitamin K2 and vitamin D3 on bone mineral density and coagulofibrinolysis function in postmenopausal women\". Maturitas. 41 (3): 211–221. doi:10.1016/S0378-5122(01)00275-4. PMID 11886767. ^ Asakura, H.; Myou, S.; Ontachi, Y.; Mizutani, T.; Kato, M.; Saito, M.; Morishita, E.; Yamazaki, M.; Nakao, S. (Dec 2001). \"Vitamin K administration to elderly patients with osteoporosis induces no hemostatic activation, even in those with suspected vitamin K deficiency\". Osteoporosis International. 12 (12): 996–1000. doi:10.1007/s001980170007. PMID 11846334. ^ Ronden, J. E.; Groenen-van Dooren, M. M.; Hornstra, G.; Vermeer, C. (Jul 1997). \"Modulation of arterial thrombosis tendency in rats by vitamin K and its side chains\". Atherosclerosis. 132 (1): 61–67. doi:10.1016/S0021-9150(97)00087-7. PMID 9247360. ^ Ansell, J.; Hirsh, J.; Poller, L.; Bussey, H.; Jacobson, A.; Hylek, E (Sep 2004). \"The pharmacology and management of the vitamin K antagonists: the Seventh ACCP Conference on Antithrombotic and Thrombolytic Therapy\". Chest. 126 (3 Suppl.): 204S–233S. doi:10.1378/chest.126.3_suppl.204S. PMID 15383473. ^ Crowther, M. A.; Douketis, J. D.; Schnurr, T.; Steidl, L.; Mera, V.; Ultori, C.; Venco, A.; Ageno, W. (Aug 2002). \"Oral vitamin K lowers the international normalized ratio more rapidly than subcutaneous vitamin K in the treatment of warfarin-associated coagulopathy. A randomized, controlled trial\". Annals of Internal Medicine. 137 (4): 251–254. doi:10.7326/0003-4819-137-4-200208200-00009. PMID 12186515. ^ a b \"Important Information to Know When You Are Taking: Warfarin (Coumadin) and Vitamin K\" (PDF). National Institute of Health Clinical Center Drug-Nutrient Interaction Task Force. Retrieved 17 Apr 2015. ^ \"Guidelines For Warfarin Reversal With Vitamin K\" (PDF). American Society of Health-System Pharmacists. Retrieved 17 Apr 2015. ^ \"Pradaxa Drug Interactions\". Pradaxapro.com. 19 Mar 2012. Retrieved 21 Apr 2013. ^ Bauersachs, R.; Berkowitz, S. D.; Brenner, B.; Buller, H. R.; Decousus, H.; Gallus, A. S.; Lensing, A. W.; Misselwitz, F.; Prins, M. H.; Raskob, G. E.; Segers, A.; Verhamme, P.; Wells, P.; Agnelli, G.; Bounameaux, H.; Cohen, A.; Davidson, B. L.; Piovella, F.; Schellong, S. (Dec 2010). \"Oral rivaroxaban for symptomatic venous thromboembolism\". New England Journal of Medicine. 363 (26): 2499–2510. doi:10.1056/NEJMoa1007903. PMID 21128814. ^ McGee, W. (1 Feb 2007). \"Vitamin K\". MedlinePlus. Retrieved 2 Apr 2009. ^ Shearer, M. J.; Newman, P. (Oct 2008). \"Metabolism and cell biology of vitamin K\". Thrombosis and Haemostasis. 100 (4): 530–547. doi:10.1160/TH08-03-0147. PMID 18841274. ^ Davidson, R. T.; Foley, A. L.; Engelke, J. A.; Suttie, J. W. (Feb 1998). 
\"Conversion of dietary phylloquinone to tissue menaquinone-4 in rats is not dependent on gut bacteria\". Journal of Nutrition. 128 (2): 220–223. PMID 9446847. ^ Ronden, J. E.; Drittij-Reijnders, M. J.; Vermeer, C.; Thijssen, H. H. (Jan 1998). \"Intestinal flora is not an intermediate in the phylloquinone–menaquinone-4 conversion in the rat\". Biochimica et Biophysica Acta. 1379 (1): 69–75. doi:10.1016/S0304-4165(97)00089-5. PMID 9468334. ^ Thijssen, H. .H.; Drittij-Reijnders, M. J. (Sep 1994). \"Vitamin K distribution in rat tissues: dietary phylloquinone is a source of tissue menaquinone-4\". The British Journal of Nutrition. 72 (3): 415–425. doi:10.1079/BJN19940043. PMID 7947656. ^ Will, B. H.; Usui, Y.; Suttie, J. W. (Dec 1992). \"Comparative metabolism and requirement of vitamin K in chicks and rats\". Journal of Nutrition. 122 (12): 2354–2360. PMID 1453219. ^ Davidson, R. T.; Foley, A. L.; Engelke, J. A.; Suttie, J. W. (Feb 1998). \"Conversion of dietary phylloquinone to tissue menaquinone-4 in rats is not dependent on gut bacteria\". Journal of Nutrition. 128 (2): 220–223. PMID 9446847. ^ Ronden, J. E.; Drittij-Reijnders, M. J.; Vermeer, C.; Thijssen, H. H. (Jan 1998). \"Intestinal flora is not an intermediate in the phylloquinone-menaquinone-4 conversion in the rat\". Biochimica et Biophysica Acta. 1379 (1): 69–75. doi:10.1016/S0304-4165(97)00089-5. PMID 9468334. ^ Al Rajabi, Ala (2011). The Enzymatic Conversion of Phylloquinone to Menaquinone-4 (PhD thesis). Tufts University, Friedman School of Nutrition Science and Policy. ^ Furie, B.; Bouchard, B. A.; Furie, B. C. (Mar 1999). \"Vitamin K-dependent biosynthesis of gamma-carboxyglutamic acid\". Blood. 93 (6): 1798–1808. PMID 10068650. ^ Mann, K. G. (Aug 1999). \"Biochemistry and physiology of blood coagulation\". Thrombosis and Haemostasis. 82 (2): 165–174. PMID 10605701. ^ Price, P. A. (1988). \"Role of vitamin-K-dependent proteins in bone metabolism\". Annual Review of Nutrition. 8: 565–583. doi:10.1146/annurev.nu.08.070188.003025. PMID 3060178. ^ Coutu, D. L.; Wu, J. H.; Monette, A.; Rivard, G. E.; Blostein, M. D.; Galipeau, J (Jun 2008). \"Periostin, a member of a novel family of vitamin K-dependent proteins, is expressed by mesenchymal stromal cells\". Journal of Biological Chemistry. 283 (26): 17991–18001. doi:10.1074/jbc.M708029200. PMID 18450759. ^ Viegas, C. S.; Simes, D. C.; Laizé, V.; Williamson, M. K.; Price, P. A.; Cancela, M. L. (Dec 2008). \"Gla-rich protein (GRP), a new vitamin K-dependent protein identified from sturgeon cartilage and highly conserved in vertebrates\". Journal of Biological Chemistry. 283 (52): 36655–36664. doi:10.1074/jbc.M802761200. PMC 2605998. PMID 18836183. ^ Viegas, C. S.; Cavaco, S.; Neves, P. L.; Ferreira, A.; João, A.; Williamson, M. K.; Price, P. A.; Cancela, M. L.; Simes, D. C. (Dec 2009). \"Gla-rich protein is a novel vitamin K-dependent protein present in serum that accumulates at sites of pathological calcifications\". American Journal of Pathology. 175 (6): 2288–2298. doi:10.2353/ajpath.2009.090474. PMC 2789615. PMID 19893032. ^ Hafizi, S.; Dahlbäck, B. (Dec 2006). \"Gas6 and protein S. Vitamin K-dependent ligands for the Axl receptor tyrosine kinase subfamily\". The FEBS Journal. 273 (23): 5231–5244. doi:10.1111/j.1742-4658.2006.05529.x. PMID 17064312. ^ Kulman, J. D.; Harris, J. E.; Xie, L.; Davie, E. W. (May 2007). \"Proline-rich Gla protein 2 is a cell-surface vitamin K-dependent protein that binds to the transcriptional coactivator Yes-associated protein\". 
Proceedings of the National Academy of Sciences of the United States of America. 104 (21): 8767–8772. doi:10.1073/pnas.0703195104. PMC 1885577. PMID 17502622. ^ \"Vitamin K\". MedlinePlus. US National Library of Medicine, National Institutes of Health. Sep 2016. Retrieved 26 May 2009. ^ Conly, J; Stein, K. (Dec 1994). \"Reduction of vitamin K2 concentrations in human liver associated with the use of broad spectrum antimicrobials\". Clinical and Investigative Medicine. 17 (6): 531–539. PMID 7895417. ^ Ferland, G.; Sadowski, J. A.; O'Brien, M. E. (Apr 1993). \"Dietary induced subclinical vitamin K deficiency in normal human subjects\". Journal of Clinical Investigation. 91 (4): 1761–1768. doi:10.1172/JCI116386. PMC 288156. PMID 8473516. ^ Holden, R. M.; Morton, A. R.; Garland, J. S.; Pavlov, A.; Day, A. G.; Booth, S. L. (Apr 2010). \"Vitamins K and D status in stages 3-5 chronic kidney disease\". Clinical Journal of the American Society of Nephrology. 5 (4): 590–597. doi:10.2215/CJN.06420909. PMC 2849681. PMID 20167683. ^ Hodges, S. J.; Pilkington, M. J.; Shearer, M. J.; Bitensky, L.; Chayen, J (Jan 1990). \"Age-related changes in the circulating levels of congeners of vitamin K2, menaquinone-7 and menaquinone-8\". Clinical Science. 78 (1): 63–66. PMID 2153497. ^ \"Vitamin K\". Dietary Reference Intakes for Vitamin A, Vitamin K, Arsenic, Boron, Chromium, Copper, Iodine, Iron, Manganese, Molybdenum, Nickel, Silicon, Vanadium, and Zinc (PDF). National Academy Press. 2001. p. 162–196. ^ Tolerable Upper Intake Levels For Vitamins And Minerals (PDF), European Food Safety Authority, 2006 ^ a b Rhéaume-Bleue, p. 42\n^ \"Important information to know when you are taking: Warfarin (Coumadin) and Vitamin K\" (PDF). National Institutes of Health Clinical Center. ^ \"Nutrition Facts and Information for Parsley, raw\". Nutritiondata.com. Retrieved 21 Apr 2013. ^ \"Nutrition facts, calories in food, labels, nutritional information and analysis\". Nutritiondata.com. 13 Feb 2008. Retrieved 21 Apr 2013. ^ \"Vitamin K\". Vivo.colostate.edu. 2 Jul 1999. Retrieved 21 Apr 2013. ^ \"Vitamin K\". Micronutrient Data Centre. ^ Ikeda, Y.; Iki, M.; Morita, A.; Kajita, E.; Kagamimori, S.; Kagawa, Y.; Yoneshima, H. (May 2006). \"Intake of fermented soybeans, natto, is associated with reduced bone loss in postmenopausal women: Japanese Population-Based Osteoporosis (JPOS) Study\". Journal of Nutrition. 136 (5): 1323–1328. PMID 16614424. ^ Katsuyama, H.; Ideguchi, S.; Fukunaga, M.; Saijoh, K.; Sunami, S. (Jun 2002). \"Usual dietary intake of fermented soybeans (Natto) is associated with bone mineral density in premenopausal women\". Journal of Nutritional Science and Vitaminology. 48 (3): 207–215. doi:10.3177/jnsv.48.207. PMID 12350079. ^ Sano, M.; Fujita, H.; Morita, I.; Uematsu, H.; Murota, S. (Dec 1999). \"Vitamin K2 (menatetrenone) induces iNOS in bovine vascular smooth muscle cells: no relationship between nitric oxide production and gamma-carboxylation\". Journal of Nutritional Science and Vitaminology. 45 (6): 711–723. doi:10.3177/jnsv.45.711. PMID 10737225. ^ Gast, G. C ; de Roos, N. M.; Sluijs, I.; Bots, M. L.; Beulens, J. W.; Geleijnse, J. M.; Witteman, J. C.; Grobbee, D. E.; Peeters, P. H.; van der Schouw, Y. T. (Sep 2009). \"A high menaquinone intake reduces the incidence of coronary heart disease\". Nutrition, Metabolism, and Cardiovascular Diseases. 19 (7): 504–510. doi:10.1016/j.numecd.2008.10.004. PMID 19179058. ^ Oldenburg, J.; Bevans, C. G.; Müller, C. R.; Watzka, M. (2006). 
\"Vitamin K epoxide reductase complex subunit 1 (VKORC1): the key protein of the vitamin K cycle\". Antioxidants & Redox Signaling. 8 (3–4): 347–353. doi:10.1089/ars.2006.8.347. PMID 16677080. ^ Suttie, J. W. (1985). \"Vitamin K-dependent carboxylase\". Annual Review of Biochemistry. 54: 459–477. doi:10.1146/annurev.bi.54.070185.002331. PMID 3896125. ^ Presnell, S. R.; Stafford, D. W. (Jun 2002). \"The vitamin K-dependent carboxylase\". Thrombosis and Haemostasis. 87 (6): 937–946. PMID 12083499. ^ Stafford, D. W. (Aug 2005). \"The vitamin K cycle\". Journal of Thrombosis and Haemostasis. 3 (8): 1873–1878. doi:10.1111/j.1538-7836.2005.01419.x. PMID 16102054. ^ Rhéaume-Bleue, p. 79.\n^ Whitlon, D. S.; Sadowski, J. A.; Suttie, J. W. (Apr 1978). \"Mechanism of coumarin action: significance of vitamin K epoxide reductase inhibition\". Biochemistry. 17 (8): 1371–1377. doi:10.1021/bi00601a003. PMID 646989. ^ Terlau, H.; Olivera, B. M. (Jan 2004). \"Conus venoms: a rich source of novel ion channel-targeted peptides\". Physiological Reviews. 84 (1): 41–68. doi:10.1152/physrev.00020.2003. PMID 14715910. ^ Buczek, O.; Bulaj, G.; Olivera, BM (Dec 2005). \"Conotoxins and the posttranslational modification of secreted gene products\". Cellular and Molecular Life Sciences. 62 (24): 3067–3079. doi:10.1007/s00018-005-5283-0. PMID 16314929. ^ \"Prothrombin Time\". WebMD. ^ Dituri, F.; Buonocore, G.; Pietravalle, A.; Naddeo, F.; Cortesi, M; Pasqualetti, P; Tataranno M. L.; R., Agostino (Sep 2012). \"PIVKA-II plasma levels as markers of subclinical vitamin K deficiency in term infants\". Journal of Maternal, Fetal & Neonatal Medicine. 25 (9): 1660–1663. doi:10.3109/14767058.2012.657273. PMID 22280352. ^ Thane, C. W.; Bates, C. J.; Shearer, M. J.; Unadkat, N; Harrington, D. J.; Paul, A. A.; Prentice, A.; Bolton-Smith, C. (Jun 2002). \"Plasma phylloquinone (vitamin K1) concentration and its relationship to intake in a national sample of British elderly people\". British Journal of Nutrition. 87 (6): 615–622. doi:10.1079/BJNBJN2002582. PMID 12067432. ^ McKeown, N. M.; Jacques, P. F.; Gundberg, C. M.; Peterson, J. W.; Tucker, K. L.; Kiel, D. P.; Wilson, P. W.; Booth, SL (Jun 2002). \"Dietary and nondietary determinants of vitamin K biochemical measures in men and women\" (PDF). Journal of Nutrition. 132 (6): 1329–1334. PMID 12042454. ^ Yamano, M.; Yamanaka, Y.; Yasunaga, K.; Uchida, K. (Sep 1989). \"Effect of vitamin K deficiency on urinary gamma-carboxyglutamic acid excretion in rats\". Nihon Ketsueki Gakkai Zasshi. 52 (6): 1078–1086. PMID 2588957. ^ Matsumoto, T.; Miyakawa, T.; Yamamoto, D. (Mar 2012). \"Effects of vitamin K on the morphometric and material properties of bone in the tibiae of growing rats\". Metabolism. 61 (3): 407–414. doi:10.1016/j.metabol.2011.07.018. PMID 21944271. ^ Je, S.-H.; Joo, N.-S.; Choi, B.-H.; Kim, K.-M.; Kim, B.-T.; Park, S.-B.; Cho, D.-Y.; Kim, K.-N.; Lee, D.-J. (Aug 2011). \"Vitamin K supplement along with vitamin D and calcium reduced serum concentration of undercarboxylated osteocalcin while increasing bone mineral density in Korean postmenopausal women over sixty-years-old\". Journal of Korean Medical Science. 26 (8): 1093–1098. doi:10.3346/jkms.2011.26.8.1093. PMC 3154347. PMID 21860562. ^ Bentley, R.; Meganathan, R. (Sep 1982). \"Biosynthesis of vitamin K (menaquinone) in bacteria\" (PDF). Microbiological Reviews. 46 (3): 241–280. PMC 281544. PMID 6127606. ^ Haddock, B. A.; Jones, C. W. (Mar 1977). \"Bacterial respiration\" (PDF). Bacteriological Reviews. 41 (1): 47–99. 
PMC 413996. PMID 140652. ^ Shearer, M. J. (Jan 1995). \"Vitamin K\". Lancet. 345 (8944): 229–234. doi:10.1016/S0140-6736(95)90227-9. PMID 7823718. ^ Greer, J. P.; Foerster, J.; Lukens, J. N.; Rodgers, G. M.; Paraskevas, F.; Glader, B. (eds.). Wintrobe's Clinical Hematology (11th ed.). Philadelphia, Pennsylvania: Lippincott, Williams and Wilkens. ^ a b American Academy of Pediatrics Committee on Fetus Newborn. (Jul 2003). \"Controversies concerning vitamin K and the newborn. American Academy of Pediatrics Committee on Fetus and Newborn\" (PDF). Pediatrics. 112 (1.1): 191–192. doi:10.1542/peds.112.1.191. PMID 12837888. ^ Logan, S.; Gilbert, R. (1998). \"Vitamin K For Newborn Babies\" (PDF). Department of Health. Retrieved 12 Oct 2014. ^ \"Postnatal care: Routine postnatal care of women and their babies [CG37]\". www.nice.org.uk. NICE. Jul 2006. Retrieved 12 Oct 2014. ^ Parker, L.; Cole, M.; Craft, A. W.; Hey, E. N. (1998). \"Neonatal vitamin K administration and childhood cancer in the north of England: retrospective case-control study\". BMJ (Clinical Research Edition). 316 (7126): 189–193. doi:10.1136/bmj.316.7126.189. PMC 2665412. PMID 9468683. ^ McMillan, D. D. (1997). \"Routine administration of vitamin K to newborns\". Paediatric Child Health. 2 (6): 429–431. ^ \"Newborns get rare disorder after parents refused shots\". Having four cases since February just at Vanderbilt was a little bit concerning to me ^ Dam, C. P. H. (1935). \"The Antihaemorrhagic Vitamin of the Chick: Occurrence And Chemical Nature\". Nature. 135 (3417): 652–653. doi:10.1038/135652b0. ^ Dam, C. P. H. (1941). \"The discovery of vitamin K, its biological functions and therapeutical application\" (PDF). Nobel Prize Laureate Lecture. ^ McAlister, V. C. (2006). \"Control of coagulation: a gift of Canadian agriculture\" (PDF). Clinical and Investigative Medicine. 29 (6): 373–377. ^ MacCorquodale, D. W.; Binkley, S. B.; Thayer, S. A.; Doisy, E. A. (1939). \"On the constitution of Vitamin K1\". Journal of the American Chemical Society. 61 (7): 1928–1929. doi:10.1021/ja01876a510. ^ Fieser, L. F. (1939). \"Synthesis of Vitamin K1\". Journal of the American Chemical Society. 61 (12): 3467–3475. doi:10.1021/ja01267a072. ^ Dam, C. P. H. (12 Dec 1946). \"The discovery of vitamin K, its biological functions and therapeutical application\" (PDF). Nobel Prize lecture. ^ Warner, E. D.; Brinkhous, K. M.; Smith, H. P. (1938). \"Bleeding Tendency of Obstructive Jaundice\". Proceedings of the Society of Experimental Biology and Medicine. 37 (4): 628–630. doi:10.3181/00379727-37-9668P. ^ Stenflo, J; Fernlund, P.; Egan, W.; Roepstorff, P. (Jul 1974). \"Vitamin K dependent modifications of glutamic acid residues in prothrombin\". Proceedings of the National Academy of Sciences of the United States of America. 71 (7): 2730–2733. doi:10.1073/pnas.71.7.2730. PMC 388542. PMID 4528109. ^ Nelsestuen, G. L.; Zytkovicz, T. H.; Howard, J. B. (Oct 1974). \"The mode of action of vitamin K. Identification of gamma-carboxyglutamic acid as a component of prothrombin\" (PDF). Journal of Biological Chemistry. 249 (19): 6347–6350. PMID 4214105. ^ Magnusson, S.; Sottrup-Jensen, L.; Petersen, T. E.; Morris, H. R.; Dell, A. (Aug 1974). \"Primary structure of the vitamin K-dependent part of prothrombin\". FEBS Letters. 44 (2): 189–193. doi:10.1016/0014-5793(74)80723-4. PMID 4472513. Bibliography[edit]\nRhéaume-Bleue, Kate (2012). Vitamin K2 and the Calcium Paradox. John Wiley & Sons, Canada. ISBN 1-118-06572-7. 
External links[edit]\n\"Vitamin K: Another Reason to Eat Your Greens\". v\nTPP / ThDP (B1)\nFMN, FAD (B2)\nNAD+, NADH, NADP+, NADPH (B3)\nCoenzyme A (B5)\nPLP / P5P (B6)\nTHFA / H4FA, DHFA / H2FA, MTHF (B9)\nAdoCbl, MeCbl (B12)\nPhylloquinone (K1), Menaquinone (K2)\nnon-vitamins\nCoenzyme B\nHeme / Haem (A, B, C, O)\nMolybdopterin/Molybdenum cofactor\nTHMPT / H4MPT\nFe2+, Fe3+\nvitamins: see vitamins\nAntihemorrhagics (B02)\n(coagulation)\nPhytomenadione (K1)\nMenadione (K3)\nintrinsic: IX/Nonacog alfa\nVIII/Moroctocog alfa/Turoctocog alfa\nextrinsic: VII/Eptacog alfa\ncommon: X\nII/Thrombin\nI/Fibrinogen\nXIII/Catridecacog\ncombinations: Prothrombin complex concentrate (II, VII, IX, X, protein C and S)\nCarbazochrome\nthrombopoietin receptor agonist (Romiplostim\nEltrombopag)\nTetragalacturonic acid hydroxymethylester\nEpinephrine/Adrenalone\namino acids (Aminocaproic acid\nAminomethylbenzoic acid)\nserpins (Aprotinin\nAlfa1 antitrypsin\nCamostat).", "answers": ["Vitamins K3, K4, and K5."], "length": 7133, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "c7ad556387e8215bae3f8ccd30ba35e0093218fe48168718"} {"input": "How are the relationships between catch per set and fishing behavior variables different for different measures of catch per unit effort (CPUE)?", "context": "Overfishing is a major threat to the survival of shark species, primarily driven by international trade in high-value fins, as well as meat, liver oil, skin and cartilage. The Convention on the International Trade in Endangered Species of Wild Fauna and Flora (CITES) aims to ensure that commercial trade does not threaten wild species, and several shark species have recently been listed on CITES as part of international efforts to ensure that trade does not threaten their survival. However, as international trade regulations alone will be insufficient to reduce overexploitation of sharks, they must be accompanied by practical fisheries management measures to reduce fishing mortality. To examine which management measures might be practical in the context of a targeted shark fishery, we collected data from 52 vessels across 595 fishing trips from January 2014 to December 2015 at Tanjung Luar fishing port in East Lombok, Indonesia. We recorded 11,920 landed individuals across 42 species, a high proportion of which were threatened and regulated species. Catch per unit effort depended primarily on the number of hooks and type of fishing gear used, and to a lesser degree on month, boat engine power, number of sets and fishing ground. The most significant factors influencing the likelihood of catching threatened and regulated species were month, fishing ground, engine power and hook number. We observed significant negative relationships between standardised catch per unit effort and several indicators of fishing effort, suggesting diminishing returns above relatively low levels of fishing effort. Our results suggest that management measures focusing on fishing effort controls, gear restrictions and modifications and spatiotemporal closures could have significant benefits for the conservation of shark species, and may help to improve the overall sustainability of the Tanjung Luar shark fishery. These management measures may also be applicable to shark fisheries in other parts of Indonesia and beyond, as sharks increasingly become the focus of global conservation efforts.\nCopyright: © 2018 Yulianto et al. 
This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.\nFunding: Data collection of this study was funded by the Darwin Initiative (grant number 2805). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.\nOverfishing is the greatest global threat to marine fish stocks [1–5]. Several shark species (Selachimorpha) are particularly vulnerable to overexploitation due to their conservative life history strategies, large body sizes and the high economic value of their preserved body parts [6–8]. With increasing fishing pressure in recent decades, primarily driven by international demand for a range of consumer goods (including fins, liver oil, skin, cartilage and meat), it is estimated that annual fishing mortality now exceeds the intrinsic rebound potential of most commercially exploited species [5, 9, 10]. This fishing pressure is taking its toll, with an estimated one in four Chondrichthyan species now threatened with extinction, making sharks amongst the most threatened species groups in the world .\nIt is also increasingly acknowledged that sharks play a critical role in maintaining functional and productive ocean ecosystems , as well as providing an important source of food and income for many coastal communities . Recognising both the plight and importance of shark populations, there is growing professional and public interest to improve shark conservation, and the management of shark fisheries and trade . This is reflected in several recent policy decisions to afford new international regulations for 12 species of sharks across seven genera under the Convention on the International Trade of Endangered Species of Wild Fauna and Flora (CITES). This is a promising step for shark conservation; however, in order to create tangible outcomes for species conservation CITES must be implemented through domestic measures that are adapted to national and local contexts.\nIndonesia is the world’s largest shark fishing nation [9, 14], and a global priority for shark conservation . Until recently Indonesia’s shark fishery has largely functioned as de facto open-access [12, 16]. However, in the past five years the Indonesian government has demonstrated a clear commitment to shark conservation and resource management, with domestic measures put in place to implement international obligations under CITES . Exploitation of all CITES-listed species is now regulated, either through full species protection or export controls (these species are hereafter referred to as ‘regulated’ species). However, CITES only affords protection to a small number of Indonesia’s 112 known shark species , of which 83 are threatened with extinction according to the IUCN Red List of Threatened Species (i.e. Vulnerable (VU), Endangered (EN) or Critically Endangered (CR) , these species are hereafter referred to as ‘threatened’ species), many of which continue to be landed throughout the country . Further, these policy measures predominantly regulate trade at the point of export, but do not necessarily influence fisher behaviour or local demand at the point of catch, such that the ‘trickle-down’ impacts on species mortality are unknown. 
In addition, effectively implementing species-specific shark mortality controls remains challenging due to the non-selectivity of fishing gears, and practical and cultural barriers to changing fisher preferences for certain gear-types and fishing methods. As such, existing regulations alone (e.g. Indonesian Law on Fisheries 31/2004 and its derivative regulations) will likely be insufficient to curb mortality of threatened and regulated species, as fishers must be both willing and able to change their fishing behaviour . Moreover, most of Indonesia’s shark fisheries are small-scale, and in relatively poor coastal communities where there are often no legal, sustainable marine-based alternatives to shark fishing that offer similar financial returns [22, 23]. It is therefore imperative to consider the ethical and socioeconomic impacts of shark trade controls. Most shark species listed under CITES are listed on Appendix II, which is designed for sustainable use. International trade is permitted for CITES Appendix II species provided it is non-detrimental to wild populations of the species, as proven through a scientific non-detriment finding (NDF) report and implemented through a system of export permits. However, in Indonesia there is currently a lack of species-specific trade data for conducting NDFs and setting sustainable export quotas, such that the Indonesian government has to introduce trade bans for these species in order to meet CITES obligations. With new CITES-listings for thresher sharks (Alopias spp.) and silky shark (Carcharhinus falciformis) recently coming into force, this is likely to have huge implications for Indonesia’s economically important shark industry, and the coastal communities depending on it. In order to balance conservation and socioeconomic objectives, robust management systems must be put in place that ensure and allow sustainable fishing and trade. This necessitates the identification of practical management measures that can reduce mortality of threatened and regulated species at the point of catch, and provide realistic options for fishers to effectively and measurably improve the sustainability of their fishing practices.\nThis study analyses two years of qualitative and quantitative data from one of Indonesia’s targeted shark fisheries in Tanjung Luar, West Nusa Tenggara Province. We outline the key characteristics of the fishery, including fishing behaviour and overall catch volumes and composition. We analyse the impacts of different fishing techniques, and present factors influencing overall catch per unit effort (CPUE) of individual shark fishing trips, as well as factors influencing the likelihood of catching threatened and regulated species. Finally, we discuss the implications of our findings, and provide practical recommendations for fisheries management measures, which can support CITES implementation for sharks and reduce the catch of threatened and regulated species, in Indonesia and beyond.\nThis work was conducted under a Memorandum of Understanding (MoU) and Technical Cooperation Agreement (TCA) between the Wildlife Conservation Society (WCS) and the Ministry of Environment and Forestry (MoEF), Ministry Marine Affairs and Fisheries (MMAF) and the Marine and Fisheries Agency (MFA) of West Nusa Tenggara Province. 
These documents were approved and signed by Sonny Partono (Director General of Conservation of Natural Resources and Ecosystem MoEF), Sjarief Widjaja (Secretary General MMAF), and Djoko Suprianto (Acting Head of MFA of West Nusa Tenggara Province). Due to this MoU and TCA no specific research permit was required. We collected data by measuring sharks that were already caught, dead, and landed by fishers in Tanjung Luar, with no incentives, compensation or specific requests for killing sharks for this study. WCS participates in the Conservation Initiative on Human Rights and the rules and guidelines of our Internal Review Board ensures that any research protects the rights of human subjects. We did not apply for an IRB permit for this study because our study design focused on collecting fish and fisheries data as opposed to personal socio-economic data. The FDGs and interviews were conducted to obtain early scoping information about fishing practices, and to establish protocols for more detailed fisheries data collection (as used in this study), and socio-economic data collection (as used in a later study (Lestari et al ), which underwent further ethical review due to the specific focus on human subjects).\nTanjung Luar, located in East Lombok, West Nusa Tenggara Province, Indonesia (Fig 1), is a landing site for one of Indonesia’s most well-known targeted shark fisheries. Tanjung Luar serves at least 1,000 vessels, and the majority of these are less than 10 gross tonnes (GT) in size . A group of specialised fishers operating from Tanjung Luar village and a neighbouring island, Gili Maringkik, specifically target sharks. Shark catch is landed in a dedicated auction facility at the Tanjung Luar port. The shark industry is well established in Tanjung Luar, with product processing facilities and trade connections to local, national and international markets. Research by Lestari et al. indicates that the shark industry is significantly more profitable than non-shark fisheries in Tanjung Luar, particularly for boat owners. Strong patron-client relationships exist between boat owners and fishers, with shark fishers exhibiting high dependency on shark fishing, limited occupational diversity and low adaptive capacity for shifting into other fisheries .\nFig 1. Sharks landing monitoring site and fishing grounds of shark fishers that land at Tanjung Luar.\nIn January 2014 we conducted preliminary scoping research to better understand the operational and socioeconomic characteristics of Tanjung Luar’s shark fishery. During a three-week scoping visit a team of four trained Indonesian enumerators conducted semi-structured interviews and focus group discussions (FDGs) with fishers, boat owners and traders, alongside naturalistic observation in the field. Respondents were selected through purposive sampling, since the research was exploratory in nature and a priori sampling decisions were not possible . We conducted a total of 34 semi-structured interviews (S1 File) and four FDGs, which were attended by a total of 30 individuals. All interviews and discussions took place in Indonesian, with the help of a local enumerator who was fluent in the Tanjung Luar local dialect. Interviews took approximately 30 minutes, with no remuneration for participating. All respondents gave their full prior and informed consent before contributing to the research. 
During the interviews and FDGs we gathered information on number of boats, fishing gears used, fishing grounds, fishery operational characteristics, and shark supply chain, including estimated volumes and value of shark catch relative to other fisheries. We improved the accuracy of information on shark fishery characteristics and fishing behaviour through informal daily interactions and discussions with 131 shark fishers during our daily landings data collection and community engagement activities. More detailed socioeconomic data were collected in a full household survey in 2016, as outlined in Lestari et al. .\nShark landings data were collected by three experienced enumerators, who were trained in species identification and data collection methods during a two-day workshop and three weeks of field mentoring to ensure the accuracy of the data collected. Landings were recorded every morning at the Tanjung Luar shark auction facility where shark fishers usually landed dead sharks, from 5am to 10am from January 2014 to December 2015. The enumerators recorded data on catch composition and fishing behaviour (Table 1) from 52 different vessels across a total of 595 fishing trips. The enumerators also measured the weight of selected sharks to calculate biomass and length-weight relationship.\nTable 1. Types of data collected on fishing behaviour and catch composition during daily landings data collection at Tanjung Luar.\nFrom fishing behaviour and catch data we calculated the overall species composition of catch. We calculated catch per unit effort (CPUE) by number of individuals using both catch per set (hereafter CPUE per set) and catch per 100 hooks per set (hereafter standardised CPUE) [25,26]. This was deemed necessary since different vessels and gear-types systematically deploy different numbers of hooks, and standardised CPUE allows for a more meaningful comparison.\nTo understand factors influencing overall CPUE we log transformed CPUE per trip to fit a normal distribution, and fitted linear models (LMs) of CPUE per trip to fishing behaviour variables (Table 1). We considered all variables and used minimum AIC values with stepwise analysis of variance to identify the best fit and most significant influencing variables.\nTo inform the development of practical fisheries management measures (e.g. gear restrictions), we also specifically analysed differences in CPUE for surface and bottom longline gears employed in the fishery, using two-way ANOVAs.\nFactors affecting catch of threatened and regulated species.\nTo identify variables influencing the catch of threatened and regulated species we conducted a two-step process. In the first step, we identified factors influencing the likelihood of catching any threatened/regulated species during a given fishing trip, by creating binary response variables for whether a threatened species had been caught during a trip (yes = 1, no = 0), and separately for whether a regulated species had been caught during a trip (yes = 1, no = 0). We then fitted generalised linear models (GLMs) with binomial errors to the binary response variables, separately for catch of threatened species and catch of regulated species. In the second step we identified variables that significantly influenced the CPUE of threatened species and the CPUE of regulated species, given that any were caught. 
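Before the details of the second step, the following is a minimal sketch of how the two CPUE measures defined above could be computed from trip records; the column names and toy values are illustrative assumptions, not the study's actual data or code.

```python
# Sketch: computing CPUE per set and standardised CPUE (per 100 hooks per set)
# from a hypothetical table of trip records.
import numpy as np
import pandas as pd

trips = pd.DataFrame({
    "individuals": [12, 30, 8],    # sharks landed per trip (illustrative)
    "n_sets":      [3, 6, 2],      # longline sets per trip
    "n_hooks":     [150, 500, 60], # hooks deployed per set
})

# CPUE per set: individuals landed divided by the number of sets
trips["cpue_per_set"] = trips["individuals"] / trips["n_sets"]

# Standardised CPUE: individuals per 100 hooks per set
trips["cpue_std"] = trips["individuals"] / (trips["n_sets"] * trips["n_hooks"] / 100.0)

# Log-transform before fitting linear models, as in the analysis described above
trips["log_cpue_std"] = np.log(trips["cpue_std"])
print(trips)
```

Standardising by hooks per set makes trips that deploy very different gear configurations directly comparable, which is why the analyses described here rely on the standardised measure.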
We removed all records in which no threatened or regulated species were caught, log transformed standardised CPUE of threatened and regulated species, and fitted linear models (LMs) of standardised CPUE of threatened species and standardised CPUE to regulated species to fishing behaviour variables. Again, we considered all meaningful models and used minimum AIC values with stepwise analysis of variance to identify the best fit and most significant influencing variables. This approach was necessary since catch of threatened and regulated species is zero-inflated, and creating binary response variables with a binomial error structure allowed for a simpler and more powerful statistical analysis. Note that we conducted two separate analyses, one for threatened species only and one for regulated species only, but used the same methods and process, as outlined above, for each analysis. We did not group threatened and protected species together, since although some species are both threatened and protected, this is not the case for all shark species landed in Tanjung Luar.\nA total of 52 shark fishing vessels operate from Tanjung Luar, all of which are classified as small-scale according to the Indonesian Ministry of Marine Affairs and Fisheries (MMAF) vessel categorisation system, with <7GT capacity. These vessels are operated by approximately 150 highly-specialised shark fishers, from Tanjung Luar village and Gili Maringkik, who make up roughly 5% of the local fisher population. The shark industry is more profitable than non-shark fisheries, and shark fishers report high household dependency on shark resources, low occupational diversity, and limited capacity and aspirations to move into other fisheries or industries.\nSurface and bottom longlines are used as the primary fishing gears to target sharks, with pelagic fish (e.g. Euthynnus spp., Rastrellinger spp.) used as bait. Surface and bottom longlines systematically vary in length, depth deployed, number of sets, number of hooks used, and soak times (Table 2). Gear types are typically associated with certain vessel types, and fishers–captain and crew—tend to exhibit preferences for specific gear types. Shark fishers also use gillnets and troll lines as secondary gears, to catch bait and opportunistically target other species, such as grouper, snapper, skipjack and mackerel tuna.\nTable 2. Characteristics of surface and bottom longlines.\nThe shark fishing vessels can be divided into two broad categories according to fishing behaviour: larger vessels (≥14 m) with higher horsepower (HP) engines spend more time at sea than smaller vessels (≤12m) (p<0.001), and reach fishing grounds outside of West Nusa Tenggara. These vessels primarily fish in southern Sumbawa and Sumba Islands, however, they also reach as far as eastern Flores, Timor Island, and the Java Sea (Fig 1). Larger, higher HP vessels also tend to employ surface longlines (p<0.001), and since they spend more time at sea, have a higher number of sets per trip than smaller vessels (p<0.001). Smaller vessels (≤12 m) with smaller engines tend to remain in waters around West Nusa Tenggara only, carrying out shorter fishing trips using bottom longlines (Table 3).\nTable 3. Characterisation of the different fishing vessels used to target sharks in Tanjung Luar.\nDuring the study period we recorded shark catch from a total of 595 fishing trips. We recorded 11,678 individual sharks, with an average total catch of 963 individuals per month (SD ± 434) and 19.7 individuals per trip (SD ± 15.6). 
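Returning to the two-step analysis described in the methods above, one way it might be set up with statsmodels is sketched below; the data frame, column names, predictors and random values are illustrative assumptions, and the study's actual model selection used stepwise comparison of AIC values rather than the single fits shown here.

```python
# Illustrative sketch of the two-step (hurdle-style) analysis, not the authors' code.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "gear": rng.choice(["surface", "bottom"], n),
    "n_hooks": rng.integers(25, 600, n),
    "engine_hp": rng.integers(20, 80, n),
    "threatened_caught": rng.integers(0, 2, n),        # step 1 response (0/1 per trip)
    "cpue_std_threatened": rng.uniform(0.05, 5.0, n),  # step 2 response (per 100 hooks per set)
})

# Step 1: binomial GLM for the probability of catching any threatened species on a trip
step1 = smf.glm("threatened_caught ~ gear + n_hooks + engine_hp",
                data=df, family=sm.families.Binomial()).fit()

# Step 2: linear model of log standardised CPUE, using only trips with positive catch
caught = df[df["threatened_caught"] == 1].copy()
caught["log_cpue"] = np.log(caught["cpue_std_threatened"])
step2 = smf.ols("log_cpue ~ gear + n_hooks + engine_hp", data=caught).fit()

print(step1.summary())
print(step2.summary())
```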
Standardised CPUE (per 100 hooks per set) ranged from 0.05 to 22.13 individuals, with an average of 0.96 and a mode of 0.20. Catch consisted of 42 different species from 18 families (Table 4). 22% of all landings were classified as threatened species (i.e. VU, EN, CR) according to the IUCN Red List of Threatened Species, and 73% were near threatened. Almost half (46.3%) of landings were regulated (i.e. CITES-listed) species. The most commonly caught species were silky shark (Carcharhinus falciformis), black tip shark (Carcharhinus limbatus) and scalloped hammerhead (Sphyrna lewini).\nTable 4. Sharks species landed in Tanjung Luar from January 2014 –December 2015 (VU = Vulnerable, EN = Endangered, NT = Near Threatened, LC = Least Concern, NE = Not Evaluated (VU and EN classified as ‘threatened’ in this study); II = CITES Appendix II, N = Not CITES-listed (II species classified as ‘regulated’ in this study)).\nMeasures of CPUE for the Tanjung Luar shark fishery vary spatially and temporally, and with several aspects of fishing effort including gear type, hook number, engine power and number of sets. An initial comparison of average catch per trip and catch per set of the two major gear types, surface longline and bottom longline, indicates that CPUE of surface longlines was significantly higher than that of bottom longlines (ANOVA, p<0.001). CPUE (individuals per set) was also positively associated with number of hooks, engine power, and number of sets (Fig 2). However, these relationships are for unstandardised CPUE i.e. without controlling for number of hooks.\nPlots of CPUE: Number of individuals per set (A) and number of individuals per 100 hooks per set (standardised CPUE) (B) by gear type (1), number of hooks (2), number of sets (3) and engine horsepower (4).\nWhen controlling for hook number using standardised CPUE (individuals per 100 hooks per set) the relationships were reversed, with standardised CPUE of bottom longlines significantly higher than that of surface longlines (ANOVA, p<0.001; Fig 2). A similar pattern was observed when comparing relationships between CPUE (individuals per set) and standardised CPUE for other measures of fishing effort, including numbers of hooks, engine power and number of sets (Fig 2). There was a positive relationship between unstandardised CPUE (individuals per set) and number of hooks, number of sets and engine power, but a negative relationship between CPUE and these fishing behaviour variables when CPUE was standardised by hook number (individuals per 100 hooks per set).\nThe best fit LM of standardised CPUE indicated that the most significant factors influencing standardised CPUE were fishing gear and number of hooks (p<0.001). Month, engine power, number of sets and fishing ground were also identified as significant variables (Table 5), although there was considerable covariance between these factors. Standardised CPUE was significantly lower in January, and decreased with higher numbers of hooks, despite a higher total catch per trip and set (Fig 2).\nTable 5. Analysis of variance for linear model of standardised CPUE (individuals per 100 hooks per set) data from Tanjung Luar; significant values (p<0.05) are given in bold.\nBest fit GLMs indicated that the most significant factors influencing the likelihood of catching threatened species were month (January and November were significantly lower: p<0.001 and p<0.05, respectively) and fishing ground (Other (i.e. fishing grounds outside of WNTP and ENTP) was significantly higher: p<0.01). 
Significant factors associated with standardised CPUE of threatened species were number of hooks (p<0.001), fishing ground (other: p<0.001, ENTP p<0.05), engine power (p<0.001) and trip length (p<0.001) (Table 6 and Fig 3).\nPlots of most significant factors affecting standardised CPUE (number of individuals per 100 hooks per set) of threatened species: a) hook number, b) fishing ground, c) engine power and d) trip length.\nAnalysis of variance for the best fit models of factors affecting: a) the likelihood of catching and the standardised CPUE of threatened species b) the likelihood of catching and the standardised CPUE of regulated species.\nThe most significant factors influencing the likelihood of catching regulated species were month (January was significantly lower: p<0.001), number of hooks (p<0.001) and engine power (<0.01). Significant factors associated with standardised CPUE of regulated species were number of hooks (p<0.001), fishing gear (<0.001), number of sets (p<0.001), engine power (p<0.01) and month (November and January: p<0.05) (Table 5 and Fig 4).\nPlots of most significant factors affecting standardised CPUE (number of individuals per 100 hooks per set) of regulated species: a) hook number, b) gear type, c) number of sets.\nAlthough Tanjung Luar’s targeted shark fishery is small in scale, considerable numbers of shark are landed, including a large proportion of threatened and regulated species. A key finding is that measures of CPUE, for all sharks and for threatened and regulated species, vary spatially and temporally, and with several aspects of fishing effort including gear type, hook number, engine power and number of sets. Moreover, the relationships between CPUE and fishing behaviour variables are different for different measures of CPUE (CPUE per trip, CPUE per set, CPUE per 100 hooks per set). This highlights the importance of using appropriate standardisation for meaningful comparisons of CPUE across different gears and vessel types, and has important implications for fisheries management.\nUnstandardised CPUE (individuals per set) was significantly lower in January. This is during the west monsoon season, which is characterised by high rainfall and adverse conditions at sea for fishing. Unstandardised CPUE was also significantly lower in West Nusa Tenggara Province (WNTP) than East Nusa Tenggara Province (ENTP) and other provinces, suggesting a lower abundance of sharks in this area. Engine power had a significant positive influence on unstandardised CPUE, and was also associated with longer trips and more sets, which was likely due to the ability of vessels with larger engines to travel longer distances, over longer time periods, and with higher numbers of sets, to favoured fishing grounds. Unstandardised CPUE was also significantly higher for surface longlines than bottom longlines. However, when standardising CPUE for the number of hooks (i.e. individuals per 100 hooks per set) this relationship was reversed. Bottom longlines exhibit a higher standardised CPUE, with negative relationships between catch per 100 hooks per set and number of hooks and frequency of sets. Vessels with moderate engine horsepower (50-59hp) also had the highest standardised CPUE. 
Since surface longlines systematically employ significantly more hooks than bottom longlines (400–600 vs 25–200 hooks), and tend to be associated with larger boats, longer trips and more sets, these findings suggest that although increasing fishing effort increased total catch for these gears and trips, there were diminishing returns of this increased effort above low to moderate levels.\nA large proportion of Tanjung Luar’s shark catch consisted of threatened (22%) and regulated species (46%). Month is a significant factor in explaining standardised CPUE of both threatened and regulated species, which could indicate seasonal variation in the abundance of these species in the Tanjung Luar fishing grounds, or seasonal impacts on CPUE due to poor weather conditions. Fishing ground was a significant factor in explaining the catch of threatened species but not the catch in regulated species. This may be due to differences in range, distribution and relative abundance of species within these groups. Threatened species make up a relatively small proportion of Tanjung Luar’s catch in comparison to regulated species, which make up almost half of the catch (46%). As such, regulated species may generally be more abundant and spatially diffuse than threatened species, and therefore caught more uniformly across fishing grounds. For example, regulated species catch is dominated by silky sharks (Carcharhinus falciformis), which are circum-tropical and coastal-pelagic, and exhibit limited site-fidelity or aggregation behaviour, while threatened species catch is dominated by scalloped hammerheads (Sphyrna lewini), which are known to aggregate in schools. These schools of scalloped hammerheads may be more restricted to specific aggregation sites outside of WNTP and ENTP waters, while silky sharks are found in uniform abundance throughout fishing grounds.\nAs with CPUE of all catch, there was a positive relationship between unstandardised CPUE (catch per set) of threatened and regulated species and number of hooks, but a significant negative relationship between standardised CPUE (catch per 100 hooks per set). This was likely due to diminishing returns of adding additional hooks, and indicates that the effort for threatened and regulated species was exceeding maximum sustainable yield effort, such that increases in effort (e.g. hook number) were leading to decreases in catch [28–30].\nDue to the profitability of the shark industry in Tanjung Luar, and limited adaptive capacity and willingness of shark fishers to move into other industries, it is necessary to identify practical and ethical management interventions that can improve the sustainability of the fishery whilst also mitigating the negative socio-economic consequences for coastal communities. Our findings indicate that spatiotemporal closures and restrictions on fishing effort could improve the overall catch per unit effort and sustainability of the Tanjung Luar shark fishery, and lead to positive conservation outcomes for priority species.\nSince the location of shark fishing grounds plays a significant role in determining the likelihood of catching threatened species and their associated CPUE, improved marine spatial planning, with the identification of marine protected areas (MPAs) that protect critical shark habitat and shark populations, could reduce catch of species of conservation concern [31–33] and increase abundance of sharks [34, 35]. 
Provincial governments in West Papua and West Nusa Tenggara have already established ‘shark sanctuary’ MPAs, which protect critical shark habitat and ban shark fishing within their boundaries [16, 36], and monitoring data indicates positive impacts of shark-specific closures on shark abundance [37, 38]. Strengthening Indonesia’s existing MPA network for shark conservation, such as making all MPAs no-take zones for sharks and expanding spatial protection to critical shark habitat, including aggregation sites or pupping and nursery grounds for species of conservation concern, could have considerable conservation benefits. It should be noted, however, that MPAs may only be effective for certain species, such as those with small ranges or site-fidelity . More research is required to identify critical shark habitat and life history stages. For Tanjung Luar these efforts could focus on better understanding scalloped hammerhead (Sphyrna lewini) aggregation sites. Well-targeted spatial closures for this species could significantly reduce catch of threatened species in this fishery.\nThe relationships between gear type, several aspects of fishing effort (i.e. hook number, engine power, number of sets, trip length), standardised CPUE of all shark species and standardised CPUE of threatened and regulated species suggest that there is an optimal effort that could increase overall CPUE of the fishery and significantly reduce fishing mortality of species of conservation concern. For example, our data suggest that CPUE peaks with low to intermediate trip lengths and gear sets, intermediate engine power and hook numbers of less than 75 per set longline. Although standardised CPUE of threatened and regulated species is also higher when fewer hooks are deployed, the catch per set and overall mortality is significantly lower. Regulations that control the number of hooks in combination with incentives for shark fishers to tightly manage the number of hooks they deploy could significantly reduce mortality of threatened and endangered species, maximise the overall CPUE of the fishery, and reduce operational costs for fishers, making shark fishing in Tanjung Luar more sustainable and more cost effective [39–41].\nAcknowledging that almost half of Tanjung Luar’s shark catch consists of CITES-listed species, developing measures that ensure both the sustainability of the fishery, and full traceability and control of onward trade, will be crucial for implementing CITES . The Indonesian government has demonstrated a strong commitment to regulating shark trade and implementing CITES [17–18], as demonstrated through several policy decisions to confer full and partial protection to CITES-listed shark and ray species (Marine Affairs and Fisheries Ministerial Decree No 4./KEPMEN-KP/2014, Regulation No. 48/PERMEN-KP/2016). This includes zero quotas/export bans for hammerhead and oceanic whitetip sharks. However, these export bans should be considered intermediate policy measures as monitoring systems and data availability are improved, and sustainable quotas are established. This will be challenging, as shark products are often traded in large volumes of fresh and/or preserved body parts, with high morphological similarity between products from regulated species and non-regulated species. To guarantee that trade is not detrimental to the survival of species, sustainable fisheries management will need to be complemented with species-specific trade quotas. 
This will require catch documentation systems which trace shark products from point of catch to point of export and rapid, low-cost species identification methods.\nAs baseline data on shark population health are limited, and there is no standardised, fisheries-independent system for monitoring long-term changes in shark populations, indirect bio-indicators (e.g. endo- and ectoparasites, [43–45]) could help to elucidate the impact of management measures on fisheries and populations of wild species. In the future, shark conservation and fisheries management could benefit from long-term monitoring of agreed indices of population abundance and health status.\nThese lessons may also apply to shark fisheries in other parts of the world. As sharks increasingly become the focus of global conservation efforts it should be acknowledged that species protection alone will not be enough to reduce mortality of priority species. More needs to be done to identify practical fisheries management measures that can reduce pressure on the most vulnerable species and populations, but also support sustainable use of species that are less susceptible to overfishing. Shark fishing forms an integral part of the livelihood strategies of many coastal communities [22, 23], and prohibiting catches will not necessarily lead to positive conservation outcomes [21, 46]. Management interventions must take into account local context and the motivations and well-being of fisher communities in order to be ethical, feasible and impactful.\nS1 Dataset. Data of landed sharks at Tanjung Luar auction that had been used for this study.\nS1 File. Questionnaires have been used to interview shark fishers, collector, traders, and processors.\nWe wish to acknowledge the support provided by fishers in Tanjung Luar for their great cooperation during fieldwork. We also thank I Made Dharma Aryawan, Muhsin, Abdul Kohar, and Abdurrafik for their assistance during field research, Benaya M Simeon, Peni Lestari, and Siska Agustina for helping with data processing, Ken Kassem for carefully reading the manuscript and providing useful inputs, and the anonymous reviewers for their constructive comments.\n3. Hutchings JA, Reynolds JD. Marine fish population collapses: consequences for recovery and extinction risk. AIBS Bulletin. 2004 Apr;54(4):297–309.\n4. Costello C, Ovando D, Clavelle T, Strauss CK, Hilborn R, Melnychuk MC, et al. Global fishery prospects under contrasting management regimes. Proceedings of the national academy of sciences. 2016 May 3;113(18):5125–9.\n5. Davidson LN, Krawchuk MA, Dulvy NK. Why have global shark and ray landings declined: improved management or overfishing?. Fish and Fisheries. 2016 Jun 1;17(2):438–58.\n6. Stevens JD, Bonfil R, Dulvy NK, Walker PA. The effects of fishing on sharks, rays, and chimaeras (chondrichthyans), and the implications for marine ecosystems. ICES Journal of Marine Science. 2000 Jun 1;57(3):476–94.\n9. Dent F, Clarke S. State of the global market for shark products. FAO Fisheries and Aquaculture Technical Paper (FAO) eng no. 590. 2015.\n12. Christensen J, Tull M, editors. Historical perspectives of fisheries exploitation in the Indo-Pacific. Springer Science & Business Media; 2014 Apr 1.\n13. Simpfendorfer CA, Heupel MR, White WT, Dulvy NK. The importance of research and public opinion to conservation management of sharks and rays: a synthesis. Marine and Freshwater Research. 2011 Jul 21;62(6):518–27.\n14. Lack M, Sant G. The future of sharks: a review of action and inaction. 
TRAFFIC International and the Pew Environment Group. 2011 Jan:44.\n15. Bräutigam A, Callow M, Campbell IR, Camhi MD, Cornish AS, Dulvy NK, et al. Global priorities for conserving sharks and rays: A 2015–2025 strategy. The Global Sharks and Rays Initiative; 2015. 27p.\n16. Satria A, Matsuda Y. Decentralization of fisheries management in Indonesia. Marine Policy. 2004 Sep 30;28(5):437–50.\n17. Dharmadi , Fahmi , Satria F. Fisheries management and conservation of sharks in Indonesia. African journal of marine science. 2015 Apr 3;37(2):249–58.\n20. Sembiring A, Pertiwi NP, Mahardini A, Wulandari R, Kurniasih EM, Kuncoro AW, Cahyani ND, Anggoro AW, Ulfa M, Madduppa H, Carpenter KE. DNA barcoding reveals targeted fisheries for endangered sharks in Indonesia. Fisheries Research. 2015 Apr 30;164:130–4.\n21. Clarke S. Re-examining the shark trade as a tool for conservation. SPC Fisheries Newsletter. 2014:49–56.\n22. Jaiteh VF, Loneragan NR, Warren C. The end of shark finning? Impacts of declining catches and fin demand on coastal community livelihoods. Marine Policy. 2017 Mar 24.\n24. Cohen D, Crabtree B. Qualitative research guidelines project. Robert Wood Johnson Foundation, Princeton. 2006 Available from: http://www.qualres.org/index.html Cited in August 2016.\n25. Skud BE. Manipulation of fixed gear and the effect on catch-per-unit effort. FAO Fisheries Report (FAO). 1984.\n26. Damalas D, Megalofonou P, Apostolopoulou M. Environmental, spatial, temporal and operational effects on swordfish (Xiphias gladius) catch rates of eastern Mediterranean Sea longline fisheries. Fisheries Research. 2007 Apr 30;84(2):233–46.\n27. Burnham KP, Anderson DR. Model selection and multimodel inference: a practical information-theoretic approach. Springer Science & Business Media; 2003 Dec 4.\n28. Schaefer MB. Some aspects of the dynamics of populations important to the management of the commercial marine fisheries. Inter-American Tropical Tuna Commission Bulletin. 1954;1(2):23–56.\n29. Fox WW Jr. An exponential surplus-yield model for optimizing exploited fish populations. Transactions of the American Fisheries Society. 1970 Jan 1;99(1):80–8.\n30. Purwanto P, Nugroho D, Suwarso S. Potential production of the five predominant small pelagic fish species groups in the Java Sea. Indonesian Fisheries Research Journal. 2014 Dec 1;20(2):59–67.\n31. Barker MJ, Schluessel V. Managing global shark fisheries: suggestions for prioritizing management strategies. Aquatic Conservation: Marine and Freshwater Ecosystems. 2005 Jul 1;15(4):325–47.\n34. Ward-Paige CA, Worm B. Global evaluation of shark sanctuaries. Global Environmental Change. 2017 Nov 30;47:174–89.\n35. Speed CW, Cappo M, Meekan MG. Evidence for rapid recovery of shark populations within a coral reef marine protected area. Biological Conservation. 2018 Apr 30;220:308–19.\n36. West Nusa Tenggara Provincial Government. 2017. [Management and zoning plan of Lunyuk Marine Protected Area]. Mataram: West Nusa Tenggara Provincial Government; 2017. Indonesian.\n37. Jaiteh VF, Lindfield SJ, Mangubhai S, Warren C, Fitzpatrick B, Loneragan NR. Higher abundance of marine predators and changes in fishers' behavior following spatial protection within the world's biggest shark fishery. Frontiers in Marine Science. 2016 Apr 7;3:43.\n39. Kumoru L. The shark longline fishery in Papua New Guinea. InReport prepared for Billfish and bycatch research group, at the 176th meeting of the standing committee on Tuna and Billfish, Mooloolaba, Australia, 9th-16th July 2003 2003 Jul.\n40. 
Cartamil D, Santana-Morales O, Escobedo-Olvera M, Kacev D, Castillo-Geniz L, Graham JB, Rubin RD, Sosa-Nishizaki O. The artisanal elasmobranch fishery of the Pacific coast of Baja California, Mexico. Fisheries Research. 2011 Mar 31;108(2):393–403.
42. Vincent AC, Sadovy de Mitcheson YJ, Fowler SL, Lieberman S. The role of CITES in the conservation of marine fishes subject to international trade. Fish and Fisheries. 2014 Dec 1;15(4):563–92.
43. Palm HW. Fish parasites as biological indicators in a changing world: can we monitor environmental impact and climate change? In: Progress in Parasitology. 2011 (pp. 223–250). Springer Berlin Heidelberg.
44. Palm HW, Yulianto I, Piatkowski U. Trypanorhynch assemblages indicate ecological and phylogenetical attributes of their elasmobranch final hosts. Fishes. 2017 Jun 17;2(2):8.
46. Booth H. Using the case of illegal manta ray trade in Indonesia to evaluate the impact of wildlife trade policy (Master Thesis, Imperial College London).
47. White WT, Last PR, Stevens JD, Yearsley GK. Economically important sharks & rays of Indonesia. Australian Centre for International Agricultural Research (ACIAR); 2006.", "answers": ["The relationships between catch per set and fishing behavior variables differ when comparing unstandardized CPUE and standardized CPUE."], "length": 6133, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "cd0ecda68ad8031330b971fbfdd3794916e815109f004d3b"} {"input": "How does the transition probability of the environment affect the learning rate in the static agent?", "context": "Paper Info

Title: Environmental variability and network structure determine the optimal plasticity mechanisms in embodied agents
Publish Date: Unknown
Author List: Sina Khajehabdollahi (from Department of Computer Science, University of Tübingen)

Figure

Figure 2: An outline of the network controlling the foraging agent. The sensor layer receives inputs at each time step (the ingredients of the nearest food), which are processed by the plastic layer in the same way as the static sensory network, Fig. 1. The output of that network is given as input to the motor network, along with the distance d and angle α to the nearest food, the current velocity v, and energy E of the agent. These signals are processed through two hidden layers to the final output of motor commands as the linear and angular acceleration of the agent.
Figure 4: The evolved parameters θ = (θ_1, ..., θ_8) of the plasticity rule for the reward prediction (a.) and the decision (b.) tasks, for a variety of parameters (p_tr = 0.01, d_e ∈ {0, 0.1, ..., 1}, and σ ∈ {0, 0.1, ..., 1}, in all 100 combinations). Despite the relatively small difference between the tasks, the evolved learning rules differ considerably. For visual guidance, the lines connect θs from the same run.
Figure 5: a. The trajectory of an agent (blue line) in the 2D environment. A well-trained agent will approach and consume food with positive values (green dots) and avoid negative food (red dots). b. The learning rate of the plastic sensory network η_p grows with the distance between environments d_e. c. 
and decreases with the frequency of environmental change. d. The fitness of an agent (measured as the total food consumed over its lifetime) increases over generations of the EA for both the scalar and binary readouts in the sensory network. e. The Pearson correlation coefficient of an evolved agent's weights with the ingredient value vector of the current environment (E_1 - blue, E_2 - red). In this example, the agent's weights are anti-correlated with its environment, which is not an issue for performance since the motor network can interpret the inverted signs of food.

Abstract

The evolutionary balance between innate and learned behaviors is highly intricate, and different organisms have found different solutions to this problem. We hypothesize that the emergence and exact form of learning behaviors is naturally connected with the statistics of environmental fluctuations and the tasks an organism needs to solve.
Here, we study how different aspects of simulated environments shape an evolved synaptic plasticity rule in static and moving artificial agents. We demonstrate that environmental fluctuation and uncertainty control the reliance of artificial organisms on plasticity. Interestingly, the form of the emerging plasticity rule is additionally determined by the details of the task the artificial organisms are aiming to solve.
Moreover, we show that coevolution between static connectivity and interacting plasticity mechanisms in distinct sub-networks changes the function and form of the emerging plasticity rules in embodied agents performing a foraging task.

One of the defining features of living organisms is their ability to adapt to their environment and incorporate new information to modify their behavior.
It is unclear how the ability to learn first evolved, but its utility appears evident. Natural environments are too complex for all the necessary information to be hardcoded genetically and, more importantly, they keep changing during an organism's lifetime in ways that cannot be anticipated. The link between learning and environmental uncertainty and fluctuation has been extensively demonstrated in both natural and artificial environments.
Nevertheless, the ability to learn does not come without costs. For the capacity to learn to be beneficial in evolutionary terms, a costly nurturing period is often required, a phenomenon observed in both biological and artificial organisms. Additionally, it has been shown that in some complex environments, hardcoded behaviors may be superior to learned ones given limits in the agent's lifetime and environmental uncertainty.
The theoretical investigation of the optimal balance between learned and innate behaviors in natural and artificial systems goes back several decades. However, it has recently also found a wide range of applications in applied AI systems. Most AI systems are trained for specific tasks, and have no need for modification after their training has been completed.
Still, technological advances and the necessity to solve broad families of tasks make discussions about life-like AI systems relevant to a wide range of potential application areas. Thus the idea of open-ended AI agents that can continually interact with and adapt to changing environments has become particularly appealing.
Many different approaches for introducing lifelong learning in artificial agents have been proposed. Some of them draw direct inspiration from actual biological systems. 
Among them, the most biologically plausible solution is to equip artificial neural networks with some local neural plasticity, similar to the large variety of synaptic plasticity mechanisms that perform the bulk of the learning in the brains of living organisms.\nThe artificial plasticity mechanisms can be optimized to modify the connectivity of the artificial neural networks toward solving a particular task. The optimization can use a variety of approaches, most commonly evolutionary computation. The idea of meta-learning or optimizing synaptic plasticity rules to perform specific functions has been recently established as an engineering tool that can compete with state-of-the-art machine learning algorithms on various complex tasks, e.g., Pedersen and Risi (2021).\nAdditionally, it can be used to reverse engineer actual plasticity mechanisms found in biological neural networks and uncover their functions. Here, we study the effect that different factors (environmental fluctuation and reliability, task complexity) have on the form of evolved functional reward-modulated plasticity rules.\nWe investigate the evolution of plasticity rules in static, single-layer simple networks. Then we increase the complexity by switching to moving agents performing a complex foraging task. In both cases, we study the impact of different environmental parameters on the form of the evolved plasticity mechanisms and the interaction of learned and static network connectivity.\nInterestingly, we find that different environmental conditions and different combinations of static and plastic connectivity have a very large impact on the resulting plasticity rules. We imagine an agent who must forage to survive in an environment presenting various types of complex food particles. Each food particle is composed of various amounts and combinations of N ingredients that can have positive (food) or negative (poison) values.\nThe value of a food particle is a weighted sum of its ingredients. To predict the reward value of a given resource, the agent must learn the values of these ingredients by interacting with the environment. The priors could be generated by genetic memory, but the exact values are subject to change. To introduce environmental variability, we stochastically change the values of the ingredients.\nMore precisely, we define two ingredient-value distributions E_1 and E_2 and switch between them, with probability p_tr for every time step. We control how (dis)similar the environments are by parametrically setting E_2 = (1 − 2d_e)E_1, with d_e ∈ [0, 1] serving as a distance proxy for the environments; when d_e = 0, the environment remains unchanged, and when d_e = 1 the value of each ingredient fully reverses when the environmental transition happens.\nFor simplicity, we take values of the ingredients in E_1 equally spaced between -1 and 1 (for the visualization, see Fig. ). The static agent receives passively presented food as a vector of ingredients and can assess its compound value using the linear summation of its sensors with the (learned or evolved) weights, see Fig. .\nThe network consists of N sensory neurons that are projecting to a single post-synaptic neuron. At each time step, an input X_t = (x_1, . . . , x_N) is presented, where the value x_i, i ∈ {1, . . . , N}, represents the quantity of the ingredient i.
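For concreteness, here is a minimal sketch of the switching environment described above (the paper's own code is not included in this text, and all function names are illustrative): E_1 holds N ingredient values equally spaced in [-1, 1], E_2 = (1 - 2 d_e) E_1, and the active environment flips with probability p_tr at every time step.

```python
import numpy as np

def make_environments(n_ingredients=8, d_e=0.5):
    """E_1: ingredient values equally spaced in [-1, 1]; E_2 = (1 - 2*d_e) * E_1."""
    E1 = np.linspace(-1.0, 1.0, n_ingredients)
    E2 = (1.0 - 2.0 * d_e) * E1
    return E1, E2

def switch_environment(state, p_tr, rng):
    """Two-state Markov process: flip the active environment with probability p_tr per step."""
    return 1 - state if rng.random() < p_tr else state

# Tiny usage example: track which ingredient-value vector is in effect over a few steps.
rng = np.random.default_rng(0)
E1, E2 = make_environments(n_ingredients=8, d_e=0.5)
state = 0
for t in range(5):
    state = switch_environment(state, p_tr=0.01, rng=rng)
    w_c = E1 if state == 0 else E2   # the ingredient values currently determining food rewards
```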
We draw x_i independently from a uniform distribution on the [0, 1] interval (x_i ∼ U(0, 1)).\nThe value of each ingredient w^c_i is determined by the environment (E_1 or E_2). The postsynaptic neuron outputs a prediction of the food X_t value as y_t = g(W X_t^T). Throughout the paper, g will be either the identity function, in which case the prediction neuron is linear, or a step-function; however, it could be any other nonlinearity, such as a sigmoid or ReLU.\nAfter outputting the prediction, the neuron receives feedback in the form of the real value of the input R_t. The real value is computed as R_t = W_c X_t^T + ξ, where W_c = (w^c_1, . . . , w^c_N) is the actual value of the ingredients, and ξ is a term summarizing the noise of the reward and sensing system, ξ ∼ N(0, σ).\nFigure 1: An outline of the static agent's network. The sensor layer receives inputs representing the quantity of each ingredient of a given food at each time step. The agent computes the prediction of the food's value y_t and is then given the true value R_t; it finally uses this information in the plasticity rule to update the weight matrix.\nFor the evolutionary adjustment of the agent's parameters, the loss of the static agent is the sum of the mean squared errors (MSE) between its prediction y_t and the reward R_t over the lifetime of the agent. The agent's initial weights are set to the average of the two ingredient value distributions, which is the optimal initial value for the case of symmetric switching of environments that we consider here.\nAs a next step, we incorporate the sensory network of static agents into embodied agents that can move around in an environment scattered with food. To this end, we merge the static agent's network with a second, non-plastic motor network that is responsible for controlling the motion of the agent in the environment.\nSpecifically, the original plastic network now provides the agent with information about the value of the nearest food. The embodied agent has additional sensors for the distance from the nearest food, the angle between the current velocity and the nearest food direction, its own velocity, and its own energy level (sum of consumed food values).\nThese inputs are processed by two hidden layers (of 30 and 15 neurons) with tanh activation. The network's outputs are angular and linear acceleration, Fig. . The embodied agents spawn in a 2D space with periodic boundary conditions along with a number of food particles that are selected such that the mean of the food value distribution is ∼ 0. An agent can eat food by approaching it sufficiently closely, and each time a food particle is eaten, it is re-spawned with the same value somewhere randomly on the grid (following the setup of ).\nAfter 5000 time steps, the cumulative reward of the agent (the sum of the values of all the food it consumed) is taken as its fitness.
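A correspondingly minimal sketch of the static agent's lifetime evaluation under the definitions above (y_t = g(W X_t^T), R_t = W_c X_t^T + ξ, summed squared error as the loss). The optional `update` hook is a stand-in for the plasticity rule introduced further below; all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def static_agent_lifetime(W, E1, E2, p_tr=0.01, sigma=0.1, lifetime=1000,
                          g=lambda z: z, update=None, seed=0):
    """Return the summed squared error of a static (non-moving) agent over its lifetime."""
    rng = np.random.default_rng(seed)
    state, loss = 0, 0.0
    W = W.copy()
    for t in range(lifetime):
        if rng.random() < p_tr:                  # two-state Markov switching of the environment
            state = 1 - state
        w_c = E1 if state == 0 else E2           # current ingredient values
        x = rng.uniform(0.0, 1.0, size=W.shape)  # ingredient quantities, x_i ~ U(0, 1)
        y = g(W @ x)                             # prediction y_t = g(W X_t^T)
        R = w_c @ x + rng.normal(0.0, sigma)     # reward R_t = W_c X_t^T + xi
        loss += (y - R) ** 2                     # summed squared error over the lifetime
        if update is not None:                   # placeholder for the plasticity rule (see below)
            W = update(W, x, y, R)
    return loss

# Initial weights are set to the average of the two ingredient-value distributions.
E1 = np.linspace(-1.0, 1.0, 8)
E2 = (1.0 - 2.0 * 0.5) * E1
W0 = 0.5 * (E1 + E2)
print(static_agent_lifetime(W0, E1, E2))
```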
During the evolutionary optimization, the parameters for both the motor network (connections) and plastic network (learning rule parameters) are co-evolved, and so agents must simultaneously learn to move and discriminate good/bad food.\nReward-modulated plasticity is one of the most promising explanations for biological credit assignment. In our network, the plasticity rule that updates the weights of the linear sensor network is a reward-modulated rule which is parameterized as a linear combination of the input, the output, and the reward at each time step, with term amplitudes θ_1, . . . , θ_8 (a concrete sketch is given below).\nAdditionally, after each plasticity step, the weights are normalized by mean subtraction, an important step for the stabilization of Hebbian-like plasticity rules. We use a genetic algorithm to optimize the learning rate η_p and amplitudes of different terms θ = (θ_1, . . . , θ_8). The successful plasticity rule after many food presentations must converge to a weight vector that predicts the correct food values (or allows the agent to correctly decide whether to eat a food or avoid it).\nTo have comparable results, we divide θ = (θ_1, . . . , θ_8) by θ_max. We then multiply the learning rate η_p with θ_max to maintain the rule's evolved form unchanged, η_p^norm = η_p · θ_max. In the following, we always use normalized η_p and θ, omitting the superscript norm. To evolve the plasticity rule and the moving agents' motor networks, we use a simple genetic algorithm with elitism.\nThe agents' parameters are initialized at random (drawn from a Gaussian distribution), then the sensory network is trained by the plasticity rule and finally, the agents are evaluated. After each generation, the best-performing agents (top 10 % of the population size) are selected and copied into the next generation.\nThe remaining 90 % of the generation is repopulated with mutated copies of the best-performing agents. We mutate agents by adding independent Gaussian noise (σ = 0.1) to their parameters. To start with, we consider a static agent whose goal is to identify the value of presented food correctly. The static reward-prediction network quickly evolves the parameters of the learning rule, successfully solving the prediction task.\nWe first look at the evolved learning rate η_p, which determines how fast (if at all) the network's weight vector is updated during the lifetime of the agents. We identify three factors that control the learning rate parameter the EA converges to: the distance between the environments, the noisiness of the reward, and the rate of environmental transition.\nThe first natural factor is the distance d_e between the two environments, with a larger distance requiring a higher learning rate, Fig. . This is an expected result since the convergence time to the \"correct\" weights is highly dependent on the initial conditions. If an agent is born at a point very close to optimality, which naturally happens if the environments are similar, the distance it needs to traverse on the fitness landscape is small.\nTherefore it can afford to have a small learning rate, which leads to a more stable convergence and is not affected by noise. A second parameter that impacts the learning rate is the variance of the rewards.
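The text does not reproduce the full expression of the rule, so the sketch below is an assumption: it uses one common parameterization (a linear combination of the products of input, output, and reward, up to first order in each), with the eight terms ordered so that the later-reported result θ_3 → 1, θ_5 → −1 reduces to ∆W_t = η_p X_t (R_t − y_t). The mean subtraction, the θ_max normalization, and the elitist selection with Gaussian mutation follow the description above; θ_max is taken here to be the largest absolute θ component, which is also an assumption.

```python
import numpy as np

def plasticity_update(W, x, y, R, theta, eta_p):
    """Reward-modulated update: a linear combination of products of input x, output y, reward R.
    The ordering of the eight terms is an assumption (chosen so that theta_3 = 1, theta_5 = -1
    yields eta_p * x * (R - y)); the original equation is not shown in this text."""
    t1, t2, t3, t4, t5, t6, t7, t8 = theta
    dW = eta_p * (R * (t1 * x * y + t2 * y + t3 * x + t4)
                  + (t5 * x * y + t6 * y + t7 * x + t8))
    W = W + dW
    return W - W.mean()                    # normalize by mean subtraction after every step

def normalize_rule(theta, eta_p):
    """Divide theta by theta_max and fold theta_max into the learning rate, as described above."""
    theta = np.asarray(theta, dtype=float)
    theta_max = np.max(np.abs(theta))      # assumption: theta_max = largest absolute component
    return theta / theta_max, eta_p * theta_max

def next_generation(population, fitness, elite_frac=0.10, mut_sigma=0.1, rng=None):
    """Genetic algorithm with elitism: copy the top 10%, refill with Gaussian-mutated copies.
    Assumes higher fitness is better (for the loss-based static task, pass negated losses)."""
    rng = rng or np.random.default_rng()
    order = np.argsort(fitness)[::-1]                       # best first
    n_elite = max(1, int(elite_frac * len(population)))
    elite = [population[i].copy() for i in order[:n_elite]]
    parents = rng.integers(0, n_elite, size=len(population) - n_elite)
    children = [elite[p] + rng.normal(0.0, mut_sigma, size=elite[p].shape) for p in parents]
    return elite + children
```

Wiring `plasticity_update` into the earlier lifetime sketch only requires fixing θ and η_p, e.g. `update=lambda W, x, y, R: plasticity_update(W, x, y, R, theta, eta_p)`.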
The reward an agent receives for the plasticity step contains a noise term ξ that is drawn from a zero mean Gaussian distribution with standard deviation σ.\nThis parameter controls the unreliability of the agent's sensory system, i.e., higher σ means that the information the agent gets about the value of the foods it consumes cannot be fully trusted to reflect the actual value of the foods. As σ increases, the learning rate η_p decreases, which means that the more unreliable an environment becomes, the less an agent relies on plasticity to update its weights, Fig. .\nIndeed for some combinations of relatively small distance d_e and high reward variance σ, the EA converges to a learning rate of η_p ≈ 0. This means that the agent opts to have no adaptation during its lifetime and remain at the mean of the two environments. It is an optimal solution when the expected loss due to ignoring the environmental transitions is, on average, lower than the loss the plastic network will incur by learning via the (often misleading because of the high σ) environmental cues.\nA final factor that affects the learning rate the EA will converge to is the frequency of environmental change during an agent's lifetime. Since the environmental change is modeled as a simple, two-state Markov process (Fig. ), the control parameter is the transition probability p_tr. When keeping everything else the same, the learning rate rapidly rises as we increase the transition probability from 0, and after reaching a peak, it begins to decline slowly, eventually reaching zero (Fig. ).\nThis means that when environmental transition is very rare, agents opt for a very low learning rate, allowing a slow and stable convergence to an environment-appropriate weight vector that leads to very low losses while the agent remains in that environment. As the rate of environmental transition increases, faster learning is required to speed up convergence in order to exploit the (comparatively shorter) stays in each environment.\nFinally, as the environmental transition becomes too fast, the agents opt for slower or even no learning, which keeps them near the middle of the two environments, ensuring that the average loss of the two environments is minimal (Fig. ). The form of the evolved learning rule depends on the task: Decision vs. Prediction. The plasticity parameters θ = (θ_1, . . . , θ_8) for the reward-prediction task converge on approximately the same point, regardless of the environmental parameters (Fig. ).\nIn particular, θ_3 → 1, θ_5 → −1, θ_i → 0 for all other i, and thus the learning rule converges to ∆W_t = η_p X_t (R_t − y_t). Since by definition y_t = g(W_t X_t^T) = W_t X_t^T (g(x) = x in this experiment) and R_t = W_c X_t^T + ξ, we get ∆W_t = η_p X_t ((W_c − W_t) X_t^T + ξ). Thus the distribution of ∆W_t converges to a distribution with mean 0 and variance depending on η_p and σ, and W converges to W_c.\nSo this learning rule will match the agent's weight vector with the vector of ingredient values in the environment. We examine the robustness of the learning rule the EA discovers by considering a slight modification of our task.
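As a quick numerical check of the reconstructed prediction-task rule ∆W_t = η_p X_t (R_t − y_t), the toy loop below shows the weights drifting toward the true ingredient values W_c. The constants are arbitrary, and the mean-subtraction step is omitted here (with E_1 symmetric around zero it would not move the fixed point).

```python
import numpy as np

rng = np.random.default_rng(0)
N, eta_p, sigma = 8, 0.05, 0.1
W_c = np.linspace(-1.0, 1.0, N)            # true ingredient values of the current environment
W = np.zeros(N)                            # start away from the optimum

for t in range(20000):
    x = rng.uniform(0.0, 1.0, N)           # ingredient quantities
    y = W @ x                              # linear prediction, g(x) = x
    R = W_c @ x + rng.normal(0.0, sigma)   # noisy reward
    W += eta_p * x * (R - y)               # reconstructed rule: Delta W = eta_p * X * (R - y)

print(np.round(W - W_c, 2))                # residuals close to zero: W has approached W_c
```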
Instead of predicting the expected food value, the agent now needs to decide whether to eat the presented food or not.\nThis is done by introducing a step-function nonlinearity (g(x) = 1 if x ≥ 1 and 0 otherwise). Then the output y_t is computed as y_t = g(W_t X_t^T), with g the step function. Instead of the MSE loss between prediction and actual value, the fitness of the agent is now defined as the sum of the food values it chose to consume (by giving y_t = 1). Besides these two changes, the setup of the experiments remains exactly the same.\nThe qualitative relation between η_p and the environmental parameters d_e, σ and p_tr is preserved in the changed experiment. However, the resulting learning rule is significantly different (Fig. ). In both cases, the evolved rule has the form ∆W_t = η_p X_t [α_y R_t + β_y].\nThus, ∆W_t is positive or negative depending on whether the reward R_t is above or below a threshold (γ = −β_y/α_y) that depends on the output decision of the network (y_t = 0 or 1). Both learning rules (for the reward-prediction and decision tasks) have a clear Hebbian form (coordination of pre- and post-synaptic activity) and use the incoming reward signal as a threshold.\nThese similarities indicate some common organizing principles of reward-modulated learning rules, but their significant differences highlight the sensitivity of the optimization process to task details. We now turn to the moving embodied agents in the 2D environment. To optimize these agents, both the motor network's connections and the sensory network's plasticity parameters evolve simultaneously.\nSince the motor network is initially random and the agent has to move to find food, the number of interactions an agent experiences in its lifetime can be small, slowing down the learning. However, having the larger motor network also has benefits for evolution because it allows the output of the plastic network to be read out and transformed in different ways, resulting in a broad set of solutions.\nThe agents can solve the task effectively by evolving a functional motor network and a plasticity rule that converges to interpretable weights (Fig. ).\nAfter ∼ 100 evolutionary steps (Fig. ), the agents can learn the ingredient value distribution using the plastic network and reliably move towards foods with positive values while avoiding the ones with negative values. We compare the dependence of the moving and the static agents on the parameters of the environment: d_e and the state transition probability p_tr.\nAt first, in order to simplify the experiment, we set the transition probability to 0, but fixed the initial weights to be the average of E_1 and E_2, while the real state is E_2.
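The decision variant can be sketched in the same spirit, assuming the binarized output and the general form ∆W_t = η_p X_t [α_y R_t + β_y] quoted above. The evolved values of α_y and β_y are not given in this text, so the numbers below are placeholders.

```python
import numpy as np

def decision_step(W, x, R, alpha, beta, eta_p, threshold=1.0):
    """One decision and one plasticity update of the binary (step-function) agent.
    alpha and beta map the binary output y to the coefficients alpha_y and beta_y."""
    y = 1 if W @ x >= threshold else 0          # step-function output, as described in the text
    dW = eta_p * x * (alpha[y] * R + beta[y])   # sign flips when R crosses -beta_y / alpha_y
    W = W + dW
    return W - W.mean(), y                      # mean subtraction, as for the prediction task

# Placeholder coefficients only; the evolved values are not reported in this excerpt.
alpha = {0: 1.0, 1: 1.0}
beta = {0: -0.5, 1: 0.5}
W = np.zeros(8)
x = np.random.default_rng(1).uniform(0.0, 1.0, 8)
W, y = decision_step(W, x, R=0.3, alpha=alpha, beta=beta, eta_p=0.05)
```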
In this experiment, the distance between states d_e indicates twice the distance between the agent's initial weights and the optimal weights (the environment's ingredient values) since the agent is initialized at the mean of the two environment distributions.\nSame as for the static agent, the learning rate increases with the distance d_e (Fig. ). Then, we examine the effect of the environmental transition probability p_tr on the evolved learning rate η_p. In order for an agent to get sufficient exposure to each environment, we scale down the probability p_tr from the equivalent experiment for the static agents.\nWe find that as the probability of transition increases, the evolved learning rate η_p decreases (Fig. ). This fits with the larger trend for the static agent, although there is a clear difference when it comes to the increase for very small transition probabilities that was clearly identifiable in the static but not the moving agents.\nThis could be due to much sparser data and possibly the insufficiently long lifetime of the moving agent (the necessity of scaling makes direct comparisons difficult). Nevertheless, overall we see that the associations observed in the static agents between environmental distance d_e and transition probability p_tr and the evolved learning rate η_p are largely maintained in the moving agents.\nStill, more data would be needed to make any conclusive assertions about the exact effect of these environmental parameters on the emerging plasticity mechanisms. A crucial difference between the static and the moving agents is the function the plasticity has to perform. While in the static agents, the plasticity has to effectively identify the exact value distribution of the environment in order to produce accurate predictions, in the embodied agents, the plasticity has to merely produce a representation of the environment that the motor network can evolve to interpret adequately enough to make decisions about which food to consume.\nTo illustrate the difference, we plot the Pearson correlation coefficient between an agent's weights and the ingredient values of the environment it is moving in (Fig. ). We use the correlation instead of the MSE loss (which we used for the static agents in Fig. ) because the amplitude of the vector varies a lot for different agents and meaningful conclusions cannot be drawn from the MSE loss.\nFigure : The evolved parameters of the moving agents' plasticity rule for the identity (g(x) = x) (a.) and the step-function (Eq. 4) (b.) sensory networks (the environmental parameters here are d_e ∈ [0, 1], σ = 0 and p_tr = 0.001). The step-function (binary output) network evolved a more structured plasticity rule (e.g., θ_3 > 0 for all realizations) than the linear network. Moreover, the learned weights for the identity network (c.) have higher variance and correlate significantly less with the environment's ingredient distribution compared to the learned weights for the thresholded network (d.).
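The correlation measure itself is a one-liner with NumPy; the anti-correlated example below mirrors the situation described in the Figure 5 caption, where inverted weights are not a problem for the foraging agent.

```python
import numpy as np

def weight_environment_correlation(W, w_c):
    """Pearson correlation between an agent's learned weights and the ingredient values."""
    return np.corrcoef(W, w_c)[0, 1]

# A perfectly anti-correlated agent can still forage well, because the motor network
# can learn to interpret the inverted sign of the sensory output.
W_learned = np.array([1.0, 0.5, -0.5, -1.0])
w_c = np.array([-1.0, -0.5, 0.5, 1.0])
print(weight_environment_correlation(W_learned, w_c))   # ≈ -1.0
```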
This means that the output of the sensory network will have the opposite sign from the actual food value.\nWhile in the static network, this would lead to very bad predictions and high loss, in the foraging task, these agents perform exactly as well as the ones where the weights and ingredients values are positively correlated, since the motor network can simply learn to move towards food for which it gets a negative instead of a positive sensory input.\nThis additional step of the output of the plastic network going through the motor network before producing any behavior has a strong effect on the plasticity rules that the embodied agents evolve. Specifically, if we look at the emerging rules the top performing agents have evolved (Fig. ), it becomes clear that, unlike the very well-structured rules of the static agents (Fig. ), there is now virtually no discernible pattern or structure.\nThe difference becomes even clearer if we look at the learned weights (at the end of a simulation) of the best-performing agents (Fig. ). While there is some correlation with the environment's ingredient value distribution, the variance is very large, and they do not seem to converge on the \"correct\" values in any way.\nThis is to some extent expected since, unlike the static agents where the network's output has to be exactly correct, driving the evolution of rules that converge to the precise environmental distribution, in the embodied networks, the bulk of the processing is done by the motor network which can evolve to interpret the scalar value of the sensory network's output in a variety of ways.\nThus, as long as the sensory network's plasticity rule co-evolves with the motor network, any plasticity rule that learns to produce consistent information about the value of encountered food can potentially be selected. To further test this assumption, we introduce a bottleneck of information propagation between the sensory and motor networks by using a step-function nonlinearity on the output of the sensory network (Eq.\n4). Similarly to the decision task of the static network, the output of the sensory network now becomes binary. This effectively reduces the flow of information from the sensory to the motor network, forcing the sensory network to consistently decide whether food should be consumed (with the caveat that the motor network can still interpret the binary sign in either of two ways, either consuming food marked with 1 or the ones marked with 0 by the sensory network).\nThe agents perform equally well in this variation of the task as before (Fig. ), but now, the evolved plasticity rules seem to be more structured (Fig. ). Moreover, the variance of the learned weights in the bestperforming agents is significantly reduced (Fig. ), which indicates that the bottleneck in the sensory network is in-creasing selection pressure for rules that learn the environment's food distribution accurately.\nWe find that different sources of variability have a strong impact on the extent to which evolving agents will develop neuronal plasticity mechanisms for adapting to their environment. A diverse environment, a reliable sensory system, and a rate of environmental change that is neither too large nor too small are necessary conditions for an agent to be able to effectively adapt via synaptic plasticity.\nAdditionally, we find that minor variations of the task an agent has to solve or the parametrization of the network can give rise to significantly different plasticity rules. 
Our results partially extend to embodied artificial agents performing a foraging task. We show that environmental variability also pushes the development of plasticity in such agents.\nStill, in contrast to the static agents, we find that the interaction of a static motor network with a plastic sensory network gives rise to a much greater variety of well-functioning learning rules. We propose a potential cause of this degeneracy; as the relatively complex motor network is allowed to read out and process the outputs from the plastic network, any consistent information coming out of these outputs can be potentially interpreted in a behaviorally useful way.\nReducing the information the motor network can extract from the sensory system significantly limits learning rule variability. Our findings on the effect of environmental variability concur with the findings of previous studies that have identified the constraints that environmental variability places on the evolutionary viability of learning behaviors.\nWe extend these findings in a mechanistic model which uses a biologically plausible learning mechanism (synaptic plasticity). We show how a simple evolutionary algorithm can optimize the different parameters of a simple reward-modulated plasticity rule for solving simple prediction and decision tasks.\nReward-modulated plasticity has been extensively studied as a plausible mechanism for credit assignment in the brain and has found several applications in artificial intelligence and robotics tasks. Here, we demonstrate how such rules can be very well-tuned to take into account different environmental parameters and produce optimal behavior in simple systems.\nAdditionally, we demonstrate how the co-evolution of plasticity and static functional connectivity in different sub-networks fundamentally changes the evolutionary pressures on the resulting plasticity rules, allowing for greater diversity in the form of the learning rule and the resulting learned connectivity.\nSeveral studies have demonstrated how, in biological networks, synaptic plasticity heavily interacts with and is driven by network topology. Moreover, it has been recently demonstrated that biological plasticity mechanisms are highly redundant in the sense that any observed neural connectivity or recorded activity can be achieved with a variety of distinct, unrelated learning rules.\nThis observed redundancy of learning rules in biological settings complements our results and suggests that the function of plasticity rules cannot be studied independently of the connectivity and topology of the networks they are acting on. The optimization of functional plasticity in neural networks is a promising research direction both as a means to understand biological learning processes and as a tool for building more autonomous artificial systems.\nOur results suggest that reward-modulated plasticity is highly adaptable to different environments and can be incorporated into larger systems that solve complex tasks. This work studies a simplified toy model of neural network learning in stochastic environments. Future work could build on this basic framework to examine more complex reward distributions and sources of environmental variability.\nMoreover, a greater degree of biological realism could be added by studying more plausible network architectures (multiple plastic layers, recurrent and feedback connections) and more sophisticated plasticity rule parametrizations.
Additionally, our foraging simulations were constrained by limited computational resources and were far from exhaustive.\nFurther experiments can investigate environments with different constraints, food distributions, multiple seasons, more complex motor control systems and interactions of those systems with different sensory networks as well as the inclusion of plasticity on the motor parts of the artificial organisms.", "answers": ["As the transition probability increases, the learning rate initially rises and then declines."], "length": 5346, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "7ae0ad8d4ded2dee79251ff4f951ecfcabad31d8b4f896ae"} {"input": "What did Mary tell the disciples?", "context": "A Homily from Easter Sunday, 2017.\nEarly on the first day of the week, while it was still dark, Mary Magdalene came to the tomb and saw that the stone had been removed from the tomb. But Mary stood weeping outside the tomb. As she wept, she bent over to look[a] into the tomb; and she saw two angels in white, sitting where the body of Jesus had been lying, one at the head and the other at the feet. They said to her, “Woman, why are you weeping?” She said to them, “They have taken away my Lord, and I do not know where they have laid him.” When she had said this, she turned around and saw Jesus standing there, but she did not know that it was Jesus. Jesus said to her, “Woman, why are you weeping? Whom are you looking for?” Supposing him to be the gardener, she said to him, “Sir, if you have carried him away, tell me where you have laid him, and I will take him away.” Jesus said to her, “Mary!” She turned and said to him in Hebrew,[b] “Rabbouni!” (which means Teacher). Jesus said to her, “Do not hold on to me, because I have not yet ascended to the Father. But go to my brothers and say to them, ‘I am ascending to my Father and your Father, to my God and your God.’” Mary Magdalene went and announced to the disciples, “I have seen the Lord”; and she told them that he had said these things to her.\nEarly in the morning, while it was still dark, Mary wept in the throes of grief. Early in the morning, while it was still dark, Mary dragged herself out of bed after a sleepless night and walked to the tomb in a kind of trance. Early in the morning, while it was still dark, Mary cried—scared, confused, alone. Early in the morning, while it was still dark, Mary thought that the powers of death had the last Word. Early in the morning, while it was still dark, Mary heard a voice in the darkness calling her name—Mary.\nThroughout this Lenten season, we’ve examined the ways that the powers and principalities hold us captive—how they push us towards securing our own survival, dominating others, using God for our own agenda. We’ve seen how in Jesus’ ministry, he’s constantly in resistance mode—exposing the powers for what they really are and envisioning an alternative way of living in the world. He describes this way as “the kingdom of God,” the living water we drink so we never thirst again, the light of the world. Jesus invites those who follow him into similar acts of resistance—to free us from the power money has on us by giving it away, to choose to see ourselves as Jesus sees us, resisting the shame that says I’m not enough, to practice Sabbath that contradicts productivity, to untie the grave clothes of someone whose hands and feet are still tied in the trappings of death.
All of the times he just wouldn’t shut up, all of the crowds he attracted because he actually noticed those who were normally ignored, the powers finally said enough is enough and put an end to his resistance the only way they could guarantee silence and division—by nailing him to a tree.\nJust then, she turned and saw a man the shadow of a man behind her; a man she assumed was the gardener, his face unfamiliar in the darkness. He repeated the question—“Woman, why are you crying?” Thinking that perhaps he knew what happened or worse, that he was a culprit, she begged, “Sir if you have carried him away, tell me where you have put him and I will get him.” But Jesus interrupted her pleading, interrupted her desperation, and called her by name from the darkness, Mary.\nMary. He calls her name. Her name. The name that captures the particularity of her life. To the gardener, she would just be the crying woman. At other points in her life, she was the possessed woman, the woman who wasn’t enough, the woman on the outside of the group. Never nameless—but still unnamed. Never not Mary, but still, not known.\nEarly in the morning, while it was still dark, God defeated the powers and principalities in the ultimate act of resistance—resurrection. The grave could not contain the Lord. Even death wasn’t enough.\nIn the resurrection, God defeats the powers of death and shows that it’s God who has the final Word. Nothing, not even death, can keep us from being fully known by God. The powers try to have the final say on our names, our identities, the markers by which we measure ourselves, the systems that hold people captive or keep people in oppression. But Jesus calls us out of the darkness by name.\nOn this Easter Sunday, we hear our Risen Lord calling our names from the darkness—Jesus, the resurrected one, the name above all names, the great I am, the Prince of Peace, the alpha and omega, the light of the world. The risen Lord has spoken.\nThis is the name unto which you were baptized. As you come forward and mark the sign of the cross on your forehead today, hear Jesus speaking your name from the darkness and drawing you into the light.\nFrom our worship service on the fifth Sunday of Lent, April 2, 2017.\n“Is the Lord really with us or not?” “Is the Lord really with us or not?” Why did you bring us all the way from Egypt to let us die of thirst in this desert? At least in Egypt, we had water. At least in Egypt, we weren’t so thirsty. At least in Egypt, we knew what tomorrow would hold. At least in Egypt, we weren’t so thirsty.\nBut no, that’s not the story they give us. They are hard on their ancestors. They tell how it is. The elders who sat and wrote down these stories understood something about our bodies, who we are and how we work. After all the generations these stories passed through, they tell the truth about how quickly we forget, about how quickly we complain, about how quickly we grow thirsty, about how much we need water.\nIt doesn’t take long, does it. By the end of this sermon, I will no doubt feel thirsty, not from walking on hard dusty ground in the heat of the day, but just from speaking with you. Most of us wake up in the morning needing a drink. Our bodies depend on water. We cannot live without it. Thirst, then, doesn’t happen only one time. When the Israelites panicked that they had no water, they weren’t only thinking of the present moment. They knew what was coming! We need water to live! Without water, we will die! 
Even if we have water for today, we will need water again tomorrow! We can drink until we are satisfied, only to know that we will eventually be thirsty for more.\nThe gospel of John tells a story about a woman who gave up on this question all together. She moved beyond wondering if the Lord was really with her, so confident God had forgotten her that she gave up wondering at all. Born a Samaritan into a world that valued other bodies as better than her body: male bodies, Jewish bodies, even married bodies. Even after encountering Jesus, she still leaves their conversation without a name, numbered as one of many, simply called, “Samaritan woman.” She too, was thirsty. Most believe that her shame led her to drink water in the heat of the day, when no one else would be at the rocky well, when she could get a drink alone, without experiencing the stigma and stares of others. When she came to get a drink, Jesus was also at the well, thirsty himself and in need of rest and water from the long journey through Samaria.\nThe Israelites complaint for water sends Moses to the only one who can satisfy, the only one who can meet this need. Moses turns to God, “What should I do with these people? How can I satisfy their thirst? I’ve looked around, I’ve checked far and wide, turned the house upsidedown, looked under the seats of the car, at the bottle of every bottle, I’ve even looked for dew on the ground and under the lids of jars and there is no water to be found. Where do we go for water? Is the Lord really with us or not?\nThe Israelites who wrote down this story and allowed the ancestors to look like desperate complainers who doubted God and tested God, they were onto something. They knew that we are thirsty people. Jesus knew also. All who are thirsty, come! All who believe in me, drink this living water! We are desperate to feel God’s presence, to be bathed in the water of the Spirit, to know that this is not all there is, to feel a sense of belonging to the One who is greater than I. We can only make it so long in the desert, so long wandering from one trial to the next, without a drink.\nAnd yet, Jesus also says, “Blessed are those who hunger and thirst for righteousness, for they will be filled. Notice with me: not blessed are those who are righteous, but those who thirst for righteousness. Not blessed are those who are righteous, but those who thirst for righteousness. Blessed are those who thirst for relationship with God, to know God, to see God.\nI wonder, “Does Jesus want us to keep wanting?” Does Jesus want us to keep thirsting? Many faithful followers of Jesus throughout history have never claimed their thirst was quenched, never fully satisfied. You know that moment when you quench your thirst, when you sigh with relief when your throat is at ease once again, that’s the opposite of how many of God’s children have described the life of faith. They describe wanting more, being satisfied at times, while knowing they will be thirsty again.\nWhat will be waiting for you at the rock? Will the water gush out, bursting forth, covering you from head to toe with God’s presence, drenching you in hope, cleansing you from the dust that’s caked to your feet and renewing you for a new day, a new hour, a new moment basking in the presence of God?\nWhat will be waiting for you at the rock? Will the water drip slowly, quenching your thirst for but a moment, giving you just a glimpse of God’s spirit? 
Will it be so hard to get the water from the rock, that you’ll have to bend down, get underneath that dripping water to try and catch a drop? Will it be just enough for you to know, if only for a moment, that God is really with you? Will it be just enough to satisfy you for this hour, but keep you coming back for more?\nWhat will be waiting for you at the rock? What if it seems like the water has run out, like there isn’t a drop left, the way that Mother Teresa described? What then? We follow her example. She still goes to the rock, over and over again, not to get water quench her own thirst, but to relieve the thirst of God’s other children.\nThis post was adapted from our sermon series on Interpreting Exodus. Pastor Megan preached this sermon at Butner Federal Prison complex on August 30, 2015.\nOn Father’s Day 2015, we gathered for worship at the labyrinth in front of UNC hospital, having devoted the month of June to exploring the question, “What happens after we die?” Many have watched their father’s die in this place or other similar spaces. We shared in a time of both remembrance and prayer/meditation, participating in the ancient spiritual practice of walking the labyrinth. A labyrinth is a kind of maze, laid out in a circle.\nTony graciously shared the following reflections from his experience at the labyrinth on the hot June day.\nIt’s smaller than I expected, stark and hard‐surfaced, with no landscaping for ornamentation or shade. I don’t know what to expect from it… or from myself. But that’s part of the appeal. I stand at the entrance, hesitating, trying to clear my mind. This doesn’t work very well, so I just start walking.\nAlmost immediately, the path presents itself as a linear and chronological symbol of my life’s journey. Like my physical lifetime, it has a beginning and an end, with an as‐yet undetermined amount between. This could be interesting. I like it so far… although I’m insecure about my style… and unsure about proper protocol. Is someone staring at me? Do I have to meditate? How slowly should I walk? Is it better to focus my thoughts… or to simply let them come? Will I control this thing, or allow it to control me?\nI begin to see each step as an increment of elapsed time, an irretrievable expenditure of life energy. I equate my initial discomfort to the natural immaturity of my childhood years. I gradually move beyond it, into metaphorical adulthood. This is much better.\nMost of the path is a series of gentle arcs. These are fairly easy to maneuver, like my comfortable life. But these segments are connected by intermittent sharp turns, mostly 180‐degree switchbacks. I see these as representing significant life changes or challenges, requiring more concentration and skill to negotiate. I notice that I am executing some of these turns mechanically, and some more gracefully. I begin to anticipate upcoming turns, and try to maintain good form around each one.\nI can’t see much of the path ahead, nor the end. I spend a significant amount of mental energy dealing with this uncertainty, constantly wanting to know my real‐time ratio of “distance walked” to “distance remaining”. This is a recurring distraction.\nToday is Father’s Day, and my Dad is on my mind. He recently completed his well‐walked journey, and is now watching me… even if as mere metaphor… or only as an element of my own (self‐) consciousness. I feel his presence embedded in his absence. 
I’m aware that it’s not only my turn to walk… it’s my only turn to walk.\nI think about my children, grandson, soon‐to‐arrive granddaughter, and their descendants. The familiar succession of life, death and new life seems magical, divinely‐derived, and strangely better than living forever. My role is limited, but critical. I love the part, and embrace it.\nI am acutely aware that others are journeying all around me. These are friends of mine. We meet, almost brushing, as we walk. The path seems purposefully narrow, perhaps perfectly so. I suddenly understand that it is impossible to walk this close to others without being affected by them. I affect them too… seen as small adjustments in their position or posture. As we meet, I try not to encroach too much, but making sure not to pull away. I put creative energy into maintaining the perfect degree of separation between our bodies. This feels like more art than science… each friend deserving a customized approach. This closeness seems good to me.\nThere is a much younger walker behind me, getting ever closer. I’m clearly holding her back. Maybe this means that the younger generation wants me to hurry up and get out of their way. I remind myself not to stretch the symbolism too far… as I pick up my pace.\nI now see the end of the path ahead. I have been expecting this part to be emotionally complicated, but it is not. The final section is round… large and unrestrictive… a qualitative change from the narrow linear pathway. The circle opens up to welcome me. It is easy to step into, a perfectly natural thing to do at the end of my walk. Inside the circle, I am centered… comfortable… peaceful… thankful.\n16 As Jesus passed alongside the Galilee Sea, he saw two brothers, Simon and Andrew, throwing fishing nets into the sea, for they were fishermen. 17 “Come, follow me,” he said, “and I’ll show you how to fish for people.” 18 Right away, they left their nets and followed him. 19 After going a little farther, he saw James and John, Zebedee’s sons, in their boat repairing the fishing nets. 20 At that very moment he called them. They followed him, leaving their father Zebedee in the boat with the hired workers.\nThis is a story about 4 fishermen, Simon, Andrew, James and John. It’s a normal morning at the docks. Each one of them is going about business as usual. They arrived at dawn, bundled up in the cool morning air and started work without much conversation. Simon and Andrew are working on one fishing boat and see the Teacher approaching. “Hey. There he is,” says Andrew. Jesus from Nazareth. You can’t go anywhere without hearing about him lately. What’s he doing down here?” They paddle back to shore, not wanting to miss any trouble this Jesus fellow might stir up. Simon and Andrew get the beach and Jesus comes over to talk to them. It’s like he had come there that morning just to find these two guys. Jesus didn’t say much, “Come and follow me.” Jesus invited these 2 fishermen to be his disciples, to follow after him, to walk behind him, tracing his every step.\nFurther down the beach, the same scene repeats. This time, Jesus walks directly up to James and John who are focused on repairing their fishing net. Jesus says the same thing to them and now all four fishermen walk behind their rabbi with no idea of what’s ahead of them.\nIt’s a big deal! The four normal guys, working a normal job, on a normal morning, decide to follow Jesus. Maybe you’ve wondered like I have, how is it that Simon, Andrew, James and John do it? 
How do they drop everything to follow Jesus? What were they thinking? How did they feel?\nIt’s interesting. The story doesn’t tell us. There’s nothing about how they felt. It doesn’t say they were excited, or moved, or scared, or joyful or resistant. This story about four fishermen gives us only verbs. Jesus passed alongside the Galilee Sea. He saw two brothers. He said, Come, follow. Then, Simon and Andrew left and followed. Jesus saw James and John. Jesus called them. They followed him.\nThis is a story about four fishermen who decided to follow Jesus.\nThis is also a story about fishing. I’ve been fishing three or four times. Once I realized that fishing was primarily a crack of dawn activity, I knew it wasn’t really for me. Jesus uses a kind of puzzling image about fishing. He says, “Come, follow me, and I’ll show you how to fish for people.” I don’t know about you, but I find this to be very strange. I realized this week why his image is so confusing to me. What do you imagine when someone talks about fishing? What I imagine when I hear the word “fish” or “fishing” is a fishing pole, the rod, reel, bait, tackle box, worms, that kind of fishing. So I’ve always interpreted what Jesus said this way.\nI will make you fishers of men, fishers of men, fishers of men. I will make you fishers of men, if you follow me.\nThe road Jesus invites these four fishermen to follow him on will mean casting a net of love and welcome to people that they do not anticipate. Jesus will cast his net into the sea of a broken world, filled with sinners, people who have messed up, people who are outsiders, who don’t belong. Jesus will stay in the homes of the poor, be guilty of associating with prostitutes and touching the hands of people with communicable diseases. Jesus will throw his net into the sea and invite everyone in. Jesus will eventually be arrested and executed because those in power decided his fishing net included a few too many of the wrong people. This is a story about fishing.\nThis isn’t only a story about four fishermen, or only a story about fishing. It’s also, and perhaps most importantly, a story about God.\nIf this is only a story about four fishermen who decide to follow Jesus, the pressure’s on you and me! After all, aren’t we too called to follow Jesus? Called to be his disciples? Wasn’t that the invitation you first heard when you first heard about Jesus? God has called us and we must decide. Jesus wants us all to follow him, to be like him, to walk in his footsteps, to do what he does. Of course this story is about that! And they do it, don’t they? Simon, Andrew, James, John, they do it! They decide and they do follow Jesus, imperfectly at that. Still, it’s a lot of pressure, a lot of responsibility. If life becomes all about what we do for Jesus, something is missing.\nIf this is only a story about fishing, have some of us failed? Is it too late for us? Some of us might not be the best at fishing, not all that great at casting Jesus’ loving net to our brothers and sisters. His net is sometimes, or maybe more than sometimes, a bit more expansive than we might be comfortable with. He calls us to be like him and fish for people, and yet, sometimes we can barely get the net into the water. Perhaps for others, we aren’t even convinced that Jesus would include us in the net at all, no matter how deep into the water he goes. Can he really mean me? Would his net really reach me? There’s still more to the story.\nThis is a story about God, who God is, how God acts, what God does.
Before Andrew, Simon, James and John follow Jesus, Jesus finds them. Before they follow Jesus, Jesus comes to them! They don’t have to go searching, they have been found. Jesus saw. Jesus spoke. Jesus called. Jesus said, “Come.“ We don’t follow Jesus in order to find him, to prove our worthiness with what we do, or even by showing Jesus how big our nets are. We follow Jesus because he first came to us. He came down to the beach to meet these four fishermen. He came specifically for Simon and for Andrew, for James and for John, for you and me.\nThis blog post was adapted from Pastor Megan’s sermon at Butner Federal Prison on January 25, 2015.\nThe life of faith consists of seasons. One scholar suggests that we can categorize these seasons of life as seasons of being securely oriented, painfully disoriented, and surprisingly reoriented. These generalizations could apply to our self-acceptance, our relations to significant others, and our participation in public or private life. We might think about these seasons as passages of life, stages of growth, or even identity crises. Acknowledging where we find ourselves in a particular season can allow us to be honest about where we are at in our lives and where we are in relation to God.\nThe Psalms, a collection of prayers, songs, and poems addressed to God, correspond to these seasons of orientation, disorientation, and reorientation. As we read through the book, we find Psalms where the writer is full of thanksgiving to God, securely oriented in life. We also find Psalms that demonstrate disorientation, perhaps categorized by loss, transition, grief, suffering, or even anger. Finally, some Psalms are written from a perspective of reorientation, wherein the Psalmist transitions from a period of being disoriented to being reoriented in relation to God and others.\nThe Psalms can become our partner in prayer. Giving us words when we have none, we pray the Psalms joining with all those who have prayed them before us and all who will pray them after we are gone. As we pray the Psalms, we find permission to be utterly honest with God about our feelings and situation, free to speak openly and deeply to God about what we are experiencing. Praying the Psalms also helps us to envision God’s future when we can’t see it ourselves. Lastly, the Psalms guard us against religion or merely thinking about God. Using their words in prayer brings us into direct conversation with the living God, in language we may never have imagined would come from our lips.\n-Pray the assigned Psalm from the daily lectionary, with set Scriptures to read each day. Click here to see today’s readings, subscribe to the daily readings by email, or download the app.\n-Pray the Psalms using the practice of praying in color. Click here for an excerpt from Sybil MacBeth’s book that gives instructions for praying in color. I have the book available if anyone would like to borrow it. You can read more about praying in color on her website.\n-Pray a Psalm, followed by journal writing. Consider these prompts: Where do I find myself in this Psalm? Where do I find my community? How am I being oriented to God in this prayer? What images or metaphors do I find striking? Explore the image more deeply.\n-Pray through a list of Psalms, one per day or the same one each day for a week.\n-Pray them as a family or with housemates at mealtime or bedtime.\n-Pray abbreviated Psalms as breath prayers. 
A breath prayer rhythm is simple: Breathe in slow and deep as you whisper or think on a phrase… Hold your breath… Then exhale.\nI will sing to my God as long as I am.\nPsalm 8: Lord, our master, how great is your name in all the earth.\nPsalm 104: Seek the Lord and his power; seek his face forever. Remember the wonders he has done.\n-Pray the Psalms using lectio divina. For instructions on praying lectio divina individually or in groups, click here. There are also instructions for doing lectio divina in color from Sybil MacBeth’s book.\n-Pray a Psalm from the category of life within which you find yourself—orientation, disorientation, or reorientation.\nPsalms of Orientation: These Psalms reflect a confident belief that the world is well ordered, reliable, and life-giving to the person of faith.\nPsalms of Disorientation: These Psalms reflect the brokenness of life, when it is no longer orderly but savage. Spoken out of the depths, they are still bold acts of faith.\nPsalms of New Orientation: The pit is not the end of life; there is more. New orientation Psalms reflect the surprise of new possibilities that are experienced as pure gift from God. They are full of thanks.\nCitations: The Message of the Psalms and Praying the Psalms by Walter Brueggemann and Getting Involved with God by Ellen Davis.\nJoseph’s story opens in Genesis 37 and it’s a long one. Joseph was one of 11 kids, the youngest son. In Genesis 37, the story says, “Now Jacob (Joseph’s dad) loved Joseph more than any of his other sons because he was born when Jacob was old. Jacob had made for him a long robe. When his brothers saw that their father loved him more than any of his brothers, they hated him and couldn’t even talk nicely to him.” Sibling rivalry, jealousy, family drama—maybe a little too familiar for some of us.\nJust when you want to feel bad for Joseph, show him sympathy, “Poor kid—he can’t help that he’s the favorite,” Joseph makes himself quickly unlikeable. When Joseph’s head hits the pillow at night, he has vivid dreams about the future, dreams where he rules over his brothers. In one of these dreams, he’s in the field working with his brothers. They each tie a bundle of grain together…I imagine it like a hay bale. His bale rises up, towering and floating in the air above the others, while each of his ten older brothers’ bales of hay bows down to his bale, as if he’s ruling over them like a king. What’s worse—he didn’t keep his mouth shut about his dreams. Nope. He went ahead and announced them at the dinner table. When I imagine this scene, I’m reminded of the importance of friends. He seriously needed a friend to say, “Dude, listen, you have some dreams where you’re awesome and your brothers treat you like a king. They hate you, man. Keep your dreams to yourself.” Joseph lacked such a friend, so he bragged about his dreams—that, combined with his fancy North Face jacket that Daddy bought for him only and the favoritism their dad showed him, brought his brothers to plot about how they might rid themselves of this pesky brat forever.\nJoseph’s brothers considered killing Joseph, but they settled on kidnapping him and selling him into slavery instead. That way, they wouldn’t have his death on their foreheads, without having to put up with him anymore. They took Joseph’s fancy coat and destroyed it, making it look like a wild animal killed Joseph.
This they showed to their father, so that he would assume that Joseph was dead; their dad would never suspect they had any part in his disappearance.\nMeanwhile, Joseph was taken off to Egypt where he worked as a slave. Though he did well there and followed all the rules, he became a victim for a second time, when his master’s wife accused him of a crime he didn’t commit. Over a period of 13 years, Joseph worked as a slave and spent years locked up in prison. After a series of unlikely events, some terrible and some remarkable, Joseph rose to power and became the king’s right hand man, his adviser.\nWith the king’s blessing and support, Joseph led his country in preparing for a famine, putting food away on reserve during seven years of plenty. When a famine struck the land, Egypt was in a good position, able to lean on the reserved food that Joseph had put away. The surrounding lands, including Joseph’s homeland, had to lean on Egypt for food or else they would starve.\nJoseph shows his brother’s enormous generosity. He has them go home, pack up and move their entire family, including their elderly father Jacob to Egypt to be near Joseph. Not long after making the trip to Egypt and being reunited with his father, their father, an elderly man at this point, Jacob dies.\nAnd the final chapter of the story opens. Jacob is dead. Their father is gone. Now what?\nRealizing that their father was dead, Joseph’s brothers said, “What if Joseph still bears a grudge against us and pays us back in full for all the wrong that we did to him?” 16 So they approached[b] Joseph, saying, “Your father gave this instruction before he died, 17 ‘Say to Joseph: I beg you, forgive the crime of your brothers and the wrong they did in harming you.’ Now therefore please forgive the crime of the servants of the God of your father.” Joseph wept when they spoke to him. 18 Then his brothers also wept,[c] fell down before him, and said, “We are here as your slaves.” 19 But Joseph said to them, “Do not be afraid! Am I in the place of God? 20 Even though you intended to do harm to me, God intended it for good, in order to preserve a numerous people, as he is doing today. 21 So have no fear; I myself will provide for you and your little ones.” In this way he reassured them, speaking kindly to them.\nFear is a powerful force. Fear motivates and fear paralyzes.\nIt’s a little funny how they phrase the words of their father. The brothers put a great deal of distance between themselves and Joseph. Instead of saying, “Our father told us to tell you…” they say, “Your father to us to tell you…” They distance themselves from Joseph and from the message that their dad supposedly gave them to pass along.\nAnd then, they do it again. “Please forgive the crime of the servants of the God of your father,” his brothers say. They refer to themselves in third person—“the servants of the God of your father.” It’s not “our crime” that “we committed,” but the crime of these others.\nFear not only keeps them from confession, fear also keeps them from receiving forgiveness. They are scared for their lives the moment their father breathes his last, but haven’t they already been through this conversation with Joseph? At the dinner table, when Joseph revealed his identity to them, he tells them not to worry. “It’s ok. Yeah, it was awful, but look where I am! Look at how God has used me to help save those who would be starving now. I’m even saving you!” Joseph has already offered them forgiveness, but they haven’t fully received it. 
They haven't believed what he's said. Perhaps their views of themselves were so low that they didn't see themselves worthy of forgiveness. Maybe they've carried the guilt for so long about what they've done, they fear what life will be like without it. It's become so much an engrained part of their identity, they don't know who they are apart from the guilt of what they've done. They fear receiving Joseph's forgiveness. They fear forgiving themselves.\nTo the plea of the 10 brothers, to this made-up, manipulative, last cry for safety, Joseph has two responses. First, he weeps. His weeping—his display of vulnerability and emotion—causes his brothers to begin to weep also. There they are, 11 grown brothers, weeping on the floor of the house. Why did Joseph begin to weep? The story doesn't say. Let's notice, brothers and sisters—the road to releasing fear and offering and receiving forgiveness may not come without weeping.\nFear is a powerful force. Fear is an excellent motivator—moving us to do particular things and act in particular ways. But fear not only motivates, it can also paralyze, cause us to freeze right where we're at, accept things for how they are. This final chapter begins with the brothers saying to one another, "What if…?" What if Joseph still bears a grudge against us…? Fear finishes the sentence, beginning with the words, "What if…?" Fear finishes the sentence. What if…he still bears a grudge against us? What if…we confess our evil to Joseph and he says that's the end of us? What if…we ask for forgiveness and he denies it—if I say, will you forgive me, and he says, "no"?\n"What if's" sneak into our minds and hearts.\nWhat if…I never get out of here?\nWhat if I fail as a parent?\nWhat if I don't belong?\nWhat if no one notices I'm gone?\nWhat if I stand up for what I believe is right and it costs me my reputation?\nWhat if I make a mistake at work and lose my job?\nWhat if I risk opening myself up to someone and get hurt or betrayed again?\nWhat if my body fails me?\nWhat if I can never accept that the past can't change?\nWhat if I'm not worthy of God's forgiveness or the forgiveness of those I've wronged?\nWhat if I can never forgive myself?\nWe do not have to live in fear. We do not have to be motivated or paralyzed by it. Look at the God that we serve! Joseph explains how God has been with him. He says to his brothers, "Even though you intended to do harm to me, God intended it for good, in order to preserve a numerous people, as he is doing today." Does this mean that God wanted or desired Joseph's brothers to kidnap him, throw him into slavery and ruin his life to avenge their jealousy? No. God doesn't desire that jealousy and revenge rule our lives. God doesn't will for us to do evil or to harm other people. Rather, God is able to overcome evil and transform it. God can overcome evil! When Jesus was captured, tried as a criminal and sentenced to death, God overcame death, raising Jesus from the dead.\nThis post was adapted from my sermon preached at Butner Federal Prison on September 14, 2014.\nWe were gathered at the plaza, right between the giant bull statue and the unattractive fences of a construction site. Luminary bags weighted with rice and lit candles marked the sacred space surrounding 30 of us, one to represent each person who died as a result of domestic violence the previous year in our state. The vigil began as planned, simple, but meaningful, to remember victims of this tragedy and raise awareness about the suffering that takes place behind closed doors. 
About halfway through the simple service, a woman stumbled into the vigil, interrupting the solemn mood without realizing that a group was gathered and someone was speaking. She stood silent for a few moments, listening to the speaker. When she realized that the speaker was talking about domestic violence, she began to interrupt, asking the speaker questions, sharing details from her own experience with abuse. "What would you do…what would you do if…?" she cried. Then, as unexpectedly as she joined us and as abruptly as her interruption, she began to weep, uncontrollably crying for the rest of the vigil. A couple of women gathered around her and held her as she wept. Before long, it was my turn to pray. I barely got the words out…I could hardly project my shaking voice over her loud sobs.\n"Blessed are those who mourn, for they will be comforted," Jesus proclaims in the second line of the beatitudes. Blessed are those who mourn. How is this weeping woman, this victim of abuse, blessed? She mourns the injustices she's experienced, her suffering, the ways her life has been shaped by pain and her inability to free herself from her oppression. Jesus says that this woman and all her sisters and brothers that mourn with her are blessed.\nThe Jewish culture that Jesus was born into has a rich history of mourning or practicing lament, stretching back hundreds of years before he was born. The prophets and the Psalms include poems, songs, and speeches, recounting the words of people gathered together for public mourning. This mourning wasn't a kind of crying about having a bad day or because of a frustration at home or work. The mourning Jesus is referencing is the kind of mourning that is a response to injustice and oppression—the mourning of those who grieve the impact of the powers, both material and spiritual, on the lives of the most vulnerable.\nBlessed are those who mourn. Another beatitude and another paradox. Once again, Jesus' words are outlandish and nonsensical. How is it that those who mourn are blessed? Aren't those who are happy and fulfilled, aren't they the ones that are blessed? Yet, in this beatitude, in this paradox, Jesus once again exposes the powers and envisions an alternative. Jesus exposes the powers that cause people to mourn in the first place, those who experience unjust suffering and loss, the same injustices that cause people to be poor in spirit. It's these people, the mourners, that are blessed, Jesus says. These are the people that Jesus came for. In God's empire, mourners are not written off or ignored as uncivilized, uneducated, or badly behaved. Instead, in God's empire, they are the ones who receive God's comfort and consolation; God hears their cries.\nOur culture tends to restrict mourning or public displays of emotion to something appropriate for home life or private time. Further, spending time in mourning may be quickly relegated to a waste of time or an inactive posture. The expression, "Don't just cry about it, do something," illustrates this clearly. But mourning is not a useless waste of time or an inactive practice. Mourn is a verb. In fact, mourning elicits action and engagement. Mourning exposes the powers, shows their true colors. Seeing people in mourning is disorienting. It interrupts the lives we lead that are detached from suffering and injustice, forcing us to take another look, to pause, to listen, and to join.\nThe woman who interrupted our solemn vigil for victims of domestic violence exposed the powers with her loud wailing. 
She made me feel uncomfortable, like I wanted to look away and get away from her as quickly as possible. And yet, her cries made it impossible for me to forget her. The sound of her weeping echoed in my ears for weeks following and if I try, I can still hear them now, over nine months later. Her mourning moves me to engage in seeking justice for others who have suffered like she has.\n1. James Howell, The Beatitudes for Today. Louisville: Westminster John Knox Press, 2005., 45.\n2. James Howell, The Beatitudes for Today, 46.", "answers": ["\"I have seen the Lord.\"."], "length": 6856, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "d1f31d2998513a4552d0419e16f8eba896e9ea1c8403605b"} {"input": "What is the SI unit of power?", "context": "For other uses, see Electricity (disambiguation).\n\"Electric\" redirects here. For other uses, see Electric (disambiguation).\nLightning is one of the most dramatic effects of electricity.\nElectricity is the set of physical phenomena associated with the presence and motion of matter that has a property of electric charge. In early days, electricity was considered as being not related to magnetism. Later on, many experimental results and the development of Maxwell's equations indicated that both electricity and magnetism are from a single phenomenon: electromagnetism. Various common phenomena are related to electricity, including lightning, static electricity, electric heating, electric discharges and many others.\nThe presence of an electric charge, which can be either positive or negative, produces an electric field. The movement of electric charges is an electric current and produces a magnetic field.\nWhen a charge is placed in a location with a non-zero electric field, a force will act on it. The magnitude of this force is given by Coulomb's law. Thus, if that charge were to move, the electric field would be doing work on the electric charge. Thus we can speak of electric potential at a certain point in space, which is equal to the work done by an external agent in carrying a unit of positive charge from an arbitrarily chosen reference point to that point without any acceleration and is typically measured in volts.\nelectronics which deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes and integrated circuits, and associated passive interconnection technologies.\nElectrical phenomena have been studied since antiquity, though progress in theoretical understanding remained slow until the seventeenth and eighteenth centuries. Even then, practical applications for electricity were few, and it would not be until the late nineteenth century that electrical engineers were able to put it to industrial and residential use. The rapid expansion in electrical technology at this time transformed industry and society, becoming a driving force for the Second Industrial Revolution. Electricity's extraordinary versatility means it can be put to an almost limitless set of applications which include transport, heating, lighting, communications, and computation. Electrical power is now the backbone of modern industrial society.\nLong before any knowledge of electricity existed, people were aware of shocks from electric fish. Ancient Egyptian texts dating from 2750 BCE referred to these fish as the \"Thunderer of the Nile\", and described them as the \"protectors\" of all other fish. 
Electric fish were again reported millennia later by ancient Greek, Roman and Arabic naturalists and physicians. Several ancient writers, such as Pliny the Elder and Scribonius Largus, attested to the numbing effect of electric shocks delivered by catfish and electric rays, and knew that such shocks could travel along conducting objects. Patients suffering from ailments such as gout or headache were directed to touch electric fish in the hope that the powerful jolt might cure them. Possibly the earliest and nearest approach to the discovery of the identity of lightning, and electricity from any other source, is to be attributed to the Arabs, who before the 15th century had the Arabic word for lightning ra‘ad (رعد) applied to the electric ray.\nAncient cultures around the Mediterranean knew that certain objects, such as rods of amber, could be rubbed with cat's fur to attract light objects like feathers. Thales of Miletus made a series of observations on static electricity around 600 BCE, from which he believed that friction rendered amber magnetic, in contrast to minerals such as magnetite, which needed no rubbing. Thales was incorrect in believing the attraction was due to a magnetic effect, but later science would prove a link between magnetism and electricity. According to a controversial theory, the Parthians may have had knowledge of electroplating, based on the 1936 discovery of the Baghdad Battery, which resembles a galvanic cell, though it is uncertain whether the artifact was electrical in nature.\nBenjamin Franklin conducted extensive research on electricity in the 18th century, as documented by Joseph Priestley (1767) History and Present Status of Electricity, with whom Franklin carried on extended correspondence.\nElectricity would remain little more than an intellectual curiosity for millennia until 1600, when the English scientist William Gilbert wrote De Magnete, in which he made a careful study of electricity and magnetism, distinguishing the lodestone effect from static electricity produced by rubbing amber. He coined the New Latin word electricus (\"of amber\" or \"like amber\", from ἤλεκτρον, elektron, the Greek word for \"amber\") to refer to the property of attracting small objects after being rubbed. This association gave rise to the English words \"electric\" and \"electricity\", which made their first appearance in print in Thomas Browne's Pseudodoxia Epidemica of 1646.\nFurther work was conducted in the 17th and early 18th centuries by Otto von Guericke, Robert Boyle, Stephen Gray and C. F. du Fay. Later in the 18th century, Benjamin Franklin conducted extensive research in electricity, selling his possessions to fund his work. In June 1752 he is reputed to have attached a metal key to the bottom of a dampened kite string and flown the kite in a storm-threatened sky. A succession of sparks jumping from the key to the back of his hand showed that lightning was indeed electrical in nature. He also explained the apparently paradoxical behavior of the Leyden jar as a device for storing large amounts of electrical charge in terms of electricity consisting of both positive and negative charges.\nIn 1791, Luigi Galvani published his discovery of bioelectromagnetics, demonstrating that electricity was the medium by which neurons passed signals to the muscles. 
Alessandro Volta's battery, or voltaic pile, of 1800, made from alternating layers of zinc and copper, provided scientists with a more reliable source of electrical energy than the electrostatic machines previously used. The recognition of electromagnetism, the unity of electric and magnetic phenomena, is due to Hans Christian Ørsted and André-Marie Ampère in 1819–1820. Michael Faraday invented the electric motor in 1821, and Georg Ohm mathematically analysed the electrical circuit in 1827. Electricity and magnetism (and light) were definitively linked by James Clerk Maxwell, in particular in his \"On Physical Lines of Force\" in 1861 and 1862.\nWhile the early 19th century had seen rapid progress in electrical science, the late 19th century would see the greatest progress in electrical engineering. Through such people as Alexander Graham Bell, Ottó Bláthy, Thomas Edison, Galileo Ferraris, Oliver Heaviside, Ányos Jedlik, William Thomson, 1st Baron Kelvin, Charles Algernon Parsons, Werner von Siemens, Joseph Swan, Reginald Fessenden, Nikola Tesla and George Westinghouse, electricity turned from a scientific curiosity into an essential tool for modern life.\nIn 1887, Heinrich Hertz:843–44 discovered that electrodes illuminated with ultraviolet light create electric sparks more easily. In 1905, Albert Einstein published a paper that explained experimental data from the photoelectric effect as being the result of light energy being carried in discrete quantized packets, energising electrons. This discovery led to the quantum revolution. Einstein was awarded the Nobel Prize in Physics in 1921 for \"his discovery of the law of the photoelectric effect\". The photoelectric effect is also employed in photocells such as can be found in solar panels and this is frequently used to make electricity commercially.\nThe first solid-state device was the \"cat's-whisker detector\" first used in the 1900s in radio receivers. A whisker-like wire is placed lightly in contact with a solid crystal (such as a germanium crystal) to detect a radio signal by the contact junction effect. In a solid-state component, the current is confined to solid elements and compounds engineered specifically to switch and amplify it. Current flow can be understood in two forms: as negatively charged electrons, and as positively charged electron deficiencies called holes. These charges and holes are understood in terms of quantum physics. The building material is most often a crystalline semiconductor.\nThe solid-state device came into its own with the invention of the transistor in 1947. Common solid-state devices include transistors, microprocessor chips, and RAM. A specialized type of RAM called flash RAM is used in USB flash drives and more recently, solid-state drives to replace mechanically rotating magnetic disc hard disk drives. Solid state devices became prevalent in the 1950s and the 1960s, during the transition from vacuum tubes to semiconductor diodes, transistors, integrated circuit (IC) and the light-emitting diode (LED).\nThe presence of charge gives rise to an electrostatic force: charges exert a force on each other, an effect that was known, though not understood, in antiquity.:457 A lightweight ball suspended from a string can be charged by touching it with a glass rod that has itself been charged by rubbing with a cloth. If a similar ball is charged by the same glass rod, it is found to repel the first: the charge acts to force the two balls apart. 
Two balls that are charged with a rubbed amber rod also repel each other. However, if one ball is charged by the glass rod, and the other by an amber rod, the two balls are found to attract each other. These phenomena were investigated in the late eighteenth century by Charles-Augustin de Coulomb, who deduced that charge manifests itself in two opposing forms. This discovery led to the well-known axiom: like-charged objects repel and opposite-charged objects attract.\nThe force acts on the charged particles themselves, hence charge has a tendency to spread itself as evenly as possible over a conducting surface. The magnitude of the electromagnetic force, whether attractive or repulsive, is given by Coulomb's law, which relates the force to the product of the charges and has an inverse-square relation to the distance between them. The electromagnetic force is very strong, second only in strength to the strong interaction, but unlike that force it operates over all distances. In comparison with the much weaker gravitational force, the electromagnetic force pushing two electrons apart is 10⁴² times that of the gravitational attraction pulling them together.\nStudy has shown that the origin of charge is from certain types of subatomic particles which have the property of electric charge. Electric charge gives rise to and interacts with the electromagnetic force, one of the four fundamental forces of nature. The most familiar carriers of electrical charge are the electron and proton. Experiment has shown charge to be a conserved quantity, that is, the net charge within an electrically isolated system will always remain constant regardless of any changes taking place within that system. Within the system, charge may be transferred between bodies, either by direct contact, or by passing along a conducting material, such as a wire. The informal term static electricity refers to the net presence (or 'imbalance') of charge on a body, usually caused when dissimilar materials are rubbed together, transferring charge from one to the other.\nThe charge on electrons and protons is opposite in sign, hence an amount of charge may be expressed as being either negative or positive. By convention, the charge carried by electrons is deemed negative, and that by protons positive, a custom that originated with the work of Benjamin Franklin. The amount of charge is usually given the symbol Q and expressed in coulombs; each electron carries the same charge of approximately −1.6022×10⁻¹⁹ coulomb. The proton has a charge that is equal and opposite, and thus +1.6022×10⁻¹⁹ coulomb. Charge is possessed not just by matter, but also by antimatter, each antiparticle bearing an equal and opposite charge to its corresponding particle.\nThe movement of electric charge is known as an electric current, the intensity of which is usually measured in amperes. Current can consist of any moving charged particles; most commonly these are electrons, but any charge in motion constitutes a current. Electric current can flow through some things, electrical conductors, but will not flow through an electrical insulator.\nBy historical convention, a positive current is defined as having the same direction of flow as any positive charge it contains, or to flow from the most positive part of a circuit to the most negative part. Current defined in this manner is called conventional current. 
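For reference, the standard SI statement of the Coulomb's law relationship described above is

F = (1/(4πε₀)) · |q₁q₂| / r² ≈ (8.99 × 10⁹ N·m²·C⁻²) · |q₁q₂| / r²,

where q₁ and q₂ are the two charges in coulombs, r is their separation in metres and ε₀ is the permittivity of free space; doubling the separation therefore reduces the force by a factor of four, the inverse-square behaviour referred to above.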
The motion of negatively charged electrons around an electric circuit, one of the most familiar forms of current, is thus deemed positive in the opposite direction to that of the electrons. However, depending on the conditions, an electric current can consist of a flow of charged particles in either direction, or even in both directions at once. The positive-to-negative convention is widely used to simplify this situation.\nThe process by which electric current passes through a material is termed electrical conduction, and its nature varies with that of the charged particles and the material through which they are travelling. Examples of electric currents include metallic conduction, where electrons flow through a conductor such as metal, and electrolysis, where ions (charged atoms) flow through liquids, or through plasmas such as electrical sparks. While the particles themselves can move quite slowly, sometimes with an average drift velocity only fractions of a millimetre per second,:17 the electric field that drives them itself propagates at close to the speed of light, enabling electrical signals to pass rapidly along wires.\nCurrent causes several observable effects, which historically were the means of recognising its presence. That water could be decomposed by the current from a voltaic pile was discovered by Nicholson and Carlisle in 1800, a process now known as electrolysis. Their work was greatly expanded upon by Michael Faraday in 1833. Current through a resistance causes localised heating, an effect James Prescott Joule studied mathematically in 1840.:23–24 One of the most important discoveries relating to current was made accidentally by Hans Christian Ørsted in 1820, when, while preparing a lecture, he witnessed the current in a wire disturbing the needle of a magnetic compass. He had discovered electromagnetism, a fundamental interaction between electricity and magnetics. The level of electromagnetic emissions generated by electric arcing is high enough to produce electromagnetic interference, which can be detrimental to the workings of adjacent equipment.\nIn engineering or household applications, current is often described as being either direct current (DC) or alternating current (AC). These terms refer to how the current varies in time. Direct current, as produced by example from a battery and required by most electronic devices, is a unidirectional flow from the positive part of a circuit to the negative.:11 If, as is most common, this flow is carried by electrons, they will be travelling in the opposite direction. Alternating current is any current that reverses direction repeatedly; almost always this takes the form of a sine wave.:206–07 Alternating current thus pulses back and forth within a conductor without the charge moving any net distance over time. The time-averaged value of an alternating current is zero, but it delivers energy in first one direction, and then the reverse. Alternating current is affected by electrical properties that are not observed under steady state direct current, such as inductance and capacitance.:223–25 These properties however can become important when circuitry is subjected to transients, such as when first energised.\nThe concept of the electric field was introduced by Michael Faraday. An electric field is created by a charged body in the space that surrounds it, and results in a force exerted on any other charges placed within the field. 
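To make the field concept introduced above quantitative, the electric field strength E at a point is conventionally defined as the force F experienced by a small positive test charge q placed there, divided by that charge:

E = F / q,

with units of newtons per coulomb, equivalent to volts per metre; a field of 1 N/C exerts a force of 1 N on a charge of 1 C.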
The electric field acts between two charges in a similar manner to the way that the gravitational field acts between two masses, and like it, extends towards infinity and shows an inverse square relationship with distance. However, there is an important difference. Gravity always acts in attraction, drawing two masses together, while the electric field can result in either attraction or repulsion. Since large bodies such as planets generally carry no net charge, the electric field at a distance is usually zero. Thus gravity is the dominant force at distance in the universe, despite being much weaker.\nA hollow conducting body carries all its charge on its outer surface. The field is therefore zero at all places inside the body.:88 This is the operating principal of the Faraday cage, a conducting metal shell which isolates its interior from outside electrical effects.\nThe principles of electrostatics are important when designing items of high-voltage equipment. There is a finite limit to the electric field strength that may be withstood by any medium. Beyond this point, electrical breakdown occurs and an electric arc causes flashover between the charged parts. Air, for example, tends to arc across small gaps at electric field strengths which exceed 30 kV per centimetre. Over larger gaps, its breakdown strength is weaker, perhaps 1 kV per centimetre. The most visible natural occurrence of this is lightning, caused when charge becomes separated in the clouds by rising columns of air, and raises the electric field in the air to greater than it can withstand. The voltage of a large lightning cloud may be as high as 100 MV and have discharge energies as great as 250 kWh.\nA pair of AA cells. The + sign indicates the polarity of the potential difference between the battery terminals.\nThe concept of electric potential is closely linked to that of the electric field. A small charge placed within an electric field experiences a force, and to have brought that charge to that point against the force requires work. The electric potential at any point is defined as the energy required to bring a unit test charge from an infinite distance slowly to that point. It is usually measured in volts, and one volt is the potential for which one joule of work must be expended to bring a charge of one coulomb from infinity.:494–98 This definition of potential, while formal, has little practical application, and a more useful concept is that of electric potential difference, and is the energy required to move a unit charge between two specified points. An electric field has the special property that it is conservative, which means that the path taken by the test charge is irrelevant: all paths between two specified points expend the same energy, and thus a unique value for potential difference may be stated.:494–98 The volt is so strongly identified as the unit of choice for measurement and description of electric potential difference that the term voltage sees greater everyday usage.\nFor practical purposes, it is useful to define a common reference point to which potentials may be expressed and compared. While this could be at infinity, a much more useful reference is the Earth itself, which is assumed to be at the same potential everywhere. This reference point naturally takes the name earth or ground. 
Earth is assumed to be an infinite source of equal amounts of positive and negative charge, and is therefore electrically uncharged—and unchargeable.\nElectric potential is a scalar quantity, that is, it has only magnitude and not direction. It may be viewed as analogous to height: just as a released object will fall through a difference in heights caused by a gravitational field, so a charge will 'fall' across the voltage caused by an electric field. As relief maps show contour lines marking points of equal height, a set of lines marking points of equal potential (known as equipotentials) may be drawn around an electrostatically charged object. The equipotentials cross all lines of force at right angles. They must also lie parallel to a conductor's surface, otherwise this would produce a force that will move the charge carriers to even the potential of the surface.\nØrsted's discovery in 1821 that a magnetic field existed around all sides of a wire carrying an electric current indicated that there was a direct relationship between electricity and magnetism. Moreover, the interaction seemed different from gravitational and electrostatic forces, the two forces of nature then known. The force on the compass needle did not direct it to or away from the current-carrying wire, but acted at right angles to it. Ørsted's slightly obscure words were that \"the electric conflict acts in a revolving manner.\" The force also depended on the direction of the current, for if the flow was reversed, then the force did too.\nØrsted did not fully understand his discovery, but he observed the effect was reciprocal: a current exerts a force on a magnet, and a magnetic field exerts a force on a current. The phenomenon was further investigated by Ampère, who discovered that two parallel current-carrying wires exerted a force upon each other: two wires conducting currents in the same direction are attracted to each other, while wires containing currents in opposite directions are forced apart. The interaction is mediated by the magnetic field each current produces and forms the basis for the international definition of the ampere.\nThis relationship between magnetic fields and currents is extremely important, for it led to Michael Faraday's invention of the electric motor in 1821. Faraday's homopolar motor consisted of a permanent magnet sitting in a pool of mercury. A current was allowed through a wire suspended from a pivot above the magnet and dipped into the mercury. The magnet exerted a tangential force on the wire, making it circle around the magnet for as long as the current was maintained.\nExperimentation by Faraday in 1831 revealed that a wire moving perpendicular to a magnetic field developed a potential difference between its ends. Further analysis of this process, known as electromagnetic induction, enabled him to state the principle, now known as Faraday's law of induction, that the potential difference induced in a closed circuit is proportional to the rate of change of magnetic flux through the loop. Exploitation of this discovery enabled him to invent the first electrical generator in 1831, in which he converted the mechanical energy of a rotating copper disc to electrical energy. 
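The proportionality stated above between the induced potential difference and the changing magnetic flux is conventionally written as

ℰ = −dΦ/dt,

where ℰ is the electromotive force in volts and Φ is the magnetic flux through the circuit in webers; the negative sign expresses Lenz's law, the tendency of the induced current to oppose the change in flux that produces it.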
Faraday's disc was inefficient and of no use as a practical generator, but it showed the possibility of generating electric power using magnetism, a possibility that would be taken up by those that followed on from his work.\nItalian physicist Alessandro Volta showing his \"battery\" to French emperor Napoleon Bonaparte in the early 19th century.\nThe ability of chemical reactions to produce electricity, and conversely the ability of electricity to drive chemical reactions has a wide array of uses.\nElectrochemistry has always been an important part of electricity. From the initial invention of the Voltaic pile, electrochemical cells have evolved into the many different types of batteries, electroplating and electrolysis cells. Aluminium is produced in vast quantities this way, and many portable devices are electrically powered using rechargeable cells.\nA basic electric circuit. The voltage source V on the left drives a current I around the circuit, delivering electrical energy into the resistor R. From the resistor, the current returns to the source, completing the circuit.\nAn electric circuit is an interconnection of electric components such that electric charge is made to flow along a closed path (a circuit), usually to perform some useful task.\nElectric power is the rate at which electric energy is transferred by an electric circuit. The SI unit of power is the watt, one joule per second.\nElectricity generation is often done with electric generators, but can also be supplied by chemical sources such as electric batteries or by other means from a wide variety of sources of energy. Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ) which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency.\nElectronics deals with electrical circuits that involve active electrical components such as vacuum tubes, transistors, diodes, optoelectronics, sensors and integrated circuits, and associated passive interconnection technologies. The nonlinear behaviour of active components and their ability to control electron flows makes amplification of weak signals possible and electronics is widely used in information processing, telecommunications, and signal processing. The ability of electronic devices to act as switches makes digital information processing possible. Interconnection technologies such as circuit boards, electronics packaging technology, and other varied forms of communication infrastructure complete circuit functionality and transform the mixed components into a regular working system.\nToday, most electronic devices use semiconductor components to perform electron control. 
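Returning to the power and energy units quoted above: for a steady current, the power P delivered to the resistor in the circuit described is the product of the voltage V across it and the current I through it,

P = V · I,

with P in watts when V is in volts and I in amperes, and the energy delivered over a time t is E = P·t. One kilowatt hour is therefore 1,000 W × 3,600 s = 3.6 × 10⁶ J, the 3.6 MJ figure given above; a 2 kW heater run for three hours, for example, consumes 6 kWh.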
The study of semiconductor devices and related technology is considered a branch of solid state physics, whereas the design and construction of electronic circuits to solve practical problems come under electronics engineering.\nThus, the work of many researchers enabled the use of electronics to convert signals into high frequency oscillating currents, and via suitably shaped conductors, electricity permits the transmission and reception of these signals via radio waves over very long distances.\nEarly 20th-century alternator made in Budapest, Hungary, in the power generating hall of a hydroelectric station (photograph by Prokudin-Gorsky, 1905–1915).\nIn the 6th century BC, the Greek philosopher Thales of Miletus experimented with amber rods and these experiments were the first studies into the production of electrical energy. While this method, now known as the triboelectric effect, can lift light objects and generate sparks, it is extremely inefficient. It was not until the invention of the voltaic pile in the eighteenth century that a viable source of electricity became available. The voltaic pile, and its modern descendant, the electrical battery, store energy chemically and make it available on demand in the form of electrical energy. The battery is a versatile and very common power source which is ideally suited to many applications, but its energy storage is finite, and once discharged it must be disposed of or recharged. For large electrical demands electrical energy must be generated and transmitted continuously over conductive transmission lines.\nElectrical power is usually generated by electro-mechanical generators driven by steam produced from fossil fuel combustion, or the heat released from nuclear reactions; or from other sources such as kinetic energy extracted from wind or flowing water. The modern steam turbine invented by Sir Charles Parsons in 1884 today generates about 80 percent of the electric power in the world using a variety of heat sources. Such generators bear no resemblance to Faraday's homopolar disc generator of 1831, but they still rely on his electromagnetic principle that a conductor linking a changing magnetic field induces a potential difference across its ends. The invention in the late nineteenth century of the transformer meant that electrical power could be transmitted more efficiently at a higher voltage but lower current. Efficient electrical transmission meant in turn that electricity could be generated at centralised power stations, where it benefited from economies of scale, and then be despatched relatively long distances to where it was needed.\nSince electrical energy cannot easily be stored in quantities large enough to meet demands on a national scale, at all times exactly as much must be produced as is required. This requires electricity utilities to make careful predictions of their electrical loads, and maintain constant co-ordination with their power stations. A certain amount of generation must always be held in reserve to cushion an electrical grid against inevitable disturbances and losses.\nElectricity is a very convenient way to transfer energy, and it has been adapted to a huge, and growing, number of uses. The invention of a practical incandescent light bulb in the 1870s led to lighting becoming one of the first publicly available applications of electrical power. Although electrification brought with it its own dangers, replacing the naked flames of gas lighting greatly reduced fire hazards within homes and factories. 
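Returning to the earlier point about transmission at higher voltage but lower current: the heat dissipated in a transmission line of resistance R carrying current I is

P_loss = I²R,

so delivering the same power P = V·I at ten times the voltage requires only a tenth of the current and cuts the line loss by a factor of one hundred, which is why long-distance transmission uses voltages of tens or hundreds of kilovolts that are stepped down by transformers near the point of use.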
Public utilities were set up in many cities targeting the burgeoning market for electrical lighting. In the late 20th century and in modern times, the trend has started to flow in the direction of deregulation in the electrical power sector.\nThe resistive Joule heating effect employed in filament light bulbs also sees more direct use in electric heating. While this is versatile and controllable, it can be seen as wasteful, since most electrical generation has already required the production of heat at a power station. A number of countries, such as Denmark, have issued legislation restricting or banning the use of resistive electric heating in new buildings. Electricity is however still a highly practical energy source for heating and refrigeration, with air conditioning/heat pumps representing a growing sector for electricity demand for heating and cooling, the effects of which electricity utilities are increasingly obliged to accommodate.\nElectricity is used within telecommunications, and indeed the electrical telegraph, demonstrated commercially in 1837 by Cooke and Wheatstone, was one of its earliest applications. With the construction of first intercontinental, and then transatlantic, telegraph systems in the 1860s, electricity had enabled communications in minutes across the globe. Optical fibre and satellite communication have taken a share of the market for communications systems, but electricity can be expected to remain an essential part of the process.\nThe effects of electromagnetism are most visibly employed in the electric motor, which provides a clean and efficient means of motive power. A stationary motor such as a winch is easily provided with a supply of power, but a motor that moves with its application, such as an electric vehicle, is obliged to either carry along a power source such as a battery, or to collect current from a sliding contact such as a pantograph. Electrically powered vehicles are used in public transportation, such as electric buses and trains, and an increasing number of battery-powered electric cars in private ownership.\nElectronic devices make use of the transistor, perhaps one of the most important inventions of the twentieth century, and a fundamental building block of all modern circuitry. A modern integrated circuit may contain several billion miniaturised transistors in a region only a few centimetres square.\nA voltage applied to a human body causes an electric current through the tissues, and although the relationship is non-linear, the greater the voltage, the greater the current. The threshold for perception varies with the supply frequency and with the path of the current, but is about 0.1 mA to 1 mA for mains-frequency electricity, though a current as low as a microamp can be detected as an electrovibration effect under certain conditions. If the current is sufficiently high, it will cause muscle contraction, fibrillation of the heart, and tissue burns. The lack of any visible sign that a conductor is electrified makes electricity a particular hazard. The pain caused by an electric shock can be intense, leading electricity at times to be employed as a method of torture. Death caused by an electric shock is referred to as electrocution. Electrocution is still the means of judicial execution in some jurisdictions, though its use has become rarer in recent times.\nElectricity is not a human invention, and may be observed in several forms in nature, a prominent manifestation of which is lightning. 
Many interactions familiar at the macroscopic level, such as touch, friction or chemical bonding, are due to interactions between electric fields on the atomic scale. The Earth's magnetic field is thought to arise from a natural dynamo of circulating currents in the planet's core. Certain crystals, such as quartz, or even sugar, generate a potential difference across their faces when subjected to external pressure. This phenomenon is known as piezoelectricity, from the Greek piezein (πιέζειν), meaning to press, and was discovered in 1880 by Pierre and Jacques Curie. The effect is reciprocal, and when a piezoelectric material is subjected to an electric field, a small change in physical dimensions takes place.\n§Bioelectrogenesis in microbial life is a prominent phenomenon in soils and sediment ecology resulting from anaerobic respiration. The microbial fuel cell mimics this ubiquitous natural phenomenon.\nSome organisms, such as sharks, are able to detect and respond to changes in electric fields, an ability known as electroreception, while others, termed electrogenic, are able to generate voltages themselves to serve as a predatory or defensive weapon. The order Gymnotiformes, of which the best known example is the electric eel, detect or stun their prey via high voltages generated from modified muscle cells called electrocytes. All animals transmit information along their cell membranes with voltage pulses called action potentials, whose functions include communication by the nervous system between neurons and muscles. An electric shock stimulates this system, and causes muscles to contract. Action potentials are also responsible for coordinating activities in certain plants.\nIn the 19th and early 20th century, electricity was not part of the everyday life of many people, even in the industrialised Western world. The popular culture of the time accordingly often depicted it as a mysterious, quasi-magical force that can slay the living, revive the dead or otherwise bend the laws of nature. This attitude began with the 1771 experiments of Luigi Galvani in which the legs of dead frogs were shown to twitch on application of animal electricity. \"Revitalization\" or resuscitation of apparently dead or drowned persons was reported in the medical literature shortly after Galvani's work. These results were known to Mary Shelley when she authored Frankenstein (1819), although she does not name the method of revitalization of the monster. The revitalization of monsters with electricity later became a stock theme in horror films.\nAs the public familiarity with electricity as the lifeblood of the Second Industrial Revolution grew, its wielders were more often cast in a positive light, such as the workers who \"finger death at their gloves' end as they piece and repiece the living wires\" in Rudyard Kipling's 1907 poem Sons of Martha. Electrically powered vehicles of every sort featured large in adventure stories such as those of Jules Verne and the Tom Swift books. The masters of electricity, whether fictional or real—including scientists such as Thomas Edison, Charles Steinmetz or Nikola Tesla—were popularly conceived of as having wizard-like powers.\nWith electricity ceasing to be a novelty and becoming a necessity of everyday life in the later half of the 20th century, it required particular attention by popular culture only when it stops flowing, an event that usually signals disaster. 
The people who keep it flowing, such as the nameless hero of Jimmy Webb’s song \"Wichita Lineman\" (1968), are still often cast as heroic, wizard-like figures.\nAmpère's circuital law, connects the direction of an electric current and its associated magnetic currents.\n^ Diogenes Laertius. R.D. Hicks (ed.). \"Lives of Eminent Philosophers, Book 1 Chapter 1 \". Perseus Digital Library. Tufts University. Retrieved 5 February 2017. Aristotle and Hippias affirm that, arguing from the magnet and from amber, he attributed a soul or life even to inanimate objects.\n^ Aristotle. Daniel C. Stevenson (ed.). \"De Animus (On the Soul) Book 1 Part 2 (B4 verso)\". The Internet Classics Archive. Translated by J.A. Smith. Retrieved 5 February 2017. Thales, too, to judge from what is recorded about him, seems to have held soul to be a motive force, since he said that the magnet has a soul in it because it moves the iron.\n^ a b c Guarnieri, M. (2014). \"Electricity in the age of Enlightenment\". IEEE Industrial Electronics Magazine. 8 (3): 60–63. doi:10.1109/MIE.2014.2335431.\n^ Srodes, James (2002), Franklin: The Essential Founding Father, Regnery Publishing, pp. 92–94, ISBN 0-89526-163-4 It is uncertain if Franklin personally carried out this experiment, but it is popularly attributed to him.\n^ a b Guarnieri, M. (2014). \"The Big Jump from the Legs of a Frog\". IEEE Industrial Electronics Magazine. 8 (4): 59–61, 69. doi:10.1109/MIE.2014.2361237.\n^ Hertz, Heinrich (1887). \"Ueber den Einfluss des ultravioletten Lichtes auf die electrische Entladung\". Annalen der Physik. 267 (8): S. 983–1000. Bibcode:1887AnP...267..983H. doi:10.1002/andp.18872670827.\n^ \"The Nobel Prize in Physics 1921\". Nobel Foundation. Retrieved 2013-03-16.\n^ John Sydney Blakemore, Solid state physics, pp. 1–3, Cambridge University Press, 1985 ISBN 0-521-31391-0.\n^ Richard C. Jaeger, Travis N. Blalock, Microelectronic circuit design, pp. 46–47, McGraw-Hill Professional, 2003 ISBN 0-07-250503-6.\n^ \"The repulsive force between two small spheres charged with the same type of electricity is inversely proportional to the square of the distance between the centres of the two spheres.\" Charles-Augustin de Coulomb, Histoire de l'Academie Royal des Sciences, Paris 1785.\n^ Sewell, Tyson (1902), The Elements of Electrical Engineering, Lockwood, p. 18 . The Q originally stood for 'quantity of electricity', the term 'electricity' now more commonly expressed as 'charge'.\n^ a b Berkson, William (1974), Fields of Force: The Development of a World View from Faraday to Einstein, Routledge, p. 370, ISBN 0-7100-7626-6 Accounts differ as to whether this was before, during, or after a lecture.\n^ \"Lab Note #105 EMI Reduction – Unsuppressed vs. Suppressed\". Arc Suppression Technologies. April 2011. Retrieved March 7, 2012.\n^ Almost all electric fields vary in space. An exception is the electric field surrounding a planar conductor of infinite extent, the field of which is uniform.\n^ Paul J. Nahin (9 October 2002). Oliver Heaviside: The Life, Work, and Times of an Electrical Genius of the Victorian Age. JHU Press. ISBN 978-0-8018-6909-9.\n^ \"The Bumpy Road to Energy Deregulation\". EnPowered. 2016-03-28.\n^ a b c d e f g h Van Riper, op.cit., p. 
71.\nLook up electricity in Wiktionary, the free dictionary.\nBasic Concepts of Electricity chapter from Lessons In Electric Circuits Vol 1 DC book and series.", "answers": ["Watt, one joule per second."], "length": 6197, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "4c7891d780eb3f45e8c4e5bf14fd9ed6c0bf898fb159b329"} {"input": "Can individual molecules of indeno[1,2-a]fluorene switch between open-shell and closed-shell states?", "context": "Paper Info\n\nTitle: Bistability between π-diradical open-shell and closed-shell states in indeno[1,2-a]fluorene\nPublish Date: Unkown\nAuthor List: Shantanu Mishra (from IBM Research Europe -Zurich), Manuel Vilas-Varela (from Department of Organic Chemistry, Center for Research in Biological Chemistry and Molecular Materials (CiQUS), University of Santiago de Compostela), Leonard-Alexander Lieske (from IBM Research Europe -Zurich), Ricardo Ortiz (from Donostia International Physics Center (DIPC)), Igor Rončević (from Department of Chemistry, University of Oxford), Florian Albrecht (from IBM Research Europe -Zurich), Diego Peña (from Department of Organic Chemistry, Center for Research in Biological Chemistry and Molecular Materials (CiQUS), University of Santiago de Compostela), Leo Gross (from IBM Research Europe -Zurich)\n\nFigure\n\nFig. 1 | Non-benzenoid non-alternant polycyclic conjugated hydrocarbons.a, Classical nonbenzenoid non-alternant polycyclic conjugated hydrocarbons: pentalene, azulene and heptalene.b, Generation of indacenes and indenoindenes through benzinterposition and benzannelation of pentalene, respectively.Gray filled rings represent Clar sextets.c, Closed-shell Kekulé (left) and openshell non-Kekulé (right) resonance structures of QDMs.Note that meta-QDM is a non-Kekulé molecule.All indenofluorene isomers, being derived through benzannelation of indacenes, contain a central QDM moiety.d, Closed-shell Kekulé (top) and open-shell non-Kekulé (bottom) resonance structures of indenofluorenes.Compared to their closed-shell structures, 1 and 5 gain two Clar sextets in the openshell structure, while 2-4 gain only one Clar sextet in the open-shell structure.Colored bonds in d highlight the ortho-and para-QDM moieties in the two closed-shell Kekulé structures of 5. e, Scheme of on-surface generation of 5 by voltage pulse-induced dehydrogenation of 6 (C20H14).Structures 7 and 8 represent the two monoradical species (C20H13).\nFig. 
2 | Characterization of open-shell indeno[1,2-a]fluorene on bilayer NaCl/Au(111).a, DFTcalculated wave functions of the frontier orbitals of 5OS in the triplet configuration for the spin up (occupied) level (isovalue: 0.002 e -Å -3 ).Blue and red colors represent opposite phases of the wave function.b, Corresponding DFT-calculated spin density of 5OS (isovalue: 0.01 e -Å -3).Blue and orange colors represent spin up and spin down densities, respectively.c, Probability density of the SOMOs of 5OS (isovalue: 0.001 e -Å -3 ).d, DFT-calculated bond lengths of 5OS.e, Constant-height I(V) spectra acquired on a species of 5 assigned as 5OS, along with the corresponding dI/dV(V) spectra.Open feedback parameters: V = -2 V, I = 0.17 pA (negative bias side) and V = 2 V, I = 0.17 pA (positive bias side).Acquisition position of the spectra is shown in Supplementary Fig.7.f, Scheme of many-body transitions associated to the measured ionic resonances of 5OS.Also shown are STM images of assigned 5OS at biases where the corresponding transitions become accessible.Scanning parameters: I = 0.3 pA (V = -1.2V and -1.5 V) and 0.2 pA (V = 1.3 V and 1.6 V). g, Laplace-filtered AFM image of assigned 5OS.STM set point: V = 0.2 V, I = 0.5 pA on bilayer NaCl, Δz = -0.3Å.The tip-height offset Δz for each panel is provided with respect to the STM setpoint, and positive (negative) values of Δz denote tip approach (retraction) from the STM setpoint.f and g show the same molecule at the same adsorption site, which is next to a trilayer NaCl island.The bright and dark features in the trilayer NaCl island in g correspond to Cl -and Na + ions, respectively.Scale bars: 10 Å (f) and 5 Å (g).\nFig. 3 | Characterization of closed-shell indeno[1,2-a]fluorene on bilayer NaCl/Au(111).a, DFTcalculated wave functions of the frontier orbitals of closed-shell 5 0 (isovalue: 0.002 e -Å -3 ).The wave functions shown here are calculated for the 5para geometry.b, DFT-calculated bond lengths of 5ortho (top) and 5para (bottom).c, Constant-height I(V) spectra acquired on a species of 5 assigned as 5para, along with the corresponding dI/dV(V) spectra.Open feedback parameters: V = -2 V, I = 0.15 pA (negative bias side) and V = 2.2 V, I = 0.15 pA (positive bias side).Acquisition position of the spectra is shown in Supplementary Fig. 7. d, Scheme of many-body transitions associated to the measured ionic resonances of 5para.Also shown are STM images of assigned 5para at biases where the corresponding transitions become accessible.Scanning parameters: I = 0.15 pA (V = -1.5 V) and 0.2 pA (V = 1.7 V). e, Laplace-filtered AFM image of assigned 5para.STM set point: V = 0.2 V, I = 0.5 pA on bilayer NaCl, Δz = -0.7 Å. f, Selected bonds labeled for highlighting bond order differences between 5para and 5ortho.For the bond pairs a/b, c/d and e/f, the bonds labeled in bold exhibit a higher bond order than their neighboring labeled bonds in 5para.g, Laplace-filtered AFM images of 5 on bilayer NaCl/Cu(111) showing switching between 5OS and 5para as the molecule changes its adsorption position.The faint protrusion adjacent to 5 is a defect that stabilizes the adsorption of 5. STM set point: V = 0.2 V, I = 0.5 pA on bilayer NaCl, Δz = -0.3Å. STM and STS data in c and d are acquired on the same species, while the AFM data in e is acquired on a different species.Scale bars: 10 Å (d) and 5 Å (e,g).\nNMR (300 MHz, CDCl3) δ: 7.51 (m, 2H), 7.40 -7.28 (m, 5H), 7.27 -7.20 (m, 2H), 7.13 (d, J = 7.7 Hz, 1H), 2.07 (s, 3H), 1.77 (s, 3H) ppm. 
13C NMR-DEPT (75 MHz, CDCl3, 1:1 mixture of atropisomers) δ: 141.2 (C), 141.1 (C), 140.0 (C), 139.4 (2C), 137.5 (C), 137.4 (C), 136.0 (3C), 134.8 (C), 134.5 (C), 134.1 (C), 134.0 (C), 133.7 (C), 133.6 (C), 131.6 (CH), 131.2 (CH), 131.1 (CH), 130.7 (CH), 129.8 (CH), 129.7 (CH), 129.5 (CH), 129.4 (CH), 129.0 (CH), 128.9 (CH), 128.7 (2CH), 128.6 (2CH), 127.2 (CH), 127.1 (CH), 127.0 (CH), 126.9 (CH), 126.7 (CH), 126.6 (CH), 20.6 (CH3), 20.5 (CH3), 17.7 (CH3), 17.5 (CH3) ppm.MS (APCI) m/z (%): 327 (M+1, 100).HRMS: C20H16Cl2; calculated: 327.0702, found: 327.0709.\nNMR (500 MHz, CDCl3) δ: 7.93 (d, J = 7.6 Hz, 1H), 7.85 (d, J = 7.5 Hz, 1H), 7.78 (d, J = 7.7 Hz, 1H), 7.65 (d, J = 7.4 Hz, 1H), 7.61 (d, J = 7.5 Hz, 1H), 7.59 (d, J = 7.7 Hz, 1H), 7.47 (ddd, J = 8.4, 7.2, 1.1 Hz, 1H), 7.42 (dd, J = 8.1, 7.0 Hz, 1H), 7.35 (m, 2H), 4.22 (s, 3H), 4.02 (s, 3H).ppm. 13C NMR-DEPT (125 MHz, CDCl3) δ: 144.1 (C), 143.3 (C), 142.3 (C), 141.9 (C), 141.8 (C), 141.2 (C), 138.2 (C), 136.5 (C), 127.0 (CH), 126.9 (CH), 126.7 (CH), 126.6 (CH), 125.3 (CH), 125.2 (CH), 123.6 (CH), 122.2 (CH), 119.9 (CH), 118.4 (CH), 37.4 (CH2), 36.3 (CH2).ppm.MS (APCI) m/z (%): 254 (M+, 88).HRMS: C20H14; calculated: 254.1090, found: 254.1090.\n\nabstract\n\nIndenofluorenes are non-benzenoid conjugated hydrocarbons that have received great interest owing to their unusual electronic structure and potential applications in nonlinear optics and photovoltaics. Here, we report the generation of unsubstituted indeno[1,2-a]fluorene, the final and yet unreported parent indenofluorene regioisomer, on various surfaces by cleavage of two C-H bonds in 7,12-dihydro indeno[1,2-a]fluorene through voltage pulses applied by the tip of a combined scanning tunneling microscope and atomic force microscope.\nOn bilayer NaCl on Au(111), indeno[1,2a]fluorene is in the neutral charge state, while it exhibits charge bistability between neutral and anionic states on the lower work function surfaces of bilayer NaCl on Ag(111) and Cu(111). In the neutral state, indeno[1,2-a]fluorene exhibits either of two ground states: an open-shell π-diradical state, predicted to be a triplet by density functional and multireference many-body perturbation theory calculations, or a closedshell state with a para-quinodimethane moiety in the as-indacene core.\nSwitching between open-and closed-shell states of a single molecule is observed by changing its adsorption site on NaCl. The inclusion of non-benzenoid carbocyclic rings is a viable route to tune the physicochemical properties of polycyclic conjugated hydrocarbons (PCHs) . Non-benzenoid polycycles may lead to local changes in strain, conjugation, aromaticity, and, relevant to the context of the present work, induce an open-shell ground state of the corresponding PCHs .\nMany nonbenzenoid PCHs are also non-alternant, where the presence of odd-membered polycycles breaks the bipartite symmetry of the molecular network . Figure shows classical examples of non-benzenoid non-alternant PCHs, namely, pentalene, azulene and heptalene. Whereas azulene is a stable PCH exhibiting Hückel aromaticity ([4n+2] π-electrons, n = 2), pentalene and heptalene are unstable Hückel antiaromatic compounds with [4n] π-electrons, n = 2 (pentalene) and n = 3 (heptalene).\nBenzinterposition of pentalene generates indacenes, consisting of two isomers s-indacene and as-indacene (Fig. ). Apart from being antiaromatic, indacenes also contain proaromatic quinodimethane (QDM) moieties (Fig. ) , which endows them with potential open-shell character. 
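The Hückel electron counts invoked above work out as follows:

azulene: 10 π-electrons = 4(2) + 2; pentalene: 8 π-electrons = 4(2); heptalene: 12 π-electrons = 4(3); s-indacene and as-indacene: 12 π-electrons = 4(3),

which is why azulene is a stable aromatic hydrocarbon while pentalene, heptalene and the indacenes are formally antiaromatic.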
While the parent s-indacene and asindacene have never been isolated, stable derivatives of s-indacene bearing bulky substituents have been synthesized .\nA feasible strategy to isolate congeners of otherwise unstable non-benzenoid non-alternant PCHs is through fusion of benzenoid rings at the ends of the π-system, that is, benzannelation. For example, while the parent pentalene is unstable, the benzannelated congener indeno[2,1-a]indene is stable under ambient conditions (Fig. ) .\nHowever, the position of benzannelation is crucial for stability: although indeno[2,1a]indene is stable, its regioisomer indeno[1,2-a]indene (Fig. ) oxidizes under ambient conditions . Similarly, benzannelation of indacenes gives rise to the family of PCHs known as indenofluorenes (Fig. ), which constitute the topic of the present work.\nDepending on the benzannelation position and the indacene core, five regioisomers can be constructed, namely, indeno [ Practical interest in indenofluorenes stems from their low frontier orbital gap and excellent electrochemical characteristics that render them as useful components in organic electronic devices .\nThe potential open-shell character of indenofluorenes has led to several theoretical studies on their use as non-linear optical materials and as candidates for singlet fission in organic photovoltaics . Recent theoretical work has also shown that indenofluorene-based ladder polymers may exhibit fractionalized excitations.\nFundamentally, indenofluorenes represent model systems to study the interplay between aromaticity and magnetism at the molecular scale . Motivated by many of these prospects, the last decade has witnessed intensive synthetic efforts toward the realization of indenofluorenes. Derivatives of 1-4 have been realized in solution , while 1-3 have also been synthesized on surfaces and characterized using scanning tunneling microscopy (STM) and atomic force microscopy (AFM), which provide information on molecular orbital densities , molecular structure and oxidation state .\nWith regards to the open-shell character of indenofluorenes, 2-4 are theoretically and experimentally interpreted to be closed-shell, while calculations indicate that 1 and 5 should exhibit open-shell ground states . Bulk characterization of mesitylsubstituted 1, including X-ray crystallography, temperature-dependent NMR, and electron spin resonance spectroscopy, provided indications of its open-shell ground state .\nElectronic characterization of 1 on Au(111) surface using scanning tunneling spectroscopy (STS) revealed a low electronic gap of 0.4 eV (ref. ). However, no experimental proof of an openshell ground state of 1 on Au(111), such as detection of singly occupied molecular orbitals (SOMOs) or spin excitations and correlations due to unpaired electrons , was shown.\nIn this work, we report the generation and characterization of unsubstituted 5. Our research is motivated by theoretical calculations that indicate 5 to exhibit the largest diradical character among all indenofluorene isomers . The same calculations also predict that 5 should possess a triplet ground state.\nTherefore, 5 would qualify as a Kekulé triplet, of which only a handful of examples exist . However, definitive synthesis of 5 has never been reported so far. Previously, Dressler et al. 
reported transient isolation of mesityl-substituted 5, where it decomposed both in the solution and in solid state , and only the structural proof of the corresponding dianion was obtained.\nOn-surface generation of a derivative of 5, starting from truxene as a precursor, was recently reported . STM data on this compound, containing the indeno[1,2-a]fluorene moiety as part of a larger PCH, was interpreted to indicate its open-shell ground state. However, the results did not imply the ground state of unsubstituted 5. Here, we show that on insulating surfaces 5 can exhibit either of two ground states: an open-shell or a closed-shell.\nWe infer the existence of these two ground states based on high-resolution AFM imaging with bond-order discrimination and STM imaging of molecular orbital densities . AFM imaging reveals molecules with two different geometries. Characteristic bond-order differences in the two geometries concur with the geometry of either an open-or a closed-shell state.\nConcurrently, STM images at ionic resonances show molecular orbital densities corresponding to SOMOs for the open-shell geometry, but orbital densities of the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) for the closed-shell geometry. Our experimental results are in good agreement with density functional theory (DFT) and multireference perturbation theory calculations.\nFinally, we observe switching between open-and closed-shell states of a single molecule by changing its adsorption site on the surface. Synthetic strategy toward indeno[1,2-a]fluorene. The generation of 5 relies on the solution-phase synthesis of the precursor 7,12-dihydro indeno[1,2-a]fluorene (6). Details on synthesis and characterization of 6 are reported in Supplementary Figs.\n. Single molecules of 6 are deposited on coinage metal (Au(111), Ag(111) and Cu(111)) or insulator surfaces. In our work, insulating surfaces correspond to two monolayer-thick (denoted as bilayer) NaCl on coinage metal surfaces. Voltage pulses ranging between 4-6 V are applied by the tip of a combined STM/AFM system, which result in cleavage of one C-H bond at each of the pentagonal apices of 6, thereby leading to the generation of 5 (Fig. ).\nIn the main text, we focus on the generation and characterization of 5 on insulating surfaces. Generation and characterization of 5 on coinage metal surfaces is shown in Supplementary Fig. . ). Blue and orange colors represent spin up and spin down densities, respectively. c, Probability density of the SOMOs of 5OS (isovalue: 0.001 e -Å -3 ).\nd, DFT-calculated bond lengths of 5OS. e, Constant-height I(V) spectra acquired on a species of 5 assigned as 5OS, along with the corresponding dI/dV(V) spectra. Open feedback parameters: V = -2 V, I = 0.17 pA (negative bias side) and V = 2 V, I = 0.17 pA (positive bias side). Acquisition position of the spectra is shown in Supplementary Fig. . f, Scheme of many-body transitions associated to the measured ionic resonances of 5OS.\nAlso shown are STM images of assigned 5OS at biases where the corresponding transitions become accessible. Scanning parameters: I = 0.3 pA (V = -1.2 V and -1.5 V) and 0.2 pA (V = 1.3 V and 1.6 V). g, Laplace-filtered AFM image of assigned 5OS. STM set point: V = 0.2 V, I = 0.5 pA on bilayer NaCl, Δz = -0.3\nÅ. The tip-height offset Δz for each panel is provided with respect to the STM setpoint, and positive (negative) values of Δz denote tip approach (retraction) from the STM setpoint. 
f and g show the same molecule at the same adsorption site, which is next to a trilayer NaCl island. The bright and dark features in the trilayer NaCl island in g correspond to Cl -and Na + ions, respectively.\nScale bars: 10 Å (f) and 5 Å (g). To experimentally explore the electronic structure of 5, we used bilayer NaCl films on coinage metal surfaces to electronically decouple the molecule from the metal surfaces. Before presenting the experimental findings, we summarize the results of our theoretical calculations performed on 5 in the neutral charge state (denoted as 5 0 ).\nWe start by performing DFT calculations on 5 0 in the gas phase. Geometry optimization performed at the spin-unrestricted UB3LYP/6-31G level of theory leads to one local minimum, 5OS, the geometry of which corresponds to the open-shell resonance structure of 5 (Fig. , the label OS denotes open-shell).\nThe triplet electronic configuration of 5OS is the lowest-energy state, with the openshell singlet configuration 90 meV higher in energy. Geometry optimization performed at the restricted closed-shell RB3LYP/6-31G level reveals two local minima, 5para and 5ortho, the geometries of which (Fig. ) exhibit bond length alternations in line with the presence of a para-or an ortho-QDM moiety, respectively, in the as-indacene core of the closed-shell resonance structures of 5 (Fig. ) .\nRelative to 5OS in the triplet configuration, 5para and 5ortho are 0.40 and 0.43 eV higher in energy, respectively. Additional DFT results are shown in Supplementary Fig. . To gain more accurate insights into the theoretical electronic structure of 5, we performed multireference perturbation theory calculations (Supplementary Fig. ) based on quasi-degenerate second-order n-electron valence state perturbation theory (QD-NEVPT2).\nIn so far as the order of the ground and excited states are concerned, the results of QD-NEVPT2 calculations qualitatively match with DFT calculations. For 5OS, the triplet configuration remains the lowest-energy state, with the open-shell singlet configuration 60 meV higher in energy. The energy differences between the open-and closed-shell states are substantially reduced in QD-NEVPT2 calculations, with 5para and 5ortho only 0.11 and 0.21 eV higher in energy, respectively, compared to 5OS in the triplet configuration.\nWe also performed nucleus-independent chemical shift calculations to probe local aromaticity of 5 in the openand closed-shell states. While 5OS in the triplet configuration exhibits local aromaticity at the terminal benzenoid rings, 5OS in the open-shell singlet configuration, 5para and 5ortho all display antiaromaticity (Supplementary Fig. ).\nThe choice of the insulating surface determines the charge state of 5: while 5 adopts neutral charge state on the high work function bilayer NaCl/Au(111) surface (irrespective of its openor closed-shell state, Supplementary Fig. ), 5 exhibits charge bistability between 5 0 and the anionic state 5 -1 on the lower work function bilayer NaCl/Ag(111) and Cu(111) surfaces (Supplementary Figs. ).\nIn the main text, we focus on the characterization of 5 on bilayer NaCl/Au(111). Characterization of charge bistable 5 is reported in Supplementary Figs. . 
We first describe experiments on 5 on bilayer NaCl/Au(111), where 5 exhibits a geometry corresponding to the calculated 5OS geometry, and an open-shell electronic configuration.\nWe compare the experimental data on this species to calculations on 5OS with a triplet configuration, as theory predicts a triplet ground state for 5OS. For 5OS, the calculated frontier orbitals correspond to the SOMOs ψ1 and ψ2 (Fig. ), whose spin up levels are occupied and the spin down levels are empty.\nFigure shows the DFT-calculated bond lengths of 5OS, where the two salient features, namely, the small difference in the bond lengths within each ring and the notably longer bond lengths in the pentagonal rings, agree with the open-shell resonance structure of 5 (Fig. ). Figure shows an AFM image of 5 adsorbed on bilayer NaCl/Au(111) that we assign as 5OS, where the bond-order differences qualitatively correspond to the calculated 5OS geometry (discussed and compared to the closed-shell state below).\nDifferential conductance spectra (dI/dV(V), where I and V denote the tunneling current and bias voltage, respectively) acquired on assigned 5OS exhibit two peaks centered at -1.5 V and 1.6 V (Fig. ), which we assign to the positive and negative ion resonances (PIR and NIR), respectively. Figure shows the corresponding STM images acquired at the onset (V = -1.2\nV/1.3 V) and the peak (V = -1.5 V/1.6 V) of the ionic resonances. To draw a correspondence between the STM images and the molecular orbital densities, we consider tunneling events as many-body electronic transitions between different charge states of 5OS (Fig. ). Within this framework, the PIR corresponds to transitions between 5 0 and the cationic state 5 .\nAt the onset of the PIR at -1.2 V, an electron can only be detached from the SOMO ψ1 and the corresponding STM image at -1.2 V shows the orbital density of ψ1. Increasing the bias to the peak of the PIR at -1.5 V, it becomes possible to also empty the SOMO ψ2, such that the corresponding STM image shows the superposition of ψ1 and ψ2, that is, |ψ1| 2 + |ψ2| 2 (ref.\n). Similarly, the NIR corresponds to transitions between 5 0 and 5 -1 . At the NIR onset of 1.3 V, only electron attachment to ψ2 is energetically possible. At 1.6 V, electron attachment to ψ1 also becomes possible, and the corresponding STM image shows the superposition of ψ1 and ψ2. The observation of the orbital densities of SOMOs, and not the hybridized HOMO and LUMO, proves the open-shell ground state of assigned 5OS.\nMeasurements of the monoradical species with a doublet ground state are shown in Supplementary Fig. . Unexpectedly, another species of 5 was also experimentally observed that exhibited a closedshell ground state. In contrast to 5OS, where the frontier orbitals correspond to the SOMOs ψ1 and ψ2, DFT calculations predict orbitals of different shapes and symmetries for 5para and 5ortho, denoted as α and β and shown in Fig. .\nFor 5ortho, α and β correspond to HOMO and LUMO, respectively. The orbitals are inverted in energy and occupation for 5para, where β is the HOMO and α is the LUMO. Fig. shows an AFM image of 5 that we assign as 5para. We experimentally infer its closed-shell state first by using qualitative bond order discrimination by AFM.\nIn high-resolution AFM imaging, chemical bonds with higher bond order are imaged brighter (that is, with higher frequency shift Δf) due to stronger repulsive forces, and they appear shorter . In Fig. 
, we label seven bonds whose bond orders show significant qualitative differences in the calculated 5ortho, 5para (Fig. ) and 5OS (Fig. ) geometries.\nIn 5para, the bonds b and d exhibit a higher bond order than a and c, respectively. This pattern is reversed for 5ortho, while the bond orders of the bonds a-d are all similar and small for 5OS. Furthermore, in 5para bond f exhibits a higher bond order than e, while in 5ortho and 5OS bonds e and f exhibit similar bond order (because they belong to Clar sextets).\nFinally, the bond labeled g shows a higher bond order in 5para than in 5ortho and 5OS. The AFM image of assigned 5para shown in Fig. indicates higher bond orders of the bonds b, d and f compared to a, c and e, respectively. In addition, the bond g appears almost point-like and with enhanced Δf contrast compared to its neighboring bonds, indicative of a high bond order (see Supplementary Fig. for height-dependent measurements).\nThese observations concur with the calculated 5para geometry (Fig. ). Importantly, all these distinguishing bond-order differences are distinctly different in the AFM image of 5OS shown in Fig. , which is consistent with the calculated 5OS geometry (Fig. ). In the AFM images of 5OS (Fig. and Supplementary Fig. ), the bonds a-d at the pentagon apices appear with similar contrast and apparent bond length.\nThe bonds e and f at one of the terminal benzenoid rings also exhibit similar contrast and apparent bond length, while the central bond g appears longer compared to assigned 5para. Further compelling evidence for the closed-shell state of assigned 5para is obtained by STM and STS. dI/dV(V) spectra acquired on an assigned 5para species exhibit two peaks centered at -1.4 V (PIR) and 1.6 V (NIR) (Fig. ).\nSTM images acquired at these biases (Fig. ) show the orbital densities of β (-1.4 V) and α (1.6 V). First, the observation of α and β as the frontier orbitals of this species, and not the SOMOs, strongly indicates its closed-shell state. Second, consistent with AFM measurements that indicate good correspondence to the calculated 5para geometry, we observe β as the HOMO and α as the LUMO.\nFor 5ortho, α should be observed as the HOMO and β as the LUMO. We did not observe molecules with the signatures of 5ortho in our experiments. We observed molecules in open-(5OS, Fig. ) and closed-shell (5para, Fig. ) states in similar occurrence after their generation from 6 on the surface. We could also switch individual molecules between open-and closed-shell states as shown in Fig. and Supplementary Fig. .\nTo this end, a change in the adsorption site of a molecule was induced by STM imaging at ionic resonances, which often resulted in movement of the molecule. The example presented in Fig. shows a molecule that was switched from 5para to 5OS and back to 5para. The switching is not directed, that is, we cannot choose which of the two species will be formed when changing the adsorption site, and we observed 5OS and 5para in approximately equal yields upon changing the adsorption site.\nThe molecule in Fig. is adsorbed on top of a defect that stabilizes its adsorption geometry on bilayer NaCl. At defect-free adsorption sites on bilayer NaCl, that is, without a third layer NaCl island or atomic defects in the vicinity of the molecule, 5 could be stably imaged neither by AFM nor by STM at ionic resonances (Supplementary Fig. 
).\nWithout changing the adsorption site, the state of 5 (open-or closedshell) never changed, including the experiments on bilayer NaCl/Ag(111) and Cu(111), on which the charge state of 5 could be switched (Supplementary Figs. ). Also on these lower work function surfaces, both open-and closed-shell species were observed for 5 0 and both showed charge bistability between 5 0 (5OS or 5para) and 5 -1 (Supplementary Figs. ).\nThe geometrical structure of 5 -1 probed by AFM, and its electronic structure probed by STM imaging at the NIR (corresponding to transitions between 5 -1 and the dianionic state 5 -2 ), are identical within the measurement accuracy for the charged species of both 5OS and 5para. When cycling the charge state of 5 between 5 0 and 5 -1 several times, we always observed the same state (5OS or 5para) when returning to 5 0 , provided the molecule did not move during the charging/discharging process.\nBased on our experimental observations we conclude that indeno[1,2-a]fluorene (5), the last unknown indenofluorene isomer, can be stabilized in and switched between an open-shell (5OS) and a closed-shell (5para) state on NaCl. For the former, both DFT and QD-NEVPT2 calculations predict a triplet electronic configuration.\nTherefore, 5 can be considered to exhibit the spin-crossover effect, involving magnetic switching between high-spin (5OS) and low-spin (5para) states, coupled with a reversible structural transformation. So far, the spin-crossover effect has mainly only been observed in transition-metal-based coordination compounds with a near-octahedral geometry .\nThe observation that the switching between open-and closedshell states is related to changes in the adsorption site but is not achieved by charge-state cycling alone, indicates that the NaCl surface and local defects facilitate different electronic configurations of 5 depending on the adsorption site.\nGas-phase QD-NEVPT2 calculations predict that 5OS is the ground state, and the closed-shell 5para and 5ortho states are 0.11 and 0.21 eV higher in energy. The experiments, showing bidirectional switching between 5OS and 5para, indicate that a change in the adsorption site can induce sufficient change in the geometry of 5 (leading to a corresponding change in the ground state electronic configuration) and thus induce switching.\nSwitching between open-and closed-shell states in 5 does not require the breaking or formation of covalent bonds , but a change of adsorption site on NaCl where the molecule is physisorbed. Our results should have implications for single-molecule devices, capitalizing on the altered electronic and chemical properties of a system in π-diradical open-shell and closed-shell states such as frontier orbital and singlet-triplet gaps, and chemical reactivity.\nFor possible future applications as a single-molecule switch, it might be possible to also switch between open-and closed-shell states by changing the local electric field, such as by using chargeable adsorbates . Scanning probe microscopy measurements and sample preparation. STM and AFM measurements were performed in a home-built system operating at base pressures below 1×10 -10 mbar and a base temperature of 5 K. Bias voltages are provided with respect to the sample.\nAll STM, AFM and spectroscopy measurements were performed with carbon monoxide (CO) functionalized tips. AFM measurements were performed in non-contact mode with a qPlus sensor . The sensor was operated in frequency modulation mode with a constant oscillation amplitude of 0.5 Å. 
STM measurements were performed in constantcurrent mode, AFM measurements were performed in constant-height mode with V = 0 V, and I(V) and Δf(V) spectra were acquired in constant-height mode.\nPositive (negative) values of the tip-height offset Δz represent tip approach (retraction) from the STM setpoint. All dI/dV(V) spectra are obtained by numerical differentiation of the corresponding I(V) spectra. STM and AFM images, and spectroscopy curves, were post-processed using Gaussian low-pass filters.\nAu(111), Ag(111) and Cu(111) surfaces were cleaned by iterative cycles of sputtering with Ne + ions and annealing up to 800 K. NaCl was thermally evaporated on Au(111), Ag(111) and Cu(111) surfaces held at 323 K, 303 K and 283 K, respectively. This protocol results in the growth of predominantly bilayer (100)-terminated islands, with a minority of trilayer islands.\nSub-monolayer coverage of 6 on surfaces was obtained by flashing an oxidized silicon wafer containing the precursor molecules in front of the cold sample in the microscope. CO molecules for tip functionalization were dosed from the gas phase on the cold sample. Density functional theory calculations. DFT was employed using the PSI4 program package .\nAll molecules with different charge (neutral and anionic) and electronic (open-and closed-shell) states were independently investigated in the gas phase. The B3LYP exchangecorrelation functional with 6-31G basis set was employed for structural relaxation and singlepoint energy calculations. The convergence criteria were set to 10 −4 eV Å −1 for the total forces and 10 −6 eV for the total energies.\nMultireference calculations. Multireference calculations were performed on the DFToptimized geometries using the QD-NEVPT2 level of theory , with three singlet roots and one triplet root included in the state-averaged calculation. A (10,10) active space (that is, 10 electrons in 10 orbitals) was used along with the def2-TZVP basis set .\nIncreasing either the active space size or expanding the basis set resulted in changes of about 50 meV for relative energies of the singlet and triplet states. These calculations were performed using the ORCA program package . Nucleus-independent chemical shift (NICS) calculations. Isotropic nucleus-independent chemical shift values were evaluated at the centre of each ring using the B3LYP exchangecorrelation functional with def2-TZVP basis set using the Gaussian 16 software package .\nStarting materials (reagent grade) were purchased from TCI and Sigma-Aldrich and used without further purification. Reactions were carried out in flame-dried glassware and under an inert atmosphere of purified Ar using Schlenk techniques. Thin-layer chromatography (TLC) was performed on Silica Gel 60 F-254 plates (Merck).\nColumn chromatography was performed on silica gel (40-60 µm). Nuclear magnetic resonance (NMR) spectra were recorded on a Bruker Varian Mercury 300 or Bruker Varian Inova 500 spectrometers. Mass spectrometry (MS) data were recorded in a Bruker Micro-TOF spectrometer. The synthesis of compound 6 was developed following the two-step synthetic route shown in Supplementary Fig. , which is based on the preparation of methylene-bridge polyarenes by means of Pd-catalyzed activation of benzylic C-H bonds .\nSupplementary Figure | Synthetic route to obtain compound 6. 
The complex Pd2(dba)3 (20 mg, 0.02 mmol) was added over a deoxygenated mixture of 1,3-dibromo-2,4-dimethylbenzene (9, 100 mg, 0.38 mmol), boronic acid 10 (178 mg, 1.14 mmol), K2CO3 (314 mg, 2.28 mmol) and XPhos (35 mg, 0.08 mmol) in toluene (1:1, 10 mL), and the resulting mixture was heated at 90 °C for 2 h.\nAfter cooling to room temperature, the solvents were evaporated under reduced pressure. The reaction crude was purified by column chromatography (SiO2; hexane:CH2Cl2 9:1) affording 11 (94 mg, 76%) as a colorless oil. The complex Pd(OAc)2 (7 mg, 0.03 mmol) was added over a deoxygenated mixture of terphenyl 11 (90 mg, 0.27 mmol), K2CO3 (114 mg, 0.83 mmol) and ligand L (26 mg, 0.06 mmol) in NMP (2 mL).\nThe resulting mixture was heated at 160 °C for 4 h. After cooling to room temperature, H2O (30 mL) was added, and the mixture was extracted with EtOAc (3x15 mL). The combined organic extracts were dried over anhydrous Na2SO4, filtered, and evaporated under reduced pressure. The reaction crude was purified by column chromatography (SiO2; hexane:CH2Cl2 9:1) affording compound 6 (8 mg, 11%) as a white solid. in AFM imaging due to their reduced adsorption height compared to the rest of the carbon atoms.\nWe attribute this observation to the significantly different lattice parameter of Cu(111) (2.57 Å) compared to Au(111) and Ag(111) (2.95 Å and 2.94 Å, respectively) , such that the apical carbon atoms of the pentagonal rings of 5 adsorb on the on-top atomic sites on Au(111) and Ag(111), but not on Cu(111).\nOur speculation is based on a previous study of polymers of 1 on Au(111) by Di Giovannantonio et al. , where both tilted and planar individual units of 1 were observed depending on whether the apical carbon atoms of the pentagonal rings in 1 adsorbed on the on-top or hollow sites of the surface, respectively.\nGiven the strong molecule-metal interaction, we found no electronic state signatures of 5 on all three metal surfaces. STM set point for AFM images: V = 0. e, Frontier orbital spectrum of 5 -1 . In the anionic state, ψ2 becomes doubly occupied and ψ1 is the SOMO. Filled and empty circles denote occupied and empty orbitals, respectively.\nFor each panel, zero of the energy axis has been aligned to the respective highest-energy occupied orbital.", "answers": ["Yes, individual molecules of indeno[1,2-a]fluorene can switch between open-shell and closed-shell states by changing their adsorption site on the surface."], "length": 5523, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "566881d2138d7e29cd6dd2b661b6f7ffe4c515c92fdaf837"} {"input": "What are the symptoms of alpha thalassemia major?", "context": "Thalassaemia minor | definition of Thalassaemia minor by Medical dictionary\nThalassaemia minor | definition of Thalassaemia minor by Medical dictionary\nhttps://medical-dictionary.thefreedictionary.com/Thalassaemia+minor\n(redirected from Thalassaemia minor)\nRelated to Thalassaemia minor: thalassaemia major\nThalassemia describes a group of inherited disorders characterized by reduced or absent amounts of hemoglobin, the oxygen-carrying protein inside the red blood cells. There are two basic groups of thalassemia disorders: alpha thalassemia and beta thalassemia. These conditions cause varying degrees of anemia, which can range from insignificant to life threatening.\nAll types of thalassemias are considered quantitative diseases of hemoglobin, because the quantity of hemoglobin produced is reduced or absent. 
Usual adult hemoglobin is made up of three components: alpha globin, beta globin, and heme. Thalassemias are classified according to the globin that is affected, hence the names alpha and beta thalassemia. Although both classes of thalassemia affect the same protein, the alpha and beta thalassemias are distinct diseases that affect the body in different ways.\nBeta thalassemia may be the most best-known type of thalassemia and is also called Cooley's anemia. It is caused by a change in the gene for the beta globin component of hemoglobin. Beta thalassemia causes variable anemia that can range from moderate to severe, depending in part on the exact genetic change underlying the disease. Beta thalassemia can be classified based on clinical symptoms. Beta thalassemia major usually causes severe anemia that can occur within months after birth. If left untreated, severe anemia can result in insufficient growth and development, as well as other common physical complications that can lead to a dramatically decreased life-expectancy. Fortunately, in developed countries beta thalassemia is usually identified by screening in the newborn period, before symptoms have developed. Children who are identified early can be started on ongoing blood transfusion therapy as needed. Although transfusion therapy prevents many of the complications of severe anemia, the body is unable to eliminate the excess iron contained in the transfused blood. Over time, the excess iron deposits in tissues and organs, resulting in damage and organ failure. Another medication must be administered to help the body eliminate the excess iron and prevent iron-over-load complications. Beta thalassemia intermedia describes the disease in individuals who have moderate anemia that only requires blood transfusions intermittently, if at all.\nAlpha thalassemia is the result of changes in the genes for the alpha globin component of hemoglobin. There are two main types of alpha thalassemia disease: hemoglobin H disease and alpha thalassemia major. The two diseases are quite different from beta thalassemia as well as from one another. Individuals with hemoglobin H disease can experience events of hemolytic anemia—anemia caused by the rapid breakdown of the red blood cells. These events are thought to be triggered by various environmental causes, such as infection and/or exposure to certain chemicals. Hemoglobin H disease is in most cases milder than beta thalassemia. It does not generally require transfusion therapy. Alpha thalassemia major is a very serious disease that results in severe anemia that begins even before birth. Most affected babies do not survive to be born or die shortly after birth.\nThe thalassemias are among the most common genetic diseases worldwide. Both alpha and beta thalassemia have been described in individuals of almost every ancestry, but the conditions are more common among certain ethnic groups. Unaffected carriers of all types of thalassemia traits do not experience health problems. In fact, the thalassemia trait is protective against malaria, a disease caused by blood-borne parasites transmitted through mosquito bites. According to a widely accepted theory, most genetic changes—mutations—that cause thalassemia occurred multiple generations ago. Coincidentally, these mutations increased the likelihood that carriers would survive malaria infection. Survivors passed the mutation onto their offspring, and the trait became established throughout areas where malaria is common. 
As populations migrated, so did the thalassemia traits.\nBeta thalassemia trait is seen most commonly in people with the following ancestry: Mediterranean (including North African, and particularly Italian and Greek), Middle Eastern, Indian, African, Chinese, and Southeast Asian (including Vietnamese, Laotian, Thai, Singaporean, Filipino, Cambodian, Malaysian, Burmese, and Indonesian). Alpha-thalassemia trait is seen with increased frequency in the same ethnic groups. However, there are different types of alpha thalassemia traits within these populations. The frequency of hemoglobin H disease and alpha thalassemia major depends on the type of alpha thalassemia trait. The populations in which alpha thalassemia diseases are most common include Southeast Asians and Chinese (particularly Southern Chinese).\nIt is difficult to obtain accurate prevalence figures for various types of thalassemia within different populations. This difficulty arises due to testing limitations in determining exact genetic diagnoses, as well as the fact that many studies have focused on small, biased hospital populations.\nTwo studies reflect prevalence figures that can be helpful counseling families and determining who to screen for beta thalassemia. Between the years of 1990 and 1996, the State of California screened more than 3.1 million infants born in the state for beta thalassemia. Approximately 1 in 114,000 infants had beta thalassemia major, with prevalence rates being highest among Asian Indians (about one in 4,000), Southeast Asians (about one in 10,000), and Middle Easterners (about one in 7,000). Another type of beta thalassemia disease, E/beta thalassemia, was represented in approximately one in 110,000 births, all of which occurred in families of Southeast Asian ancestry. Among Southeast Asians, the prevalence of E/beta thalassemia was approximately one in 2,600 births. This is in keeping with the observation that hemoglobin E trait carrier rates are relatively high within the Southeast Asian population: 16% in a study of 768 immigrants to California, and up to 25% in some specific Southeast Asian populations such as Cambodians. While these California studies address some of the limitations of earlier population studies, the pattern observed in California is expected to be different in other areas of the United States and the world. For example, Italians are underrepresented in this population when compared to the population of the East Coast of the United States.\nDetermining prevalence figures for alpha thalassemia is even more difficult due to increased limitations in diagnostic testing. All types of alpha thalassemia disease are most common among people of Southeast Asian and Chinese descent, for reasons that become clearer with an understanding of the underlying genetics of alpha thalassemia. One study of 500 pregnant women in Northern Thailand estimated a frequency of one in 500 pregnancies affected by alpha thalassemia major, for example. Prevalence of alpha thalassemia disease is significantly lower in the United States primarily because of immigration patterns; although at least one state, California, has observed growing hemoglobin H disease incidence rates that are high enough to justify universal newborn screening for the condition.\nHumans normally make several types of the oxygen-carrying protein hemoglobin. An individual's stage in development determines whether he or she makes primarily embryonic, fetal, or adult hemoglobins. 
All types of hemoglobin are made of three components: heme, alpha (or alpha-like) globin, and beta (or beta-like) globin. All types of thalassemia are caused by changes in either the alpha- or beta-globin gene. These changes cause little or no globin to be produced. The thalassemias are, therefore, considered quantitative hemoglobin diseases. All types of thalassemias are recessively inherited, meaning that a genetic change must be inherited from both the mother and the father. The severity of the disease is influenced by the exact thalassemia mutations inherited, as well as other genetic and environmental factors. There are rare exceptions, notably with beta thalassemia, where globin gene mutations exhibit a dominant pattern of inheritance in which only one gene needs to be altered in order to see disease expression. Scientists continue to study the causes. For instance, a new mutation for alpha-thalassemia was discovered for the first time among Iranian patients in 2004.\nBETA-THALASSEMIA. Most individuals have two normal copies of the beta globin gene, which is located on chromosome 11 and makes the beta globin component of normal adult hemoglobin, hemoglobin A. There are approximately 100 genetic mutations that have been described that cause beta thalassemia, designated as either beta0 or beta + mutations. No beta globin is produced with a beta0 mutation, and only a small fraction of the normal amount of beta globin is produced with a beta + mutation.\nWhen an individual has one normal beta globin gene and one with a beta thalassemia mutation, he or she is said to carry the beta thalassemia trait. Beta thalassemia trait, like other hemoglobin traits, is protective against malaria infection. Trait status is generally thought not to cause health problems, although some women with beta thalassemia trait may have an increased tendency toward anemia during pregnancy.\nWhen two members of a couple carry the beta thalassemia trait, there is a 25% chance that each of their children will inherit beta thalassemia disease by inheriting two beta thalassemia mutations, one from each parent. The clinical severity of the beta thalassemia disease—whether an individual has beta thalassemia intermedia or beta thalassemia major—will depend largely on whether the mutations inherited are beta0 thalassemia or beta + thalassemia mutations. Two beta0 mutations generally lead to beta thalassemia major, and two beta+ thalassemia mutations generally lead to beta thalassemia intermedia. Inheritance of one beta0 and one beta + thalassemia mutation tends to be less predictable.\nAlthough relatively uncommon, there are other thalassemia-like mutations that can affect the beta globin gene. Hemoglobin E is the result of a substitution of a single nucleotide. This change results in a structurally altered hemoglobin that is produced in decreased amounts. Therefore, hemoglobin E is unique in that it is both a quantitative (i.e. thalassemia-like) and qualitative trait. When co-inherited with a beta thalassemia trait, it causes a disease that is almost indistinguishable from beta thalassemia disease. Large deletions around and including the beta globin gene can lead to delta/beta thalassemia or hereditary persistence of fetal hemoglobin (HPFH). Interestingly, delta/beta thalassemia trait behaves very similarly to beta thalassemia trait in its clinical manifestations. However, HPFH trait does not tend to cause hemoglobin disease when co-inherited with a second thalassemia or other beta globin mutation.\nALPHA-THALASSEMIA. 
Most individuals have four normal copies of the alpha globin gene, two copies on each chromosome 16. These genes make the alpha globin component of normal adult hemoglobin, which is called hemoglobin A. Alpha globin is also a component of fetal hemoglobin and the other major adult hemoglobin called hemoglobin A2. Mutations of the alpha globin genes are usually deletions of the gene, resulting in absent production of alpha globin. Since there are four genes (instead of the usual two) to consider when looking at alpha globin gene inheritance, there are several alpha globin types that are possible.\nAbsence of one alpha globin gene leads to a condition known as silent alpha thalassemia trait. This condition causes no health problems and can be detected only by special genetic testing. Alpha thalassemia trait occurs when two alpha globin genes are missing. This can occur in two ways. The genes may be deleted from the same chromosome, causing the 'cis' type of alpha thalassemia trait. Alternately, they may be deleted from different chromosomes, causing the 'trans' type of alpha thalassemia trait. In both instances, there are no associated health problems, although the trait status may be detected by more routine blood screening.\nHemoglobin H disease results from the deletion of three alpha globin genes, such that there is only one functioning gene. Typically, this can occur when one parent carries the silent alpha thalassemia trait, and the other parent carries the 'cis' type of the alpha thalassemia trait. In this situation, there is a 25% chance for hemoglobin H disease in each of such a couple's children.\nHemoglobin H disease-like symptoms can also be a part of a unique condition called alpha thalassemia mental retardation syndrome. Alpha thalassemia mental retardation syndrome can be caused by a deletion of a significant amount of chromosome 16, affecting the alpha globin genes. This is usually not inherited, but rather occurs sporadically in the affected individual. Affected individuals have mild hemoglobin H disease, mild-to-moderate mental retardation, and characteristic facial features. This syndrome can also occur as a sex-linked form in which a mutation is inherited in a particular gene on the X-chromosome. This gene influences alpha globin production, as well as various other developmental processes. Individuals affected with this form of the syndrome tend to have more severe mental retardation, delayed development, nearly absent speech, characteristic facial features, and genital-urinary abnormalities. The remaining discussion will focus only on aspects of hemoglobin H disease.\nAlpha thalassemia major results from the deletion of all four alpha globin genes, such that there are no functioning alpha globin genes. This can occur when both parents carry the 'cis' type of the alpha thalassemia trait. In this situation, there is a 25% chance for alpha thalassemia major in each of such a couple's children.\nBeta thalassemia major is characterized by severe anemia that can begin months after birth. In the United States and other developed countries beta thalassemia is identified and treated early and effectively. Therefore, the following discussion of symptoms applies primarily to affected individuals in the past and unfortunately in some underdeveloped countries now. If untreated, beta thalassemia major can lead to severe lethargy, paleness, and delays in growth and development. The body attempts to compensate by producing more blood, which is made inside the bones in the marrow. 
However, this is ineffective without the needed genetic instructions to make enough functioning hemoglobin. Instead, obvious bone expansion and changes occur that cause characteristic facial and other changes in appearance, as well as increased risk of fractures. Severe anemia taxes other organs in the body—such as the heart, spleen, and liver—which must work harder than usual. This can lead to heart failure, as well as enlargement and other problems of the liver and spleen. When untreated, beta thalassemia major generally results in childhood death, usually due to heart failure. In 2004, the first known heart attack associated with beta thalassemia major was reported. Fortunately, in developed countries diagnosis is usually made early, often before symptoms have begun. This allows for treatment with blood transfusion therapy, which can prevent most of the complications of the severe anemia caused by beta thalassemia major. Individuals with beta thalassemia intermedia have a more moderate anemia that may only require treatment with transfusion intermittently, such as when infections occur and stress the body. As a person with beta thalassemia intermedia gets older, however, the need for blood transfusions may increase to the point that they are required on a regular basis. When this occurs their disease becomes more similar to beta thalassemia major. Other genetic and environmental factors can influence the course of the disease as well. For example, co-inheritance of one or two alpha thalassemia mutations can tend to ameliorate some of the symptoms of beta thalassemia disease, which result in part from an imbalance in the amount of alpha- and beta-globin present in the red blood cells.\nHemoglobin h disease\nAbsence of three alpha globin genes causes an imbalance of alpha and beta globin proteins in the red blood cells. The excess beta globin proteins tend to come together to form hemoglobin H, which is unable to release oxygen to the tissues. In addition, hemoglobin H tends to precipitate out in the cells, causing damage to the red blood cell membrane. When affected individuals are exposed to certain drugs and chemicals known to make the membrane more fragile, the cells are thought to become vulnerable to breakdown in large numbers, a complication called hemolytic anemia. Fever and infection are also considered to be triggers of hemolytic anemia in hemoglobin H disease. This can result in fatigue, paleness, and a yellow discoloration of the skin and whites of eyes called jaundice. Usually, the anemia is mild enough not to require treatment. Severe anemia events may require blood transfusion, however, and are usually accompanied by such other symptoms as dark feces or urine and abdominal or back pain. These events are uncommon in hemoglobin H disease, although they occur more frequently in a more serious type of hemoglobin H disease called hemoglobin H/Constant Spring disease. Individuals effected with this type of hemoglobin H disease are also more likely to have enlargement of and other problems with the spleen.\nAlpha thalassemia major\nBecause alpha globin is a necessary component of all major hemoglobins and some minor hemoglobins, absence of all functioning alpha globin genes leads to serious medical consequences that begin even before birth. Affected fetuses develop severe anemia as early as the first trimester of pregnancy. The placenta, heart, liver, spleen, and adrenal glands may all become enlarged. 
Fluid can begin collecting throughout the body as early as the start of the second trimester, causing damage to developing tissues and organs. Growth retardation is also common. Affected fetuses usually miscarry or die shortly after birth. In addition, women carrying affected fetuses are at increased risk of developing complications of pregnancy and delivery. Up to 80% of such women develop toxemia, a disturbance of metabolism that can potentially lead to convulsions and coma. Other maternal complications include premature delivery and increased rates of delivery by cesarean section, as well as hemorrhage after delivery.\nThalassemia may be suspected if an individual shows signs that are suggestive of the disease. In all cases, however, laboratory diagnosis is essential to confirm the exact diagnosis and to allow for the provision of accurate genetic counseling about recurrence risks and testing options for parents and affected individuals. Screening is likewise recommended to determine trait status for individuals of high-risk ethnic groups.\nThe following tests are used to screen for thalassemia disease and/or trait:\nhemoglobin electrophoresis with quantitative hemoglobin A2 and hemoglobin F\nfree erythrocyte-protoporphyrin (or ferritin or other studies of serum iron levels)\nA complete blood count will identify low levels of hemoglobin, small red blood cells, and other red blood cell abnormalities that are characteristic of a thalassemia diagnosis. Since thalassemia trait can sometimes be difficult to distinguish from iron deficiency, tests to evaluate iron levels are important. A hemoglobin electrophoresis is a test that can help identify the types and quantities of hemoglobin made by an individual. This test uses an electric field applied across a slab of gel-like material. Hemoglobins migrate through this gel at various rates and to specific locations, depending on their size, shape, and electrical charge. Isoelectric focusing and high-performance liquid chromatography (HPLC) use similar principles to separate hemoglobins and can be used instead of or in various combinations with hemoglobin electrophoresis to determine the types and quantities of hemoglobin present. Hemoglobin electrophoresis results are usually within the normal range for all types of alpha thalassemia. However, hemoglobin A2 levels and sometimes hemoglobin F levels are elevated when beta thalassemia disease or trait is present. Hemoglobin electrophoresis can also detect structurally abnormal hemoglobins that may be co-inherited with a thalassemia trait to cause thalassemia disease (i.e., hemoglobin E) or other types of hemoglobin disease (i.e., sickle hemoglobin). Sometimes DNA testing is needed in addition to the above screening tests. This can be performed to help confirm the diagnosis and establish the exact genetic type of thalassemia.\nDiagnosis of thalassemia can occur under various circumstances and at various ages. Several states offer thalassemia screening as part of the usual battery of blood tests done for newborns. This allows for early identification and treatment. Thalassemia can be identified before birth through the use of prenatal diagnosis. Chorionic villus sampling (CVS) can be offered as early as 10 weeks of pregnancy and involves removing a sample of the placenta made by the baby and testing the cells. CVS carries a risk of causing a miscarriage that is between 0.5%-1%. Amniocentesis is generally offered between 15 and 22 weeks of pregnancy, but can sometimes be offered earlier. 
Two to three tablespoons of the fluid surrounding the baby is removed. This fluid contains fetal cells that can be tested. The risk of miscarriage associated with amniocentesis ranges from 0.33-0.5%. Pregnant woman and couples may choose prenatal testing in order to prepare for the birth of a baby that may have thalassemia. Alternately, knowing the diagnosis during pregnancy allows for the option of pregnancy termination. Preimplantation genetic diagnosis (PGD) is a relatively new technique that involves in-vitro fertilization followed by genetic testing of one cell from each developing embryo. Only the embryos unaffected by sickle cell disease are transferred back into the uterus. PGD is currently available on a research basis only and is relatively expensive.\nIndividuals with beta thalassemia major receive regular blood transfusions, usually on a monthly basis. This helps prevent severe anemia and allows for more normal growth and development. Transfusion therapy does have limitations, however. Individuals can develop reactions to certain proteins in the blood—called a transfusion reaction. This can make locating appropriately matched donor blood more difficult. Although blood supplies in the United States are very safe, particularly relative to the past and to other areas of the world, there remains an increased risk of exposure to such blood-borne infections as hepatitis. Additionally, the body is not able to get rid of the excess iron that accompanies each transfusion. An additional medication called desferoxamine is administered, usually five nights per week over a period of several hours, using an automatic pump that can be used during sleep or taken anywhere the person goes. This medication is able to bind to the excess iron, which can then be eliminated through urine. If desferoxamine is not used regularly or is unavailable, iron overload can develop and cause tissue damage and organ damage and failure. The heart, liver, and endocrine organs are particularly vulnerable. Desferoxamine itself may rarely produce allergic or toxic side effects, including hearing damage. Signs of desferoxamine toxicity are screened for and generally develop in individuals who overuse the medication when body iron levels are sufficiently low. Overall, however, transfusion and desferoxamine therapy have increased the life expectancy of individuals with the most severe types of beta thalassemia major to the 4th or 5th decade. This can be expected to improve with time and increased developments in treatment, as well as for those with more mild forms of the disease.\nNew treatments offer additional options for some individuals with beta thalassemia major. There are various medications that target the production of red blood cells (i.e. erythropoeitin) or fetal hemoglobin (i.e. hydroxyurea and butyrate). Their effectiveness in ameliorating the severity of beta thalassemia is currently being investigated. Another promising new treatment is bone marrow transplantation, in which the bone marrow of an affected individual is replaced with the bone marrow of an unaffected donor. If successful, this treatment can provide a cure. However, there is an approximately 10-15% chance the procedure could be unsuccessful (i.e. the thalassemia returns); result in complications (i.e. graft-versus-host disease); or result in death. The risk for specific individuals depends on current health status, age, and other factors. 
Because of the risks involved and the fact that beta thalassemia is a treatable condition, transplant physicians require a brother or sister donor who has an identically matched tissue type, called HLA type. HLA type refers to the unique set of proteins present on each individual's cells, which allows the immune system to recognize \"self\" from \"foreign.\" HLA type is genetically determined, so there is a 25% chance for two siblings to be a match. Transplant physicians and researchers are also investigating ways to improve the safety and effectiveness of bone marrow transplantation. Using newborn sibling umbilical cord blood—the blood from the placenta that is otherwise discarded after birth but contains cells that can go on to make bone marrow—seems to provide a safer and perhaps more effective source of donor cells. Donors and recipients may not have to be perfect HLA matches for a successful transplant using cord blood cells. Trials are also underway to determine the effectiveness of \"partial transplants,\" in which a safer transplant procedure is used to replace only a percentage of the affected individual's bone marrow. Other possible treatments on the horizon may include gene therapy techniques aimed at increasing the amount of normal hemoglobin the body is able to make.\nHemoglobin H disease is a relatively mild form of thalassemia that may go unrecognized. It is not generally considered a condition that will reduce one's life expectancy. Education is an important part of managing the health of an individual with hemoglobin H disease. It is important to be able to recognize the signs of severe anemia that require medical attention. It is also important to be aware of the medications, chemicals, and other exposures to avoid due to the theoretical risk they pose of causing a severe anemia event. When severe anemia occurs, it is treated with blood transfusion therapy. For individuals with hemoglobin H disease, this is rarely required. For those with the hemoglobin H/Constant Spring form of the disease, the need for transfusions may be intermittent or ongoing, perhaps on a monthly basis and requiring desferoxamine treatment. Individuals with this more severe form of the disease may also have an increased chance of requiring removal of an enlarged and/or overactive spleen.\nAnemia — A blood condition in which the level of hemoglobin or the number of red blood cells falls below normal values. Common symptoms include paleness, fatigue, and shortness of breath.\nBilirubin — A yellow pigment that is the end result of hemoglobin breakdown. This pigment is metabolized in the liver and excreted from the body through the bile. Bloodstream levels are normally low; however, extensive red cell destruction leads to excessive bilirubin formation and jaundice.\nBone marrow — A spongy tissue located in the hollow centers of certain bones, such as the skull and hip bones. Bone marrow is the site of blood cell generation.\nBone marrow transplantation — A medical procedure used to treat some diseases that arise from defective blood cell formation in the bone marrow. Healthy bone marrow is extracted from a donor to replace the marrow in an ailing individual. Proteins on the surface of bone marrow cells must be identical or very closely matched between a donor and the recipient.\nDesferoxamine — The primary drug used in iron chelation therapy. 
It aids in counteracting the life-threatening buildup of iron in the body associated with long-term blood transfusions.\nGlobin — One of the component protein molecules found in hemoglobin. Normal adult hemoglobin has a pair each of alpha-globin and beta-globin molecules.\nHeme — The iron-containing molecule in hemoglobin that serves as the site for oxygen binding.\nHemoglobin — Protein-iron compound in the blood that carries oxygen to the cells and carries carbon dioxide away from the cells.\nHemoglobin A — Normal adult hemoglobin that contains a heme molecule, two alpha-globin molecules, and two beta-globin molecules.\nHemoglobin electrophoresis — A laboratory test that separates molecules based on their size, shape, or electrical charge.\nHepatomegaly — An abnormally large liver.\nHLA type — Refers to the unique set of proteins called human leukocyte antigens. These proteins are present on each individual's cell and allow the immune system to recognize 'self' from 'foreign'. HLA type is particularly important in organ and tissue transplantation.\nHydroxyurea — A drug that has been shown to induce production of fetal hemoglobin. Fetal hemoglobin has a pair of gamma-globin molecules in place of the typical beta-globins of adult hemoglobin. Higher-than-normal levels of fetal hemoglobin can ameliorate some of the symptoms of thalassemia.\nIron overload — A side effect of frequent blood transfusions in which the body accumulates abnormally high levels of iron. Iron deposits can form in organs, particularly the heart, and cause life-threatening damage.\nJaundice — Yellowing of the skin or eyes due to excess of bilirubin in the blood.\nMutation — A permanent change in the genetic material that may alter a trait or characteristic of an individual, or manifest as disease, and can be transmitted to offspring.\nPlacenta — The organ responsible for oxygen and nutrition exchange between a pregnant mother and her developing baby.\nRed blood cell — Hemoglobin-containing blood cells that transport oxygen from the lungs to tissues. In the tissues, the red blood cells exchange their oxygen for carbon dioxide, which is brought back to the lungs to be exhaled.\nScreening — Process through which carriers of a trait may be identified within a population.\nSplenomegaly — Enlargement of the spleen.\nBecause alpha thalassemia major is most often a condition that is fatal in the prenatal or newborn period, treatment has previously been focused on identifying affected pregnancies in order to provide appropriate management to reduce potential maternal complications. Pregnancy termination provides one form of management. Increased prenatal surveillance and early treatment of maternal complications is an approach that is appropriate for mothers who wish to continue their pregnancy with the knowledge that the baby will most likely not survive. In recent years, there have been a handful of infants with this condition who have survived long-term. Most of these infants received experimental treatment including transfusions before birth, early delivery, and even bone marrow transplantation before birth, although the latter procedure has not yet been successful. For those infants that survive to delivery, there seems to be an increased risk of developmental problems and physical effects, particularly heart and genital malformations. 
Otherwise, their medical outlook is similar to a child with beta thalassemia major, with the important exception that ongoing, life-long blood transfusions begin right at birth.\nAs discussed above, the prognosis for individuals with the most serious types of thalassemia has improved drastically in the last several years following recent medical advances in transfusion, chemo-, and transplantation therapy. Advances continue and promise to improve the life expectancy and quality of life further for affected individuals.\n\"First Known Heart Attack Associated With Beta-thalassemia Major Reported.\" Heart Disease Weekly February 22, 2004: 10.\n\"Novel Alpha-thalassemia Mutations Identified.\" Hematology Week January 26, 2004: 19.\nChildren's Blood Foundation. 333 East 38th St., Room 830, New York, NY 10016-2745. (212) 297-4336. cfg@nyh.med.cornell.edu.\nCooley's Anemia Foundation, Inc. 129-09 26th Ave. #203, Flushing, NY 11354. (800) 522-7222 or (718) 321-2873. http://www.thalassemia.org.\nMarch of Dimes Birth Defects Foundation. 1275 Mamaroneck Ave., White Plains, NY 10605. (888) 663-4637. resourcecenter@modimes.org. http://www.modimes.org.\nNational Heart, Lung, and Blood Institute. PO Box 30105, Bethesda, MD 20824-0105. (301) 592-8573. nhlbiinfo@rover.nhlbi.nih.gov. http://www.nhlbi.nih.gov.\nNational Organization for Rare Disorders (NORD). PO Box 8923, New Fairfield, CT 06812-8923. (203) 746-6518 or (800) 999-6673. Fax: (203) 746-6481. http://www.rarediseases.org.\nBojanowski J. \"Alpha Thalassemia Major: The Possibility of Long-Term Survival.\" Pamphlet from the Northern California Comprehensive Thalassemia Center. (1999).\nChildren's Hospital Oakland, Northern California Comprehensive Thalassemia Center website. http://www.thalassemia.com.\nCooley's Anemia Foundation, Inc. website. http://www.thalassemia.org/gohome.html.\nJoint Center for Sickle Cell and Thalassemic Disorders website. http://cancer.mgh.harvard.edu/medOnc/sickle.htm.\n[thal″ah-se´me-ah]\na heterogeneous group of hereditary hemolytic anemias marked by a decreased rate of synthesis of one or more hemoglobin polypeptide chains, classified according to the chain involved (α, β, δ); the two major categories are α- and β-thalassemia.\nα-thalassemia (alpha-thalassemia) that caused by diminished synthesis of alpha chains of hemoglobin. The homozygous form is incompatible with life, the stillborn infant displaying severe hydrops fetalis. The heterozygous form may be asymptomatic or marked by mild anemia.\nβ-thalassemia (beta-thalassemia) that caused by diminished synthesis of beta chains of hemoglobin. The homozygous form is called t. major and the heterozygous form is called t.
minor.\nthalassemia ma´jor the homozygous form of β-thalassemia, in which hemoglobin A is completely absent; it appears in the newborn period and is marked by hemolytic, hypochromic, microcytic anemia; hepatosplenomegaly; skeletal deformation; mongoloid facies; and cardiac enlargement.\nthalassemia mi´nor the heterozygous form of β-thalassemia; it is usually asymptomatic, but there may be mild anemia.\nsickle cell–thalassemia a hereditary anemia involving simultaneous heterozygosity for hemoglobin S and thalassemia.\nthal·as·se·mi·a\n, thalassanemia (thal'ă-sē'mē-ă, thă-las-ă-nē'mē-ă),\nAny of a group of inherited disorders of hemoglobin metabolism in which there is impaired synthesis of one or more of the polypeptide chains of globin; several genetic types exist, and the corresponding clinical picture may vary from barely detectable hematologic abnormality to severe and fatal anemia.\n[G. thalassa, the sea, + haima, blood]\n/thal·as·se·mia/ (thal″ah-se´me-ah) a heterogeneous group of hereditary hemolytic anemias marked by a decreased rate of synthesis of one or more hemoglobin polypeptide chains, classified according to the chain involved (α, β, δ); the two major categories are α- and β-thalassemia.\nα-thalassemia that caused by diminished synthesis of alpha chains of hemoglobin. The homozygous form is incompatible with life, the stillborn infant displaying severe hydrops fetalis. The heterozygous form may be asymptomatic or marked by mild anemia.\nβ-thalassemia that caused by diminished synthesis of beta chains of hemoglobin. The homozygous form is called t. major and the heterozygous form is called t. minor.\nthalassemia ma´jor the homozygous form of β, in which hemoglobin A is completely absent; it appears in the newborn period and is marked by hemolytic, hypochromic, microcytic anemia, hepatosplenomegaly, skeletal deformation, mongoloid facies, and cardiac enlargement.\nthalassemia mi´nor the heterozygous form of β, usually asymptomatic, although there is sometimes mild anemia.\n(thăl′ə-sē′mē-ə)\nAn inherited form of anemia occurring chiefly among people of Mediterranean descent, caused by faulty synthesis of part of the hemoglobin molecule. Also called Mediterranean anemia.\nthal′as·se′mic adj.\n[thal′əsē′mē·ə]\nEtymology: Gk, thalassa, sea, a + haima, without blood\nproduction and hemolytic anemia characterized by microcytic, hypochromic red blood cells. Thalassemia is caused by inherited deficiency of alpha- or beta-globin synthesis. See also hemochromatosis, hemosiderosis.\nBeta thalassemia, clinical thalassemia, Cooley's anemia, Mediterranean anemia, thalassemia major Hematology A group of genetic diseases by underproduction of hemoglobin due to mutations in the beta globin gene, which is more common in Mediterraneans Heredity Parents are carriers–heterozygotes; one in 4 children is homozygous for the mutation and thus has full-blown disease Clinical See Anemia. 
Cf Sickle cell anemia.\nα-thalassemia\nHemoglobin Barts Hematology An inherited condition caused by a defect in the synthesis of the Hb α chain; Hb Barts hemoglobinopathy is characterized by the presence of 4 gamma chains; it is more common in southeast Asians; the most severe form of alpha thalassemia causes stillbirth due to hydrops fetalis Heredity Parents are carriers–heterozygotes; one in 4 children is homozygous for the mutation and thus has full-blown disease Clinical Pallor, fatiguability, FTT, fever, infections, diarrhea Management Transfusions\nThalassemia major Hematology A hemoglobinopathy caused by a defect in the synthesis of Hb β chain Clinical Pallor, fatigability, FTT, fever due to infections, diarrhea, bone deformities, hepatosplenomegaly Management Transfusions, but iron overload can damage the heart, liver, and endocrine systems, ergo iron chelation–early use of deferiprone, deferoxamine ↓ transfusion-related iron overload and may protect against DM, cardiac disease, early death\nδ-thalassemia\nHematology A condition characterized by a defect of Hb A2–α2δ2; because Hb A2 comprises only 3% of the circulating Hb, even its complete absence, as in δ-thalassemia, has little clinical or hematologic impact\nγ-thalassemia\nHematology A condition characterized by a defect of gamma–γ Hb chains found in Hb F–α2γ2; because Hb F is present primarily in the fetus and newborns, it is rarely seen outside of the neonatal period, but may cause transient neonatal hemolytic anemia.\n, thalassanemia (thal'ă-sē'mē-ă, -ă-să-nē'mē-ă)\nAny of a group of inherited disorders of hemoglobin metabolism in which there is impaired synthesis of one or more of the polypeptide chains of globin; several genetic types exist, and the corresponding clinical picture may vary from barely detectable hematologic abnormality to severe and fatal anemia.
People of Mediterranean extraction are more often affected than others by this type of anemia.\nSynonym(s): thalassaemia, thalassanaemia.\nAny of a group of inherited disorders of hemoglobin metabolism with impaired synthesis of one or more polypeptide chains of globin; several genetic types exist.", "answers": ["Severe anemia that begins even before birth."], "length": 6102, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "d427e3ab7584a0e253bfb6a9d76626b726c79b57d5e39f2a"} {"input": "What is the sticking point in the political showdown over the budget?", "context": "CNN.com - Transcripts\nTensions Boil Over possible government shutdown; New trouble targeting Gadhafi; Libyan Rebels in Panicked Retreat; Should U.S. Recognize the Rebels?; Meeting With Gadhafi; Washington, D.C. to Feel Burden of Shutdown; Religious Leaders Fast to Protest Cuts for Poor\nWOLF BLITZER, HOST: Don, thanks very much.\nHappening now, the top U.S. general in charge of the military mission in Libya now expressing doubts that the opposition has the manpower to topple Moammar Gadhafi, as deadly new air strikes force rebel fighters into another retreat. This hour, I'll speak with a former Republican Congressman who's in Tripoli right now trying to get Gadhafi to step down.\nAlso, growing outrage across the United States, amidst new signs tomorrow's potential government shutdown may -- repeat may be unavoidable. Why one lawmaker is telling Congress -- and I'm quoting right now -- \"go straight to hell.\"\nAnd possible presidential hopeful, Donald Trump, on a mission to tell President Obama, \"you're fired.\" We're fact checking his controversial investigation into the president's birth.\nUp first, the political showdown over the budget, as tensions reach a boiling point about 31 hours until impending government shutdown.
Just hours from now, President Obama will meet with Republican House speaker, John Boehner, and the Democratic Senate majority leader, Harry Reid, for further negotiations. Those talks scheduled to begin 7:00 p.m. Eastern.\nHundreds of thousands of people across the country will be impacted by the shutdown. And we'll be bringing you examples throughout the next two hours.\nOne place it would be felt heavily is right here in Congress' backyard, the city of Washington. Washington, DC -- its spending is tied to the federal budget. And this major metropolitan area could lose millions of dollars while a number of critical services, like trash collection, for example, would be suspended for at least a week.\nToday, an enraged Eleanor Holmes Norton, the delegate representing Washington, DC, lit into Congress over the stalemate.\n(BEGIN VIDEO CLIP) ELEANOR HOLMES NORTON (D), D.C. DELEGATE: It's one thing to beat up on the District of Columbia. It's another thing to drop a bomb on the city. And that's what this Congressional -- C.R. does. It takes the route of authoritarian governments and dictatorships by dictating to a local government how it may spend its local funds. And it may force the District of Columbia government to shut down, although our government had a balanced budget.\nBLITZER: And get this -- the members of Congress charged with reaching a deal, they'll still be receiving a paycheck if there's a shutdown, despite the hundreds of thousands of government employees who won't be receiving any paychecks. The current Congressional salary, by the way, $174,000 a year.\nOur CNN senior Congressional correspondent, Dana Bash, is up on Capitol Hill with the latest developments -- Dana, specifically, where are the sticking points right now?\nDANA BASH, CNN SENIOR CONGRESSIONAL CORRESPONDENT:\nWell, look, Wolf, this is effectively a bill to fund the government. And the sticking points certainly are about how much spending to cut. That's what this whole issue has been about.\nHowever -- however, one of the main issues, I am told, that were just -- that was discussed at the White House meeting this afternoon with the president, the House speaker and the Senate majority leader, was over not necessarily spending measures, but over lightning rod issues like regulating greenhouse gases and abortion.\nBASH (voice-over): One of the biggest disagreements is not over government spending, but policy.\nREP. JOHN BOEHNER (R-OH), SPEAKER OF THE HOUSE: Some 40 or 50 policy restrictions that were attached to -- to our bill.\nBASH: So-called policy riders Republicans call essential and Democrats call nonstarters. The most divisive is over abortion. A GOP plan to cut all federal funding for Planned Parenthood, which provides abortion procedures in addition to other women's health services.\nSEN. HARRY REID (D-NV), MAJORITY LEADER: This is a budget. This is to keep our country running. This is not a woman's health bill.\nBASH: Planned Parenthood staged a rally outside the Capitol to protest.\nCECILE RICHARDS, CEO, PLANNED PARENTHOOD: They don't want to allow Planned Parenthood to serve the three million women that we see every single year. Ninety-seven percent of the services Planned Parenthood provides are preventive care.\nUNIDENTIFIED MALE: I certainly don't think that taxpayers should subsidize abortions. It's -- if a woman chooses to have an abortion, it's legal to do that in this country.
But I don't think taxpayers should be put in a position to have to pay for those abortions.\nBASH: Another major sticking point -- how much spending to cut. A Democratic source tells CNN they have finally tentative agreement on slashing $34.5 billion from the rest of this year's budget. But a Republican source says there's no deal.\nBOEHNER: There is no agreement on the number. There are no agreement on the policy issues that are contained with it.\nBASH: Then there's the critical issue of what programs and agencies to cut. Democrats say they're trying to find spending cuts with the least impact on those who need it most. So they're pushing for things like temporary one year cuts in programs. Some examples, cuts in wetlands protection and Pell grants for summer school and graduate students.\nRepublicans call that smoke and mirrors.\nBOEHNER: And our goal is to make real spending cuts.\nBASH: Some examples of what Republicans want to cut -- money for food inspectors, Head Start education programs and funding for housing.\nBASH: This afternoon, House Republicans did pass a bill to keep the government running for one week past tomorrow's midnight deadline. It has $12 billion in cuts. It would fund the Defense Department for the rest of the year. But Democrats, including the president of the United States, call it a distraction and they say that they really want to keep the focus on what they're negotiating, which is a bill that would keep the government open -- keep the government functioning and funded for the rest of the year.\nBLITZER: And the president's playing hardball. He's saying he'll veto that legislation --\nBASH: He was, yes.\nBLITZER: -- if it were to pass the Senate and come to his desk. I hear, Dana, that some employees already are getting furloughing notices.\nBASH: It's true. This is just preventive. But all across the Capitol here today, people in offices and -- and -- well, really, everywhere -- were told whether or not, if, in fact, it does come to a government shutdown, if they're going to be here or not. And this is an example. We obtained one of the furlough notices. And I'll just read you a line. This -- imagine if this came across your desk. It says: \"Because your services are not needed for the orderly suspension of operations and you're not engaged in one of the accepted functions, you're being placed on furlough effective Saturday, April 9, 2011.\"\nNow, again, of course, this is just protective. The government is still open. But interesting that they're already getting ready for a government shutdown and telling people who will come to work and not.\nOne more note. Even people who are here, who are called essential, they're not going to get paid, either.\nBLITZER: All right, Dana.\nDon't go too far away.\nWe'll be in close touch.\nThe impact of the potential government shutdown is even being felt on the front lines of combat in Afghanistan. And Iraq. The Defense secretary, Robert Gates, is in Iraq right now. And he's telling U.S. troops they will feel a pinch.\nROBERT GATES, SECRETARY OF DEFENSE: I hope they didn't have you standing out here in the sun too long\nIf -- if the government shutdown starts on the 8th and goes for a week, you'd get a half a check. If it goes from the 15th to the 30th, you wouldn't get a paycheck on the 30th but you would be back paid for all of it. 
So that's -- that's the deal.\nBLITZER: Not great deal.\nGates also told the troops this would likely be his last trip to the country as Defense secretary and he wanted to say thank you.\nHe's expected to retire later this year.\nNow to the deadly stalemate in Libya. New signs the military operation in the region is facing some tough new challenges.\nOur Pentagon correspondent, Barbara Starr is here.\nShe's watching the story.\nShe's got more.\nWhat are you learning? BARBARA STARR, CNN PENTAGON CORRESPONDENT: Well, Wolf, there was very dramatic, very hard-nosed testimony today on Capitol Hill from the top U.S. commander responsible for the U.S. involvement in Libya, saying that Gadhafi forces are becoming increasingly difficult to target, as they are using civilian vehicles, mixing in with local populations, moving next to mosques, schools, hospitals -- all the same tactics we saw for years in Iraq.\nAnd now, all of this today leading to a very dramatic exchange between General Carter Ham and one of the most vocal administration critics, Senator John McCain.\nSEN. JOHN MCCAIN (R), ARIZONA: Hearing your testimony, General Ham, is almost an Orwellian experience for me. The fact is that if we had imposed the no-fly zone three weeks, four weeks ago, Gadhafi would not be in power today.\nThe fact is that the situation on the ground is basically a stalemate.\nWould you say that the situation on the ground is a stalemate or an emerging stalemate?\nGEN. CARTER HAM, COMMANDER, U.S. AFRICA COMMAND: Senator, I -- I would agree with that if present on the ground.\nMCCAIN: So the goal -- our policy objective of the removal of Gadhafi is further from being achieved than it was three or four weeks ago.\nHAM: Senator, I -- I don't know that I would agree with that. What I -- because that, again, was not a military mission. The military mission of protecting, I think, was not wholly achieved, but achieved in large part.\nSTARR: General Ham also acknowledging another problem -- a key U.S. aircraft, the AC-130, that flies low and slow to target on the ground, is facing what he called \"a significant threat\" from surface to air missiles, which he said remain effective and operational in some cases.\nAnd, Wolf, get this. General Ham says there were about 20,000 of those surface to air missiles when the campaign started and they are concerned that an awful lot of them are still out there -- Wolf.\nBLITZER: Barbara, thanks very much for that report.\nPanicked rebels are once again on the retreat from Gadhafi's forces. Just today, at least three people were killed, another 10 injured, in new air strikes. And there are mounting questions about whether NATO could be responsible for the attack.\nOur senior international correspondent, Ben Wedeman, is joining us now from Benghazi.\nBen's watching all of this closely.\nYou just heard General Ham, who is the commander of the U.S. military's Africa Command. He was in charge of the mission before handing over complete control to NATO. You just heard him say there could be a stalemate out there.\nWhat's the sense on the ground?\nBEN WEDEMAN, CNN SENIOR INTERNATIONAL CORRESPONDENT: Well, the sense was a few days ago that it was, indeed, a stalemate -- sort of a seesaw battle that went back and forth between Ajdabiya and Brega.\nBut what we saw today was that that seesaw was tipped over. And it was a general retreat by the opposition forces from somewhere near Brega to almost the other side of Ajdabiya. 
This, after this air strike, which almost everybody on the ground believes to be NATO leaving not three, but four people dead. And many others are still unaccounted for.\nThat set off this general retreat whereby we saw all their heavy -- all of the opposition forces' heavy equipment -- multiple rocket launchers, tens and tens of these pickup trucks mounted with heavy machine guns streaming through Ajdabiya to the other side, the far side of the city. Some of them going all the way back to Benghazi, according to the head of the rebel forces in the eastern part of the country, Abdul Fatah Younis. He says that the Gadhafi forces were approaching Ajdabiya from three different directions.\nI would not call that a stalemate -- Wolf.\nBLITZER: Ben, you got a close-up look at some of the casualties today out on the front lines.\nWEDEMAN: It's very bad, very bad. I mean it wasn't just fighters. It was also medics who had gone to the scene of this reported air strike, which then got hit again. So one of them was a doctor, one of them was a medic. And we were in the hospital. And there was real anger at NATO, anger at the fact that when they needed those air strikes on the Gadhafi forces, they weren't getting them. And now, for the second time in a week, there's been another strike. Now, of course, we must stress that NATO says that they -- because they don't have enough boots on the ground, they can neither confirm nor deny this was a NATO strike. But certainly, speaking to eyewitnesses in the hospital, it certainly sounded like an air strike. And there are no other planes in the skies of Libya other than NATO planes -- Wolf.\nBLITZER: Ben Wedeman in Benghazi for us.\nThe U.S. says Moammar Gadhafi is no longer the legitimate leader of Libya.\nSo why not recognize the rebels?\nWhy one U.S. official says it raises serious concerns.\nAnd a former U.S. Congressman in Libya armed with a message for the Libyan dictator.\nWill he get to meet with him face-to-face?\nMy interview with Curt Weldon, that Republican former Congressman -- that's coming up, as well.\nBLITZER: Let's get right to Jack.\nHe's got some nuclear concerns on his mind with The Cafferty File -- Jack.\nJACK CAFFERTY, THE CAFFERTY FILE: Well, they had another little temblor in Japan -- a 7.1 magnitude earthquake hit Northeastern Japan today, the strongest aftershock since that massive 9.0 quake and tsunami that followed devastated that nation four weeks ago. And this one today was in roughly the same area.\nOne of the big concerns, of course, is possible further damage to the Fukushima Daiichi nuclear power plant. The Tokyo Electric Power Company, TEPCO, which operates the plant -- or what's left of it -- said there were no serious incidents as a result of today's aftershock.\nSo they say. Radioactivity from that plant has poisoned the surrounding land, air and ocean. Millions of people have been exposed. Millions more could be, as radioactivity has been picked up in food and drinking water and detected in faraway places, like California.\nThis week, workers plugged a crack in the plant that had been gushing contaminated water into the ocean for weeks. As a result, TEPCO says now radiation levels in the ocean waters off the coast there have dropped dramatically.\nYesterday, the head of the United Nations' scientific committee on the effects of atomic radiation said the Fukushima accident is not expected to have any serious impact on the health of the Japanese people. 
He said, quote: \"We have seen traces of iodine in the air all over the world, but they are much, much, much lower than traces we have seen at similar distances following Chernobyl,\" unquote.\nWell, not everybody is convinced. In South Korea, more than 130 primary schools and kindergartens ordered closed today outside of Seoul. People there were worried that windy, rainy weather could be carrying radioactive material from Japan.\nNorth Korea aired warnings on television for its people to stay indoors during that rain storm and to take a full shower if they were caught outside in the storm.\nEven here in the United States, some chefs are now using sensors to test levels of radiation in the fish they plan to serve in restaurants.\nHere's the question -- do you think you're being told the truth about the nuclear accident in Japan?\nIf your -- if your trout is he glowing, Wolf --\nBLITZER: Yes?\nCAFFERTY: -- you might want to send it back and get a ham sandwich.\nBLITZER: You want it well done, but not necessarily that well done.\nCAFFERTY: No.\nBLITZER: All right, Jack.\nNot a laughing matter.\nBLITZER: Serious stuff.\nCAFFERTY: Right.\nBLITZER: See you in a few moments.\nNew questions this hour about the capabilities of the rebels in Libya and whether they have the power to overthrow Moammar Gadhafi.\nShould the United States -- should the United States have a hand in helping arm the rebels?\nLet's bring in our foreign affairs correspondent, Jill Dougherty, with this part of the story.\nWhat are you hearing over at the State Department -- Jill. JILL DOUGHERTY, CNN FOREIGN AFFAIRS CORRESPONDENT: Well, you know, Wolf, other countries have done it, other countries like France and Italy have done it -- recognizing the opposition. And supporters say now, with the rebels in retreat, the U.S. shouldn't wait.\nBut what would it really change anything?\nDOUGHERTY (voice-over): The U.S. says Moammar Gadhafi is no longer the legitimate leader of Libya.\nSecretary of State Hillary Clinton is full of praise for them.\nHILLARY RODHAM CLINTON, SECRETARY OF STATE: These were not soldiers. These were not trained military forces. They were doctors and lawyers and university professors and economists and, you know, young men who were students. And they are being attacked by mercenaries, by ruthless forces that Gadhafi is utilizing to show no mercy against his people. And they are courageous. They are moving as fast as they can to try to form themselves into a military operation.\nDOUGHERTY: Clinton has met with the rebel leaders personally, but the administration still is cautious. The president authorized the CIA to send in agents to learn about the rebels and assess their needs. Clinton's special representative, the State Department's Christopher Stevens, seen here in 2008 in Tripoli, is on the ground in Benghazi, scoping them out.\nMARK TONER, U.S. STATE DEPARTMENT SPOKESMAN: We sent somebody in to get that kind of on the ground assessment of the -- of -- of their identity, of their leadership structure, to talk with them firsthand and to see what direction we think they're moving in. We've seen some positive signals.\nDOUGHERTY: Recognizing the rebels, a senior official tells CNN, raises serious issues. It would acknowledge that Libya is now a divided country.\nAnd could the U.S. be sure the group represents the whole opposition movement?\nIt's a bit early, this official says. 
Maybe they turn out not to be the right folks.\nBut Secretary Clinton knows the timing is urgent.\nCLINTON: What NATO is doing is buying time, buying space.\nDOUGHERTY: So far, the U.S. is providing what's called non- lethal humanitarian aid. The administration hasn't yet decided to arm them or provide financial assistance.\nDOUGHERTY: But a senior U.S. official tells CNN there's a lot the United States could be doing right now without going so far as to recognize the rebels, pointing out that the U.S. funds political groups and other organizations around the world. But this official says you want to be careful about who they are.\nSo, so far, caution seems to be winning out over urgency -- Wolf.\nJill is at the State Department.\nThe House speaker, John Boehner, may be doing double duty if the government shuts down this weekend. You're going to find out why he could be cleaning up a lot of trash in his own backyard.\nAnd a former U.S. Congressman now on a mission to meet with Moammar Gadhafi in Tripoli in person. Curt Weldon, he's here. He'll join us in THE SITUATION ROOM from Tripoli. You're going to find out who he says would be a good replacement for the embattled Libyan leader.\nBLITZER: Military leaders have a message for Congress about \"don't ask/don't tell.\"\nWell, military leaders say preparations for repealing \"don't ask/don't tell\" are going better than they expected. They testified before a House committee today about getting rid of the policy that bars openly gay service members. They caution, though, that it will take time and training to implement the repeal. And it must still be certified by President Obama, the Defense secretary and the chairman of the Joint Chiefs of Staff.\nWell, your Smartphone just got a little smarter. The FCC is requiring that wireless carriers provide access to the mobile Internet anywhere it's available, even when it's offered by a competing provider. And that could be a huge -- make a huge difference to smaller carriers, who told the FCC they just can't compete otherwise against industry heavyweights like Verizon and AT&T.\nNew York City school chancellor, Cathie Black, is stepping down after only three months on the job. Mayor Michael Bloomberg says her short stint just didn't work out as either of them had expected or hoped. Her approval rating has plunged to 17 percent.\nBlack chaired \"First\" magazine before overseeing the nation's largest school system. Deputy Mayor Dennis Walcott will replace her.\nAnd a war of words is erupting between an emerging Republican star, New Jersey Governor Chris Christie, and his state's largest teachers' union. In a network TV interview, Christie called the union leaders, quote, \"political thugs.\" He blames them for teacher lay-offs that he says could have been avoided if they had not opposed salary freezes. The New Jersey Education Association is firing back, accusing Christie of name-calling -- Wolf.\nBLITZER: Sticks and stones will break many bones.\nSYLVESTER: Sticks and stones may break my bones --\nSYLVESTER: But words never hurt me.\nA former U.S. Congressman is in Tripoli, Libya right now. His goal -- to talk to Moammar Gadhafi. His message -- we'll talk about that. My interview with Curt Weldon coming up next.\nPlus, we showed it to you earlier -- a member of Congress telling colleagues to, quote, \"go to hell.\"\nNow she's is joining us live here in THE SITUATION ROOM to explain.\nHOLMES NORTON: -- of Columbia. It's another thing to drop a bomb on a city. 
And that's what this --\nBLITZER: Former Congressman Curt Weldon is in a -- Weldon is on a mission to Libya right now to try to meet with the embattled leader, Moammar Gadhafi. But that may be easier said than done.\nJoining us now from Tripoli, former Republican Congressman Curt Weldon of Pennsylvania. Congressman, thanks very much for coming in.\nAnd joining us now from Tripoli, former Republican Congressman Curt Weldon of Pennsylvania.\nCURT WELDON, FORMER U.S. CONGRESSMAN: My pleasure, Wolf.\nBLITZER: Let's talk about your meeting with Moammar Gadhafi. I take it it has not yet happened.\nDo you expect to meet with the Libyan leader? WELDON: Absolutely. The invitation that was sent to me was from his chief of staff, Bashir Salah, who I've met on all three of my official visits here in 2004 and 2005. And the letter specifically says we want you to come over and meet with the leader and our senior leadership.\nAnd I said it's worth me coming over to support the administration and to try to let the leader know face to face that this is facing -- it's very grave timing in the situation and they have to have some movement fairly quickly or they're not going to be happy with the -- with the alternatives.\nBLITZER: What's taking so long?\nWhy haven't you been able to meet with Gadhafi yet?\nWELDON: Well, it -- that's not unusual. I mean all three of the delegation trips that I led here in 2003 and 2004 -- or, actually, 2004 and 2005 -- they always make you wait until 30 minutes before the meeting and then you go. And some of those meetings were at 10:00 at night, some were at 5:00 in the afternoon.\nAs you know from the excellent reporting being done by your folks here, there's a lot of security concerns, and they are very concerned where Gadhafi is at any given moment. That's one of the issues, but we have been making ourselves available.\nWe have been doing a lot of back-channel meetings with friends and associates that I have here, and we have met with the chief of staff and one of the sons, and today with the prime minister, a very lengthy meeting for two hours. So, we're going to give them until tomorrow. We're not going to stay beyond that. And we have given them some suggestions, and we expect a response by midday tomorrow. And if we don't, we will done exit conversation with your people and let you know our feelings.\nBLITZER: What's the major headline that you got out of these meetings with other leaders? I take it you met with Saif Al-Islam Gadhafi, one of the sons of Moammar Gadhafi. What are they saying to you?\nWELDON: Well, we actually didn't meet with Saif. I have met with Saif probably 10 times over the past seven years, both in America and here in Libya. I have not yet met with Saif. I have offered, if he is available.\nI have met with Saadi. And the general thrust is obviously that they want peace and they want to find a way out of this. But as I have explained to them, there's certain things that have to be done according to our president and our secretary of state, who I'm here to support.\nWe don't have a different agenda. There's no compromise on our part. Our only mission here is to talk face to face with them and say this is reality and this is a grave situation, and you need to do certain things that we suggest that we think will get our administration to respond to your actions. And again, we're not doing any negotiating.\nThey know me, they have seen my efforts. 
I have not taken anything from their country in the way of financial benefits, and I'm here only because I want to avoid war. I don't want to see American soldiers killed, and I don't want to see more innocent Libyans killed.\nBLITZER: You wrote an op-ed in \"The New York Times\" this week saying that once you meet face to face with Moammar Gadhafi, you will tell him to step down. Is that still your intention?\nWELDON: Absolutely, Wolf. I wrote the op-ed before the trip was planned. And I wrote it, Wolf, because back in 2004, when I led the first delegation of Americans to sit down with him in the tent in Tripoli, he said to me, \"Congressman, why did it take 30 years for someone from your country to come and sit with me and tell me to my face that you believe that I'm a criminal and a terrorist. And then if you didn't believe me, bomb me?\"\nAnd I said, \"Leader, I can't explain that.\" So I said now it's time for someone to sit in a tent face to face with Colonel Gadhafi and let him know how grave this situation is.\nAnd I'm willing to do that. And I think I'm probably the best person because I have met with him three times, and because I sat in that tent in 2004 and listened to him tell me that. So, in effect, that's why I'm here.\nBLITZER: You wrote also in \"The New York Times\" this -- you wrote, \"Colonel Gadhafi's son, Saif, a powerful businessman, a politician, could play a constructive role as a member of the committee to devise a new government structure or constitution.\"\nYou know, a lot of people, including the opposition, the rebels, as they're called, they think Saif Al-Islam Gadhafi is just as much a killer or thug as his father is, and they say they have no interest in dealing with him either.\nWhat do you say to that criticism?\nWELDON: Well, what I said, I'm not endorsing anything anyone for any office here. What I am hoping for is what the president wants, which is a free and fair election to take place, hopefully sooner rather than later.\nBut having been involved with Libya for seven years, I was a witness to the work that Saif did in the Lockerbie case, the La Bella nightclub bombing. I personally witnessed through the Gadhafi Foundation the work that Saif did to free up the Bulgarian nurses who were sentenced to death twice, along with a Palestinian doctor.\nI have seen the work that Saif and Dr. Salani (ph) at the foundation have done in dealing with chemical weapons destruction and with the elimination of landmines and humanitarian efforts worldwide. I have been out to (INAUDIBLE), the chemical weapons plant, and I have actually seen visibly how they have removed the chemical weapons production materials. He was behind all of that.\nBelieve me, Wolf, I'm not happy with some of the statements and the actions that he's made over the past month, and he knows I'm not happy. But I think in a fair election, up until now, he should be given the opportunity to seek office where he can run against other candidates, perhaps, for the presidency. And so I would at this time think that he should be allowed that opportunity.\nThat's not to say I condone anything that he said or his actions. He will have to be accountable for those on his own.\nBLITZER: Because you probably have seen all of the articles, the reports over the past month, month and a half, of mass murder, of killings, not only by Saif Al-Islam, but some of his brothers that have gone on, the atrocities that have been so widely reported. I hear what you're saying about his role over the recent years when the Bush administration, and later the Obama administration, was trying to improve relations with Libya, but over the past several weeks, based on all of the international reporting we have seen, it's been a brutal record that he has accomplished.\nWELDON: Well, again, I don't have firsthand evidence of that. I just got here two days ago. And I fully support an international tribunal to look at human rights violations on everyone in this country. That's necessary. And if they find evidence that he has been involved in that, then he should suffer the consequences of his actions.\n(END VIDEOTAPE) BLITZER: In our next hour, part two of the interview with former congressman Curt Weldon. There have been some questions raised about his motive. Is he in all of this for the money? You're going to find out his answer to that and more. Stand by.\nAlso, Washington, D.C.'s congressional delegate is telling colleagues -- and I'm quoting her now -- \"Go to hell.\" She is joining us live in THE SITUATION ROOM to tell us why.\nPlus, no budget deal, no food -- the extreme tens of thousands of people are going to as a government shutdown looms.\nBLITZER: Let's get back to the outrage boiling over on Capitol Hill, only hours before a potential government shutdown.\nJoining us now, the Democratic delegate representing the city of Washington, D.C., Eleanor Holmes Norton.\nELEANOR HOLMES NORTON (D), D.C. DELEGATE: Of course, Wolf.\nBLITZER: I think it's fair to say that Washington, D.C., a city of a population of about 600,000 people, the only major metropolitan -- the only major city in the United States that's going to feel the direct impact of a federal government shutdown so dramatically, so powerfully, because it is a federal district.\nGive me an example of what's going to happen if there's a government shutdown.\nNORTON: Absolutely, although your viewers will be shocked by what they are about to hear.\nThey know a little bit about taxation without representation -- we pay our taxes, but we don't have full representation in the House and the Senate.
You make it sound like he's a decent guy.\nWELDON: Well, I -- you know, I haven't been with him on a continual basis. I have met with him a number of times, both in the U.S. and here, under some very stressful situations, especially when it came to resolving the Lockerbie case and the La Bella nightclub. And despite what Sarkozy said about resolving the issues of the Bulgarian nurses when they were sentenced to death twice, it was Saif who played a very critical role against some very powerful forces in this country that wanted to kill those people.\nYou know, I don't know of any incidences where I, first hand, have seen evidence of him committing human rights violations, and if he did, he has to be held accountable like everyone else. And I have said that publicly and I will say that privately.\nSo my judgment is just based upon my experience with him, the fact that he is a knowledgeable person, he understands the need to interact and interface with the West. I think he could be a viable candidate. But ultimately, my opinion is hopefully going to be the opinion of the Libyan people.\nBLITZER: Because you probably have seen all of the articles, the reports over the past month, month and a half, of mass murder, of killings, not only by Saif Al-Islam, but some of his brothers that have gone on, the atrocities that have been so widely reported. I hear what you're saying about his role over the recent years when the Bush administration, and later the Obama administration, was trying to improve relations with Libya, but over the past several weeks, based on all of the international reporting we have seen, it's been a brutal record that he has accomplished.\nWELDON: Well, again, I don't have firsthand evidence of that. I just got here two days ago. And I fully support an international tribunal to look at human rights violations on everyone in this country. That's necessary. And if they find evidence that he has been involved in that, then he should suffer the consequences of his actions.\n(END VIDEOTAPE) BLITZER: In our next hour, part two of the interview with former congressman Curt Weldon. There have been some questions raised about his motive. Is he in all of this for the money? You're going to find out his answer to that and more. Stand by.\nAlso, Washington, D.C.'s congressional delegate is telling colleagues -- and I'm quoting her now -- \"Go to hell.\" She is joining us live in THE SITUATION ROOM to tell us why.\nPlus, no budget deal, no food -- the extreme tens of thousands of people are going to as a government shutdown looms.\nBLITZER: Let's get back to the outrage boiling over on Capitol Hill, only hours before a potential government shutdown.\nJoining us now, the Democratic delegate representing the city of Washington, D.C., Eleanor Holmes Norton.\nELEANOR HOLMES NORTON (D), D.C. DELEGATE: Of course, Wolf.\nBLITZER: I think it's fair to say that Washington, D.C., a city of a population of about 600,000 people, the only major metropolitan -- the only major city in the United States that's going to foal the direct impact of a federal government shutdown so dramatically, so powerfully, because it is a federal district.\nGive me an example of what's going to happen if there's a government shutdown.\nNORTON: Absolutely, although your viewers will be shocked by what they are about to hear.\nThey know a little bit about taxation without representation -- we pay our taxes, then we have full representation in the House and the Senate. 
But I bet they didn't know that our local budget, without a dime of federal money in it -- and we support ourselves almost entirely -- has to be sent to the masters in the Congress to sign off on it before we can spend our own local money.\nWell, listen to this, Wolf. We passed our budget in -- last spring. The appropriators signed off on it last summer.\nSo, why are we in a federal budget fight over their money when it is our money I am talking about? I have put forward amendments that said the district can spend its own local funds.\nBLITZER: What's going to happen in the District of Columbia Saturday, Sunday, Monday, if there is a government shutdown? Give me an example or two.\nNORTON: I will give you some dramatic ones. How about the shutdown of the D.C. government itself? Because since the final gavel hasn't fallen on all the federal appropriations, then the district government has now prepared to shut down on Saturday morning just because the federal government is shutting down.\nWe are at the height of the tourist season, the Cherry Blossom Festival. That has been severely curtailed because of the federal shutdown. That's going to -- three million people come here just in one month for the cherry blossoms. Our mayor has had to put out a list of agencies that will be open and a list of agencies that won't be open.\nBLITZER: Trash collection -- will there be any trash collection in the District of Columbia?\nNORTON: No trash collection, and some residents have started up a Facebook page that says if they close down the District of Columbia, we're carrying our trash to Speaker Boehner's House.\nBLITZER: You don't support that do you?\nNORTON: I do not.\nNORTON: And let me just say right here, I do not. But let me tell you, I am only expressing a little of the rage that the taxpaying residents of the District of Columbia are feeling.\nBLITZER: But let me ask you this, Congresswoman, because the Democrats were in control, they had a large majority in the House all of last year; in the Senate, a significant majority. They failed to pass a budget. Don't the Democrats deserve a lot of the blame for this current impasse?\nNORTON: Absolutely not, because the Democrats would never have held our budget up here.\nLISA BLOOM, CNN LEGAL ANALYST: Why didn't they pass the budget?\nNORTON: Well, that doesn't have anything to do with us. This is our local money.\nAll it would take is -- the Democrats in the Senate are ready to agree. The president is ready to sign an amendment --\nBLITZER: But they could have done this any time last year.\nNORTON: Wait a minute, Wolf. Wait a minute -- an amendment that said while we're fighting it out on the federal budget, we will let the district spend its own local funds.\nSo that's all I'm asking. I'm not in this fight, so don't ask me why the Democrats didn't pass the Democratic budget.\nI passed -- we passed our budget. Our budget is balanced. The only issue before the Senate and the House is, can we spend our local money? It doesn't have anything to do with their budget.\nThey can go on from now until Timbuktu. Let us spend our money and don't close down our city because the federal government can't get its act together.\nBLITZER: I'm with you there. This is an outrage, the fact that there is -- if there is going to be shutdown. I'm still hoping there won't be a shutdown, but it's --\nNORTON: I think there may not be.\nBLITZER: -- ridiculous when you think about it, when you think about how close they are. 
It would be a horrible, horrible tragedy, because 800,000 people directly are going to start losing their paychecks. And the District of Columbia, which is, as you point out correctly, taxation without representation, is going to suffer a great deal more than any other city in the United States.\nGood luck, Congresswoman. Thanks very much.\nNORTON: Thank you, Wolf.\nBLITZER: I feel your pain.\nConcerns within military families over a government shutdown. Also, why they are downright scared they won't be able to put food on the table.\nAnd tens of thousands of people on a hunger strike, including some members of Congress. We'll explain why.\nCAFFERTY: The question this hour is: Do you believe you're being told the truth about the nuclear accident in Japan?\nFred writes, \"You want the truth? You can't handle the truth.\"\n\"Just how should a government balance our right to know the truth with the perceived need to not create a panic and thus a larger problem? Can you really evacuate a million people? To where? Yes, without the truth, how can anyone try to act reasonably?\"\n\"In the end, we do have a right to know the truth. Honesty is the best policy.\"\nPaul in Ohio writes, \"Jack, I believe they're telling what they think they know with certainty. It is most certain that they don't know everything.\"\nJeremy in California, \"So I'm confused. Is the current California radiation level 'harmless to human health,' 'not immediately harmful to human health,' 'not permanently harmful to people outside the region,' or no more than an apples-to-oranges transcontinental flight?\"\nCraig writes, \"Perspective. In Japan, they have had yet another earthquake and have lived in fear and chaos for over a month. And yet, their government hasn't shut down. Nuclear disaster, natural disaster, absolute destruction hasn't kept their elected officials from doing their duty to the people.\n\"Yet, in America, we get Harry Reid, John Boehner and a White House who are more concerned with the 2012 election campaign. It's times like these when we see just how far off the mark we really are.\"\nLouis writes, \"No. Just too many things going wrong. They say that the seafood will be safe. I ask this: Do fish migrate or do they set up housekeeping in one spot and then stay there? And if so, why don't I catch fish in the same place every day?\"\nAnd Jim in Colorado, \"The nuclear industry telling the truth? The unicorn, garden gnome and I were talking this over just the other day, and we all agreed it could happen. Why not?\"\nIf you want to read more about the unicorn and the garden gnome, go to CNN.com/CaffertyFile.\nBLITZER: We will, for sure, Jack. Thank you. See you in a few moments.\nSeveral sticking points in the ongoing budget negotiations, but will the government shutdown come down to money or social issues?\nPlus, Donald Trump, he's making allegations about President Obama's birthplace. Does Donald Trump have any grounds for any of that? We're digging deeper for answers.\nBLITZER: The growing outrage over the budget crisis isn't just about Congress' failure to reach a deal, it's also about some of the cuts that are being proposed.\nLet's bring in our own Lisa Sylvester once again.
She has the details -- Lisa.\nWell, as congressional leaders hammer away on a budget compromise, a group of religious leaders have been fasting and praying to raise awareness of cuts in the budget that they say will harm the poor.\nJIM WALLIS, PRESIDENT, SOJOURNERS: Orange juice never tasted so good.\nSYLVESTER (voice-over): It's been 10 days since Jim Wallis last had solid food. The president of Sojourners, a Christian group that advocates for the underprivileged, is leading the charge among faith groups on a hunger fast to protest proposed cuts in the federal budget for the poor.\nWALLIS: We're saying a budget is a moral document. And whether at your kitchen table, as a family, or a church or a nation, you make choices. What's important, what's not?\nSYLVESTER: Wallis said in the last 10 days, more than 30,000 people around the country have joined in the fast in their own way. He says they have become a bit like God's lobbyists for the poor, putting a theological and moral spin on the cuts. Wallis said he is all for deficit reduction but --\nWALLIS: I don't think doing this at the expense of the poorest people is a good choice, or hurting those who are already hurting the most is moral or even is smart.\nSYLVESTER: Fiscal conservatives have suggested cuts in food stamps, foreign aid, and preschool programs for low-income families, that private groups can and should provide for the needy. But David Beckman of Bread for the World, who used to work at the World Bank, says the private sector can't fill the gap.\nDAVID BECKMAN, PRESIDENT, BREAD FOR THE WORLD: All the private charitable feeding in the country amounts to about six percent of the food that poor people get from the national programs. So if you slash food stamps, as the House Republicans are proposing to do, there is no way that churches and charities and charitable people can make up for that.\nSYLVESTER: Tony Hall was a member of Congress for years. As part of the fast, he is urging his former colleagues to reconsider cuts.\nTONY HALL, ALLIANCE TO END HUNGER: When you make decisions about people's lives, be careful. You don't cut the poorest of the poor, because they didn't get you here. They didn't cause this mess.\nSYLVESTER: On Wednesday, members of Congress began signing up for the fast.\nREP. BARBARA LEE (D), CALIFORNIA: Several members of Congress today will be joining you in this fast.\nSYLVESTER: Now, Sheila Jackson Lee, Keith Ellison and Jim McGovern are among 28 congressional Democrats who have signed on so far to join the hunger fast, and they will be doing a relay, with each taking one day to fast and then passing on the fast to their colleagues.\nWallis and Hall, they're doing it a little differently. They're fasting all the way through Easter Sunday. But they say for them, this fight for the poor is larger than just the specific budget battle -- Wolf.\nBLITZER: These are committed, committed people to try to help.\nWe're going back to Libya in just a few moments. 
Rebel forces, furious with NATO right now, the R-rated message they are sending through our own Ben Wedeman.", "answers": ["The sticking point in the political showdown over the budget is how much spending to cut."], "length": 7321, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "6ee1971ac8c7c0ee4a5add9ec201557d28d3c2f66b176fb4"} {"input": "When did the London Paving and Lighting Act pass, which mandated the numbering of houses?", "context": "A Brief History of Benjamin Franklin's Residences on Craven Street, London: 1757 - 1775 - Journal of the American Revolution\nBenjamin Franklin House, 36 Craven St, London. (Photo by Elliott Brown | Wikimedia Commons)\nIf one looked into Benjamin Franklin’s time on Craven Street, they might initially believe he lived at 36 Craven Street the entirety of his two stays in London based on the plethora of articles on the internet that say so. If they dug a little deeper they might read that he lived at No. 27 Craven Street, previously numbered 7, but now numbered 36; or that he lived exclusively at No. 7 Craven Street; or that he lived in multiple residences on Craven Street; or that he moved out of No. 36 to another house on Craven Street and then moved back into No. 36 the last year of his residence. What is one to believe with all of the conflicting accounts? What does the historical record have to say about Franklin’s time on Craven Street?\nFigure 1. Spur Alley 1685. “A map of the parish of St Martins in the Fields, taken from ye last survey, with additions (1685)”. (© The British Library Board, Shelfmark: Maps Crace Port. 13.2, Item number: 2)\nBefore Craven Street existed there was Spur Alley, a narrow passageway sandwiched between the Hungerford Market to the north (now Charing Cross Station) and Scotland Yard and the Northumberland House and Garden to the south. It was flanked on both ends by major thoroughfares, the Strand on the west, connecting Westminster to London by road, and the River Thames on the east, not only connecting the two cities to each other and to Southwark on the south side of the Thames, but connecting the entire metropolis to the rest of the world. Being located in the City of Westminster, Spur Alley had escaped the devastation of the Great Fire of London in 1666 leaving its wooden structures, built in the early part of seventeenth century, intact, but also in dire need of restoration or demolition. “The ratebooks show that during the last thirty years or so of their existence the houses in Spur Alley were in a very bad condition. Few of them were rated at more than a few shillings and many of them were unoccupied.”[1] The landowner, William, 5th Baron Craven, desiring to increase the profitability of his assets, tore down the derelict structures on Spur Alley around 1730 and leased the newly established lots to builders. By 1735, twenty brick houses in the Georgian style had been built on the west side and sixteen on the east side of the way now called Craven Street.[2]\nFigure 2. Craven Street 1746. (John Rocque London, Westminster and Southwark, First Edition 1746, Motco Enterprises Limited, motco.com)\nLetters to Franklin during his residence with Mrs. Margaret Stevenson, his landlady on Craven Street, were addressed rather vaguely; “Craven Street/Strand”, “Mrs. Stevensons in Craven Street”, or “Benjamin Franklin Esqr.” are but a few examples. Letters from Franklin referenced “London,” or sometimes “Cravenstreet,” but never included a number. 
Despite the absence of numbered addresses in Franklin’s correspondence, there was a sense of one’s place in the neighborhood based on entries in the Westminster Rate Books (tax assessments). The Rate Books did not list house numbers during Franklin’s time there, but they did list the residents of Craven Street in a particular order that became the default numbering system for the street. Number one was associated with the first resident listed under “Craven Street” in the Rate Books and was the northernmost house on the west side of the street. The numbers increased counter-clockwise down the west side and up the east side in accordance with the list of residents. In 1748, the first year of Margaret Stevenson’s (Stevens in the Rate Books for that year) residence on Craven Street, she is listed as the twenty-seventh resident, the second house north of Court Street (later Craven Court, now Craven Passage) on the east side of the street.[3]\nIn 1766, Parliament passed the London Paving and Lighting Act (6 Geo. 3 c. 26), “An act for the better paving, cleansing, and enlightening, the city of London, and the liberties thereof; and for preventing obstructions and annoyances within the same; and for other purposes therein mentioned.”[4] One of the other purposes therein mentioned was the numbering of houses. With an aim to bring order to the chaotic numbering systems or lack thereof on London streets the Act provided that “… the said commissioners … may also cause every house, shop, or warehouse, in each of the said streets, lanes, squares, yards, courts, alleys, passages, and places, to be marked or numbered, in such manner as they shall judge most proper for distinguishing the same.”[5] This was quite an undertaking that took years to accomplish. It was a decade later before numbered addresses on Craven Street in the City of Westminster appeared in The London Directory (1776). The London Directory and its competitors were published primarily by booksellers or printers to supplement their income and were highly profitable. To say they were competitive is an understatement. “Some of the most hotly disputed struggles over copyright in the century concerned guidebooks. Many were optimistically emblazoned with a royal license and a notice that the work had been entered at Stationers’ Hall. Various struggles between rival guides intensified as the potential for profits became clear.”[6] The London Directory boldly proclaimed to contain “An ALPHABETICAL LIST OF THE NAMES and PLACES of ABODE of the MERCHANTS and PRINCIPAL TRADERS of the Cities of LONDON and WESTMINSTER, the Borough of SOUTHWARK, and their Environs, with the Number affixed to each House.”[7] Kent’s Directory made a similar proclamation: “An Alphabetical LIST OF THE Names and Places of Abode OF THE DIRECTORS of COMPANIES, Persons in Public Business, MERCHANTS, and other eminent TRADERS in the Cities of London and Westminster, and Borough of Southwark WITH THE NUMBERS as they are affixed to their Houses agreeable to the late Acts of Parliament.”[8] Mrs. Stevenson wasn’t included in the directories because she didn’t meet the criteria of being a merchant or trader, not because she was a woman. Although it is rare to see women listed in the directories, some examples do exist.[9] If Mrs. 
Stevenson had appeared in the directories in 1776 it would not have been on Craven Street as she had moved to Northumberland Court, a stone’s throw away, the previous year.[10] A comparison of Craven Street residents whose names and addresses do appear in the directories with the same residents as they appear in the Westminster Rate Books determines if the numbering systems were congruent. For the most part they were. For example, Joseph Bond at No. 30, William Rowles at No. 31, Samuel Sneyd at No. 32, and Jonathan Michie at No. 35 in The London Directory coincide with their places of residence in the Westminster Rate Books; however, errors did occur. The 1776 edition of The London Directory lists Brown & Whiteford, wine merchants, at No. 9 Craven Street while the Westminster Rate Books list them as the twenty-ninth residents. Obviously, it makes no sense to have Brown & Whiteford at No. 9 in The London Directory and their next-door neighbor, Joseph Bond, at No. 30. The same error appears in Baldwin’s The New Complete Guide for 1783. The New Complete Guide may have “borrowed” the error from The London Directory. It was not uncommon for the owner of one directory to copy entries from another to save both time and money. Beginning in 1778 and contrary to The London Directory, Kent’s Directory faithfully followed the numbering system of the Westminster Rate Books in all of its editions and listed Brown & Whiteford at No. 29 as did Bailey’s Northern Directory in 1781. Perhaps realizing their error, The London Directory changed their listing of Brown & Whiteford from No. 9 to No. 29 in their 1783 edition and maintained that listing thereafter.\nSometime prior to 1792, the embankment on the Thames at the south end of Craven Street had been sufficiently extended allowing for the construction of ten new houses below the original houses: “ … four houses, Nos. 21–24, were built on the west side, and six houses, Nos. 25–30, on the east side of the way.”[11] In a note in the same report, the new numbering system is explained. “The houses in the street, which had previously been numbered consecutively down the west side and up the east side, were then renumbered on the same system to include the additional houses.”[12] Because the new houses (21-24) on the west side were built below the existing houses (1-20), houses 1-20 retained their original numbering.\nFigure 4. Craven Street 1799. (Richard Horwood’s Map of London, Westminster and the Borough of Southwark 1799, Motco Enterprises Limited, motco.com)\nOne would think that the numbers of the sixteen original houses on the east side, Nos. 21 – 36, would simply increase by ten with the addition of the ten new houses, but such was not the case; they increased by nine. How could that be? The only possible explanation is that No. 21 of the original houses was demolished to make way for the construction of the northernmost of the six new houses on the east side (No. 30). Evidence of No. 21’s demolition appears in the lease granted to Charles Owen by William, 7th Baron Craven, in 1792, which describes No. 22 as: “All that messuage in Craven Street late in the occupation of Francis Deschamps undertaker … being the Southernmost house in the Old Buildings on the East Side of the said Street numbered with the No. 22.”[13] The lease describes No. 22 as being the southernmost house in the old buildings on the east side of Craven Street. Clearly the house previously at No. 
21 did not exist when the lease granted to Charles Owen was written in 1792 as it used to be the southernmost house. It is also worth noting that in 1790, The London Directory listed Jacob Life at No. 21 (original numbering). In 1791-2, it listed him at No. 6. With No. 21 vacated, it would allow for its demolition and the construction of the tenth new house. By utilizing lot No. 21 for the new construction, only nine additional lots were needed to build the ten houses, hence, Margaret Stevenson’s former residence at 27 became 36 (27 + 9) in the renumbering and not 37.\nFor nearly a century and a half after Franklin departed London for America in March of 1775 the scales were tipped heavily in favor of his residence having been No. 7 Craven Street. As early as 1807 in London; Being An Accurate History And Description Of The British Metropolis And Its Neighborhood, Volume 4, one would have read: “In Craven Street is a house, No. 7, remarkable for having been the residence of Dr. Benjamin Franklin.[14] In 1815, the identical phrase appeared in The Beauties of England and Wales.[15] After 23 editions of not mentioning Franklin, his name finally appeared in the 24th edition of The Picture of London in 1826: “The house, No. 7, Craven Street, in the Strand, was once the residence of Dr. Benjamin Franklin.”[16] In 1840, Jared Sparks referred to Franklin’s Craven Street residence appearing in London guide books in his voluminous The Works of Benjamin Franklin: “In the London Guide Books, ‘No. 7, Craven Street,’ is still indicated as the house in which Dr. Franklin resided.”[17] In 1846, George Gulliver F.R.S., in his book, The Works of William Hewson, wrote: “She [Polly] had been upon terms of the warmest friendship with Dr. Franklin\nFigure 5. No. 7 Craven Street with Memorial Tablet. (Photo courtesy of British History Online, and the Survey of London)\nsince she was eighteen years of age. That eminent philosopher resided with her mother, Mrs. Margaret Stevenson, at No. 7, Craven Street, Strand, during the fifteen years of his abode in London.”[18] Guide books mentioning Franklin at No. 7 continued to proliferate throughout the century: Handbook for London; Past and Present, Volume I (1849);”[19] Handbook for Modern London (1851);”[20] The Town; Its Memorable Characters and Events (1859);”[21] London and Its Environs (1879).[22] There was an anomaly when London In 1880 Illustrated With Bird’s-Eye Views of the Principal Streets, Sixth Edition (1880) placed Franklin at 27 Craven street.[23] The anomaly lasted for six years until his place of residence was changed to No. 7 in the revised edition, London. Illustrated by Eighteen Bird’s-Eye Views of the Principal Streets (1886).[24] London Past and Present; Its History, Associations, and Traditions, Volume 1 (1891), copied the 1849 Handbook for London almost word-for-word and included, “The house is on the right from the Strand.”[25] In October of 1867, The Society of Arts in London declared that: “In order to show how rich the metropolis is in the memory of important personages and events, which it would be desirable to mark by means of tablets on houses, the Council have caused an alphabetical list to be prepared, … ”[26] Franklin had been elected a corresponding member to the Society in 1756 and was a popular choice among Council members deciding who they were to memorialize.[27] By January of 1870, a tablet honoring him was affixed to the house they believed to have been his residence while in London, No. 
7 Craven Street in the Strand on the west side of the street.[28] A majority of historians writing about Franklin in the nineteenth and early twentieth century placed him at No. 7: O. L. Holley, The Life of Benjamin Franklin (1848); E. M. Tomkinson, Benjamin Franklin (1885); John Torrey Morse, Benjamin Franklin (1891); Paul Elmer More, Benjamin Franklin (1900); John S. C. Abbot, Benjamin Franklin (1903); Sydney George Fisher, The True Benjamin Franklin (1903). A notable exception is D. H. Montgomery’s His Life Written by Himself published in 1896. He has Franklin at No. 27 Craven Street. It seems then that depending upon the source, Franklin was thought to have lived at either No. 7 or No. 27, but not both, the overwhelming majority favoring No. 7. As late as 2011, Franklin is still mentioned as living at No. 7.[29]\nIn 1913, No. 7 was scheduled to be torn down. An article in the March 1914 edition of The Book News Monthly, describes the situation:\nAs is well known to informed American pilgrims, it has been possible for all admirers of the famous philosopher and statesman to pay their respects to his memory before that house, No. 7 Craven Street, just off the Strand, which was his chief home during his two sojourns in the British capital, but even as these lines are being written the London newspapers are recording that that interesting shrine is soon to be pulled down to make room for a restaurant. It is some mitigation of this misfortune to remember that at the most the Craven Street house was nothing more than a reproduction of the one in which Franklin had his suite of four rooms, for the structure has been rebuilt since Franklin’s time. When, then, some one makes a piteous plea that at least the philosopher’s bedroom shall be preserved, the soothing answer is that the apartment in question is only a replica of that in which the illustrious American enjoyed his well-earned slumbers in 1757-62 and 1764-75. The restaurant-builder, however, with an eye doubtless to possible American patronage, has assured the world that every effort will be made to preserve as much as possible of the entire structure.[30]\nConcerned with the possible demolition of Franklin’s residence, the Royal Society of Arts (formerly the Society of Arts[31]) initiated an inquiry into the matter.[32] The London County Council, having taken over the responsibility of placing memorial tablets on notable houses from the Royal Society, was charged with the investigation. It ultimately fell to Sir George Laurence Gomme, a clerk to the Council, to come up with a response. A few years earlier Sir George had discovered Margaret Stevenson residing at No. 27 Craven Street in the Westminster Rate Books. He must have wondered why No. 7 on the west side of Craven Street was being celebrated as Franklin’s residence when the evidence clearly showed otherwise.\nSir George and his staff examined the various London directories discussed earlier and came up with a novel explanation for the discrepancy. They concluded that there had been two numbering systems on Craven Street. An anonymous author echoes Sir George’s conclusion about the two numbering systems in an article in The Journal of the Royal Society of Arts:\n…an inspection of the directories of that time proves that there were at least two systems of numbering in Craven Street before the erection of the additional houses. According to one of these the numbers started from the top (Strand end) on the west side of the street, and ran down to the bottom to No. 
20, then crossed over and went back to the Strand along the east side – 21 to 36. According to the other system, the east side of the street was numbered from the bottom upwards, starting at No 1. This was not apparently in general use, but there is evidence that this numbering was at all events occasionally used.\nThe evidence of these two systems of numbering, and for believing that Mrs. Stevenson’s house was first No. 7 under the oldest system, next No. 27 under the second system, and finally No. 36 under the latest and existing system, is to be found in the various directories and the Westminster rate-books.[33]\nThe “evidence” mentioned above consisted of The London Directory’s listing of Brown & Whiteford at No. 9: “The rate-books for 1781 and 1786 show the house next but one to the north of Mrs. Stevenson’s house as in the occupation of Brown and ‘Whiteford,’ while the old directories mention the business of the firm as wine merchants, and give their address as 9, Craven Street – then a little later, down to 1791, as 29, Craven Street. Curiously enough, in the years 1778 to 1780, or 1781, Lowndes gives it as No. 9, and Kent as 29.”[34] Ignoring Kent’s Directory having Brown and Whiteford as 29 and The London Directory (Lowndes) having Brown and Whiteford “a little later” as 29, and knowing that Mrs. Stevenson lived two doors south of them, Sir George concluded that her house must have been numbered 7, even though there is no listing in any of the directories of her residence ever being No. 7. He surmised that the No. 7 on the west side of Craven Street with the memorial tablet thought to have been Franklin’s residence had simply been confused with number 7 (27) on the east side. Again from The Journal of the Royal Society of Arts:\nTaking all the evidence together, there cannot be any doubt whatever that Mrs. Stevenson’s house, in which Franklin lodged, was the house two doors north from Craven Court, first numbered 7, afterwards 27, and finally 36, and consequently that the house in which Franklin lived was that now numbered 36, not the one now numbered 7, on which the tablet is placed.[35]\nA response to The Royal Society of Arts was issued: “… the London County Council … informed the Society that it had made a mistake and that No. 36 Craven street was the building that deserved commemoration.”[36] The Society accepted the Council’s conclusion, and despite assurances of preservation by the restaurant builder, No. 7 was torn down the following year.\nSir George’s assertion “that Mrs. Stevenson’s house, in which Franklin lodged, was the house two doors north from Craven Court” was correct, however, his assertion that it was “first numbered 7, afterwards 27”, was not. It was only by association with the errant entry of Brown & Whiteford at No. 9 from 1776-1782 in The London Directory that Mrs. Stevenson’s address was conjured to be No. 7. The problem with associating her address exclusively with that of Brown & Whiteford at No. 9 during those years is that, as previously demonstrated, The London Directory also listed four other Craven Street residents, Bond, Rowles, Sneyd, and Michie, who’s addresses did conform to the numbering system in The Westminster Rate Books. If Brown & Whiteford at No. 9 was indicative of a numbering system different from The Westminster Rate Books, Bond, Rowles, Sneyd, and Michie would have been listed as Nos. 10, 11, 12, and 15, respectively. So on one hand Sir George was relying on the Westminster Rate Books to establish Mrs. Stevenson at No. 
27 and on the other hand he was dismissing the Westminster Rate Books to establish her at No. 7. Instead of using the anomalous listing of Brown & Whiteford at No. 9, he could have just as easily, and more logically, used the Bond et al. listings, or the post-1782 Brown & Whiteford listing in the London Directory at No. 29 to establish Mrs. Stevenson at No. 27. Even if there had been two numbering systems, his assertion that No. 27 was first numbered 7 would still be false. The earliest numbering system was the Westminster Rate Books dating from the early 1730s when the houses were constructed. Brown & Whiteford at No. 9 didn’t appear until 46 years later and then only for a brief period.\nThere is ample evidence in Franklin’s correspondence and in a memoir by Polly Hewson (Mrs. Stevenson’s daughter) that Benjamin and Mrs. Stevenson lived in not one, but two houses on Craven Street. On July 6, 1772, Polly wrote to Benjamin from her house at Broad Street North in London: “My Mother I must tell you went off last friday week, took our little Boy with her and left Mr. Hewson [Polly’s husband, William] the care of her House [27 Craven Street]. The first thing he did was pulling down a part of it in order to turn it to his own purpose, and advantage we hope. This Demolition cannot affect you, who at present are not even a Lodger [Benjamin was traveling at the time], your litterary apartment remains untouch’d, the Door is lock’d …”[37] In a memoir about her husband written after his death Polly writes: “He [William Hewson] began his Lectures Sept. 30, 1772, in Craven-street, where he had built a Theatre adjoining a house which he intended for the future residence of his family.”[38] On October 7, 1772, Benjamin wrote to his son William: “I am very well. But we [Mrs. Stevenson and I] are moving to another House in the same street; and I go down tomorrow to Lord LeDespencer’s to [stay a] Week till things are settled.”[39] To his son-in-law, Richard Bache, on the same day he wrote: “We are moving to another House in the [street] leaving this to Mr. Hewson.”[40] Writing to a friend on October 30, 1772 he explained: “I should sooner have answered your Questions but that in the Confusion of my Papers, occasioned by removing to another House, I could not readily find the Memorandums …”[41] On November 4, 1772 Benjamin informed his wife Deborah of the move. “We are removed to a more convenient House in the same street, Mrs. Stevenson having accommodated her Son-in-Law with that we lived in. The Removing has been a troublesome Affair, but is now over.”[42]\nAn agreement had been struck between the parties. Margaret and Benjamin would move to another house on Craven Street and allow Polly and William to move into No. 27, the large yard behind the house being spacious enough to accommodate the anatomy school William wished to build.[43] Perhaps the idea was inspired by Margaret’s next-door neighbor at No. 26, Dr. John Leake, a man-midwife and founder of the Westminster Lying-in Hospital, who had built a theater adjoining his residence in which he practiced anatomy and taught midwifery.[44]\nAfter Margaret and Benjamin vacated No. 27, Polly, William, their son William Jr., and William’s younger sister, Dorothy Hewson, took up residence there.[45] In the 1773 Westminster Rate Books for Craven Street, Mrs. 
Stevenson’s (Stephenson in the Rate Books) name has been crossed out and replaced with “William Hewson.”[46] Further proof that the Hewsons had indeed moved into 27 Craven Street has been confirmed by the discovery of human and animal remains buried in the basement of No. 36 (formerly No. 27 and now the Benjamin Franklin House), a by-product of the dissections that took place at William’s anatomy school.[47]\nSo what house on Craven Street did Mrs. Stevenson and Benjamin move into after vacating No. 27? An examination of the Westminster Rate Books for the years 1774 and 1775 reveal them living not at No. 7 on the west side of Craven Street as one might expect from the overwhelming consensus of nineteenth century guidebooks and biographies, but surprisingly at No. 1.[48] The controversy of No. 7 being torn down was all for naught as it had never been Franklin’s residence. Sir George was correct on that point. Unfortunately, No. 1 was torn down as well in the early part of the twentieth century. The first time No. 1 is mentioned as Franklin’s second residence is in the Survey of London: Volume 18, St Martin-in-The-Fields II: the Strand published by the London County Council in 1937, ironically the same County Council that had declared No. 36 as Franklin’s only residence twenty-four years earlier.\nFrom 1748 until 1772 Margaret ‘Stephenson’ occupied this house [No. 27 (36)], and it was there that Benjamin Franklin settled after his arrival in London in 1757 as Agent to the General Assembly of Pennsylvania … In October, 1772, Mrs. Stevenson and Franklin removed to No. 1, Craven Street (now demolished), and No. 36 was for the next two years occupied by William Hewson, surgeon, who had married Mary Stevenson.[49]\nIn the spring of 1774, William Hewson died unexpectedly of septicemia two weeks after cutting himself while dissecting a cadaver. Polly was left to care for their two young sons and was pregnant with a daughter she would give birth to in August of the same year. Is it possible that Margaret and Benjamin moved back into No. 27 to assist Polly after the death of her husband as suggested in The Americanization of Benjamin Franklin?[50]\nIf the Westminster Rate Books are to be believed, the answer is no. For the year 1774, the Rate Books list Margaret Stevenson at No. 1 and William Hewson at No. 27. For the year 1775, they list Margaret Stevenson at No. 1 and Magnus Falkner (Falconer/Falconar) at No. 27. Magnus was William’s assistant at the anatomy school and fiancé to William’s sister, Dorothy. On his death bed, William instructed Polly, “let Mr. Falconar be my successor.”[51] Magnus would immediately take over the running of the anatomy school and continue William’s unfinished research. Four months later, he and Dorothy would marry.[52] Essentially only two things changed at 27 Craven Street after William’s death: Polly gave birth to her daughter, and Magnus replaced William as the lease holder, so even if Margaret and Benjamin had wished to move back into No. 27, there would have been no room for them. It is also interesting to note that considering the multiple times Benjamin wrote of his move out of No. 27 (and complained of it), he never once mentioned moving back into No. 27 in any of his correspondence after Mr. Hewson’s death.\nFigure 6. No. 36 Craven Street. (Photo courtesy of David Ross, britainexpress.com)\nIn sum, based on the Westminster Rate Books[53] and Franklin’s correspondence, Mrs. Stevenson is known to have resided at No. 27 (36) Craven Street from 1748 to 1772. 
It follows that, aside from the two years Franklin spent in Philadelphia from 1762 to 1764, he resided there from 1757 to 1772. Franklin’s correspondence also reveals that in the autumn of 1772, he and Mrs. Stevenson moved to another house on Craven Street. The 1773 Westminster Rate Books show her name crossed off at No. 27 and William Hewson’s inserted. The following year the Rate Books list her at No. 1 Craven Street. Evidence for Mrs. Stevenson and Benjamin remaining at No. 1 after William’s death appears in the Westminster Rate Books for 1775 which have Mrs. Stevenson still residing at No. 1 and Magnus Falkner residing at No. 27. Further evidence can be construed from the lack of any mention of a move back into No. 27 in Franklin’s correspondence. Despite the many theories one could devise as to why Franklin was thought to have lived at No. 7 Craven Street by so many guide books and Franklin biographers of the nineteenth century, one thing is certain; at some point after Franklin’s departure to America in March of 1775, and no later than 1807, someone mistakenly associated him with No. 7 on the west side of Craven Street, and it soon became his de facto residence. Credit must go to D. H. Montgomery in 1896 and Sir George in 1913 for setting the record partially straight by placing Franklin at No. 27(36). In 1937, the London County Council gave us the first accurate account of Franklin’s residences on Craven Street in the Survey of London at No. 27(36) and No. 1. It has been shown conclusively that No. 27 was never previously numbered 7. It was, however, renumbered 36 in 1792 after ten additional houses were built at the southern end of the street and remains No. 36 to this day.\n[1] “Craven Street and Hungerford Lane”, in Survey of London: Volume 18, St Martin-in-the-Fields II: the Strand, ed. G H Gater and E P Wheeler (London, 1937), 27-39, Early History of the Site.\nhttp://www.british-history.ac.uk/survey-london/vol18/pt2/pp27-39\n[2] “England, Westminster Rate Books, 1634-1900,” from database with images, Craven Street – 1735, FamilySearch from database by FindMyPast and images digitized by FamilySearch; citing Westminster City Archives, London.\n[3] Ibid., Craven Street – 1748.\n[4] The Statutes at Large, From Magna Charta to the End of the Eleventh Parliament of Great Britain. Anno 1761 Continued, Vol. XXVII, ed. Danby Pickering, (Cambridge, John Archdeacon, 1767), 96.\n[6] James Raven, Publishing Business in Eighteenth-Century England, (Woodbridge: The Boydell Press, 2014), 201.\n[7] The London Directory For the Year 1776, Ninth Edition, (London: T. Lowndes, 1776), title page.\n[8] Kent’s Directory For the Year 1778, Forty-Sixth Edition, (London: Richard and Henry Causton, 1778), title page.\n[9] A listing in Kent’s Directory for the Year 1882 on p. 28 reveals, “Brown Sarah, Leather-seller, 1, Westmoreland-buildings, Aldersgate-street”, and in Kent’s Directory for the Year 1883 on p. 175, “Whiteland Mary, Wine & Brandy Mercht. Jermyn-str. St. James.”\n[10] “The Papers of Benjamin Franklin,” Sponsored by The American Philosophical Society and Yale University, Digital Edition by The Packard Humanities Institute, 22:263a.\nhttp://franklinpapers.org/franklin\nMrs. Stevenson wrote to Benjamin Franklin a letter from her new home at 75 Northumberland Court on November 16, 1775: “In this Court I have a kind friend, Mr. 
Lechmoen he comes and seats with me and talks of you with a hiy regard and friendship.”\n[11] Survey of London, Early History of the Site.\n[12] Survey of London, Footnotes/n 10.\n[13] Survey of London, Historical Notes/No. 31.\n[14] David Hughson, LL.D., London; Being An Accurate History And Description Of The British Metropolis And Its Neighbourhood, To Thirty Miles Extent, From An Actual Perambulation, Vol. IV, (London: W. Stratford, 1807), 227.\n[15] The Reverend Joseph Nightingale, The Beauties of England and Wales: Or, Original Delineations, Topographical, Historical, and Descriptive, of Each County, Vol. X, Part III, Vol. II (London: J. Harris; Longman and Co.; J. Walker; R. Baldwin; Sherwood and Co.; J. and J. Cundee; B. and R. Crosby and Co.; J Cuthell; J. and J. Richardson; Cadell and Davies; C. and J. Rivington; and G. Cowie and Co., 1815), 245.\n[16] John Britton, F.S.A. & Co., ed., The Original Picture of London, Enlarged and Improved: Being A Correct Guide For The Stranger, As Well As For the Inhabitant, To The Metropolis Of The British Empire Together With A Description Of The Environs, The Twenty-Fourth Edition (London: Longman, Rees, Orme, Brown, and Green, 1826), 479.\n[17] Jared Sparks, The Works of Benjamin Franklin, Vol. VII, (Philadelphia: Childs & Peterson, 1840), 151.\n[18] George Gulliver, F.R.S., The Works of William Hewson, F. R. S., (London: Printed for the Sydenham Society, MDCCCXLVI), xx.\n[19] Peter Cunningham, Handbook for London; Past and Present, Vol. I, (London: John Murray, 1849), 245.\n[20] F. Saunders, Memories of the Great Metropolis: or, London, from the Tower to the Crystal Palace, (New York: G.P. Putnam, MDCCCLII), 138.\n[21] Leigh Hunt, The Town; Its Memorable Characters and Events, (London: Smith, Elder and Co., 1859), 185.\n[22] K. Baedeker, London and Its Environs, Including Excursions To Brighton, The Isle of Wight, Etc.: Handbook For Travelers, Second Edition, (London: Dulau and Co., 1879), 133.\n[23] Herbert Fry, London In 1880 Illustrated With Bird’s-Eye Views of the Principal Streets, Sixth Edition, (New York: Scribner, Welford, & Co., 1880), 50.\n[24] Herbert Fry, London. Illustrated By Eighteen Bird’s-Eye Views of the Principal Streets, (London: W. H. Allen and Co., 1886), 40.\n[25] Henry B. Wheatley, F.S.A., London Past and Present; Its History, Associations, and Traditions, Vol. 1, (London: John Murray, New York: Scribner & Welford, 1891), 473.\n[26] The Journal of the Society of Arts, Vol. XV, No. 778, (October 18, 1867): 717.\n[27] D. G. C. Allen, “Dear and Serviceable to Each Other: Benjamin Franklin and the Royal Society of Arts,” American Philosophical Society, Vol. 144, No. 3, (September 2000): 248-249.\nFranklin was a corresponding member in 1756 because he was still residing in Philadelphia. He became an active member the following year when he moved to London.\n[28] The Journal of the Society of Arts, Vol. XVIII, No. 894, (Jan. 7, 1870): 137.\n“Since the last announcement, the following tablets have been affixed on houses formerly occupied by – Benjamin Franklin, 7 Craven-street, Strand, W.C.”\n[29] Franklin in His Own Time, eds. Kevin J. Haytes and Isabelle Bour, (Iowa City, University of Iowa Press, 2011), xxxvii.\n“Takes lodgings with Margaret Stevenson at No. 7 Craven Street.” It is unknown if the editors are referring to No. 7 on the west side of Craven Street or No. 36 on the east side using Sir George’s explanation of No. 36 being previously numbered 7.\n[30] Henry C. Shelly, “American Shrines on English Soil, III. 
In the Footprints of Benjamin Franklin,” in The Book News Monthly, September, 1913 to August, 1914, (Philadelphia: John Wanamaker, 1914), 325.\n[31] The Journal of the Royal Society of Arts, Vol. LVI, No. 2,880, (Jan. 31, 1908): 245.\nhttp://babel.hathitrust.org/cgi/pt?id=mdp.39015058423073;view=1up;seq=251\n“His Majesty the King, who is Patron of the Society, has granted permission to the Society to prefix to its title the term ‘Royal,’ and the Society will consequently be known in future as the ‘Royal Society of Arts.’”\n[32] Nineteenth Annual Report, 1914, of the American Scenic and Historic Preservation Society, (Albany: J. B. Lyon Company, 1914), 293.\nhttp://babel.hathitrust.org/cgi/pt?id=wu.89072985302;view=1up;seq=4;size=150\n[33] The Journal of the Society of Arts, Vol. LXII, No. 3,183, (Nov. 21, 1913): 18.\nhttp://babel.hathitrust.org/cgi/pt?id=mdp.39015058422968;view=1up;seq=26\n[36] Allen, “Dear and Serviceable,” 263-264.\n[37] Papers of Benjamin Franklin, 19:20.\n[38] Thomas Joseph Pettigrew, F. L. S., Memoirs of the Life and Writings of the Late John Coakley Lettsom With a Selection From His Correspondence, Vol. I, (London: Nichols, Son, and Bentley, 1817), 144 of Correspondence.\n[39] Papers of Benjamin Franklin, 19:321b.\n[40] Ibid., 19:314.\n[41] Ibid., 19:353a.\n[43] Simon David John Chaplin, John Hunter and the ‘museum oeconomy’, 1750-1800, Department of History, King’s College London. Thesis submitted for the degree of Doctor of Philosophy of the University of London., 202.\n“Following Falconar’s death [1778] the lease [27 Craven Street] was advertised, and the buildings were described as:\nA genteel and commodious house, in good Repair, with Coach-house and Stabling for two Horses…consisting of two rooms and light closets on each floor, with outbuildings in the Yard, a Museum, a Compleat Theatre, and other conveniences. (Daily Advertiser, 27 August 1778)”\n[44] Simon Chaplin, “Dissection and Display in Eighteenth-Century London,” in Anatomical Dissection in Enlightenment England and Beyond: Autopsy, Pathology and Display, ed. Dr. Piers Mitchell, (Burlington: Ashgate Publishing Company, 2012), 108.\n“Given that a nearby building at 35 [ No. 26 in Franklin’s time] was occupied by the man-midwife John Leake, who advertised lectures – including lessons in the art of making preparations – at his ‘theatre’ between 1764 and 1788, it is possible that some facilities were shared. In both cases, however, the buildings [Leake’s residence at No. 26 and Hewson’s residence next door at 27] served a dual function as domestic accommodation and as sites for lecturing and dissection.”\n[45] George Gulliver, F.R.S., The Works of William Hewson, F. R. S., (London: Printed for the Sydenham Society, MDCCCXLVI), xviii.\n[46] Westminster Rate Books, Craven Street – 1773, courtesy of the City of Westminster Archives.\n[47] S.W. Hillson et al., “Benjamin Franklin, William Hewson, and the Craven Street Bones,” Archaeology International, Vol. 2, (Nov. 22, 1998): 14-16.\nhttp://dx.doi.org/10.5334/ai.0206\n[48] Westminster Rate Books, Craven Street – 1774, 1775, courtesy of the City of Westminster Archives.\n[49] Survey of London, Historical Notes/No. 36, Craven Street (not sourced).\n[50] Gordon S. Wood, The Americanization of Benjamin Franklin, (New York: The Penguin Press, 2004), 261.\n[51] Pettigrew, Memoirs, 146 of Correspondence.\n[52] http://founders.archives.gov/documents/Franklin/01-22-02-0178, note 7. 
“Falconar married Hewson’s sister five months after the Doctor’s death; most of the Craven Street circle attended the wedding, and BF gave away the bride: Polly to Barbara Hewson, Oct. 4, 1774, APS” (American Philosophical Society); “England Marriages, 1538–1973 ,” database, FamilySearch (https://familysearch.org/ark:/61903/1:1:V52W-TGS : accessed September 15, 2015), Magnus Falconar and Dorothy Hewson, September 12, 1774; citing Saint Martin In The Fields, Westminster, London, England, reference ; FHL microfilm 561156, 561157, 561158, 942 B4HA V. 25, 942 B4HA V. 66.\n[53] I chose to rely on the Westminster Rate Books for the numbering system on Craven Street. The books were consistent throughout the eighteenth century in the ordering of residents on the street and were used as the basis for the 1792 re-numbering. For the most part, commercial directories aligned with them as well. If by chance a directory didn’t initially align, it would inevitably produce future editions that did.\nBenjamin Franklin, Benjamin Franklin House, London\nMore from David Turnquist\nIf one looked into Benjamin Franklin’s time on Craven Street, they might...\nI think it’s very ironic that on the street maps included in your excellent article, Craven Street is so close to Scotland Yard. Because following the back and forth juxtapositions of numbers 7, 27 and 36 Craven Street (throw in 75 Northumberland Court and 1 Craven Street, too) was a case that could confound Sherlock Holmes.\nExcellent job of deciphering street renumbering material spanning sixty years, including that of a wrong house number (# 7) being erroneously identified and then perpetuated in subsequent street map printings. It’s gratifying at least to know that the present day #36 Craven Street is the correct house for Ben Franklin tourists to visit. 
Except for #1 Craven Street for the last three years Franklin was in London, but we won’t get into that.\nAgain, excellent article, David!", "answers": ["1766."], "length": 6539, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "229b5bf2402ae2950c72494dcfd5f1c825c396205b49ca50"} {"input": "What is the main advantage of the proposed method in terms of computation time?", "context": "Paper Info\n\nTitle: Incorporating Human Path Preferences in Robot Navigation with Minimal Interventions\nPublish Date: 16 Mar 2023\nAuthor List: Oriana Peltzer, Dylan Asmar, Mac Schwager, Mykel Kochenderfer\n\nFigure\n\nHyperplane arrangement of a twodimensional space containing two obstacles (colored in gray).The robot is located inside the pink polytope, surrounded by three adjacent obstacle-free polytopes.Each hyperplane on the boundary of the robot's polytope corresponds to one of the nonredundant constraints in eq.(4).(b)Graph derived from the hyperplane arrangement.The nodes on the graph designate polytopes, and edges designate transitions to adjacent polytopes.To estimate the human's preference, the robot updates a posterior over the goal and over which of the graph transitions φ 1 , φ 2 and φ 3 is preferred by the human.(c)Example preference defined over the graph.The location of the goal is indicated in yellow in the lower right polytope.For each node, the outgoing pink arrow designates the edge on the graph corresponding to the preferred transition between polytopes.\nSimple, 10 × 10, 8 polytopes.(b) Map 2: Office, 10 × 10, 56 polytopes.(c) Map 3: Classroom, 20 × 20, 73 polytopes.(d) Sampled observations and robot's executed trajectories.\nFig.5: Maps used for simulating the robot navigation problem with path preferences.In (d), the heading angles observed are indicated with arrows.The goal is indicated with a pink circle, and the orange robot corresponds to the starting location.The blue robot follows a policy that accounts for path preference, while the green robot does not.The opacity of the robots increases with time.\nMap 1 problem setup and example realizations for goal-only (green) and path preference (blue) solution methods.The robot starts at the lower left corner of the environment, and the goal of the task (pink circle) is in the upper left area.The robot does not know which goal, among 10 options (shown in light blue squares), is the correct goal.The human provides noisy observations, indicated by arrows, at each iteration.The green robot selects actions according to the goal-only baseline, and the blue robot uses our proposed method to infer path preferences.The polytopes composing G are drawn in blue.Probability of correct goal.WLPHVWHS +J (c) Entropy of goal distribution g.\nFig. 
6: Probability of the correct goal, fig.6b, and entropy of the goal belief distribution P (g), fig.6c, for the same problem setup, fig.6a.In this problem instance, the human's preference is to go to the goal by passing on the right side of the obstacle.Results are averaged over 50 runs and the area filled represents one standard deviation above and below the mean value.The goal-only baseline shows an over-confident prediction (shown by the strong reduction in belief entropy) that the correct goal is less likely, making it more difficult to reach the correct goal compared to a method that accounts for path preference.\nSuccess rates in the simple environment (Map 1).The results are averaged over 6 randomly sampled problem instances (start location, goal location, and goal possibilities), and over 50 runs per problem instance.∆T is the number of time steps separating two consecutive human inputs.The robot's mission time is Tmax = 30 time steps.We selected γ h = 1.5, corresponding to relatively noisy human inputs and making the problem more difficult to solve for the robot.\nComputation times for Goal Only and Path Preference methods on Map 1 (fig.5a),Map 2 (fig.5b), and Map 3 (fig.5c),averaged over 100 runs with randomly sampled problem instances.The 95 % confidence interval is provided with the mean.We evaluate computation time at the first iteration of each run (where the search depth takes on its highest value Tmax).\n\nabstract\n\nRobots that can effectively understand human intentions from actions are crucial for successful human-robot collaboration. In this work, we address the challenge of a robot navigating towards an unknown goal while also accounting for a human's preference for a particular path in the presence of obstacles.\nThis problem is particularly challenging when both the goal and path preference are unknown a priori. To overcome this challenge, we propose a method for encoding and inferring path preference online using a partitioning of the space into polytopes. Our approach enables joint inference over the goal and path preference using a stochastic observation model for the human.\nWe evaluate our method on an unknown-goal navigation problem with sparse human interventions, and find that it outperforms baseline approaches as the human's inputs become increasingly sparse. We find that the time required to update the robot's belief does not increase with the complexity of the environment, which makes our method suitable for online applications.\n\nINTRODUCTION\n\nCollaboration between humans and robots has become increasingly important and one key aspect of this collaboration is the ability for robots to adapt to human decisions. In many scenarios, such as a robot navigating through a busy room to deliver an item, it is important for the robot to take into account human preferences.\nFor instance, humans may prefer a specific path that would allow their colleagues to notice the item being delivered, but this preference may change dynamically based on various factors such as changes in the environment or unforeseen circumstances. While some preferences can be incorporated into the path-planning process, accommodating dynamic user preferences in real-time remains challenging.\nIn this paper, we propose a way to enable robots to adapt to human preferences dynamically by leveraging real-time feedback to inform decision-making. 
In this work, we tackle the problem of robot navigation in which the robot cannot observe the goal or the preferred path to the goal, but must make navigation decisions that are influenced by humans through recommended actions.\nPrior work has explored how to adapt to a human's preference through feedback, but such approaches often require a high level of intervention, which can be time-consuming and impractical in real-world scenarios. To optimize the use of human input and quickly infer the human's preference, Fig. : An autonomous robot navigates in a simulated classroom towards a goal location (pink circle).\nAt the start of its mission, it receives direction indications (arrows) from a human that indicate which path it should take to get to the goal. In this scenario, the human wants the robot to go around the desks on the right side of the classroom. A robot that does not reason over path preferences (green) will take the shortest path to the goal regardless of the human's input.\nOur method (blue) infers the human's path preference from these indications and adapts to their recommendations. we propose an approach that leverages probabilistic representations of human preference and incorporates real-time feedback. Previous research by Bajcsy et al. considered an online adaptation problem in a manipulation task, where the person can apply forces to the robot to indicate their preferences.\nBy allowing the robot to continue its task while taking into account a probabilistic representation of human preference, their approach does not require frequent inputs. Building on this idea, we adopt a similar approach to adapt to a human's preference in the context of a robot autonomously navigating through a known environment, such as a cluttered office space.\nSpecifically, we focus on allowing the human to influence the robot's trajectory with respect to obstacles, by providing guidance on preferred routes or paths, while the robot continues to execute its task. Paths can be represented using homotopy classes . However, homotopies can pose computational challenges when used to encode and infer human preferences.\nWhen the robot maintains a belief over homotopy classes, the inference problem can become exponentially complex with the number of obstacles in the space. Additionally, when the goal is unknown, the number of variables increases with the number of candidate destinations. This complexity can render the decision-making problem intractable.\nOur solution is to encode path preference based on a partitioning of the environment into polytopes . This representation allows path preferences to be expressed as sets of preferred transitions between adjacent polytopes. Paths belonging to different homotopy classes correspond to different sequences of transitions.\nBy leveraging conditional independence assumptions, we can make the Bayesian inference problem tractable. These assumptions exploit the fact that human actions provide information about the path in a piece-wise manner. For example, indicating a preference for navigating around a particular obstacle only provides information about the local area and not the entire path.\nFinally, after updating its belief representation over the human's preference, the robot can adapt to indications by replanning online. Our contributions are as follows. 
• We formulate the human-robot collaboration problem as a Partially Observable Markov Decision Process (POMDP) where both the goal of the task and the human's path preference are unknown random variables.\n• We propose an encoding of a human's path preference using a partitioning of the environment into polytopes, along with conditional independence assumptions that make the Bayesian inference problem tractable to infer the task goal and path preference online. • Through simulations in two environments of different sizes and complexity, we show that our method is effective for solving problems where the robot must reach a goal that is unknown a-priori while simultaneously adapting to a human's indications.\nOur method shows higher success rates compared to baseline approaches when the human inputs are sparse. Our approach enables a robot to make effective navigation decisions in collaboration with a human, even when the goal and path preference are not known in advance, and with minimal human input. In recent years, there has been a growing interest in shared autonomy and interactive systems, where humans and robots work together to accomplish tasks.\nSeveral approaches have been proposed to address the challenge of enabling effective collaboration between human and robot agents while still achieving high task performance. Losey et al. and Jeon, Losey, and Sadigh propose a framework where a human operator is given control of a task-relevant latent action space while an autonomous system handles the rest.\nDragan and Srinivasa present a formalism for arbitrating between a user's input and a robot's policy when both human and robot share control of the same action space. Cognetti et al. [7] provide a method for real-time modifications of a path, . . . Fig. : We model the intent inference problem with the above diagram.\nAt each step in time, the robot receives an observation ot from the human conditioned on its current location st, the intended goal g, and the human's path preference θ. The robot updates its belief over g and θ and transitions to a next location st+1. while Hagenow et al. present a method that allows an outside agent to modify key robot state variables and blends the changes with the original control.\nHowever, a common challenge of these approaches is the high level of intervention required from humans. Best and Fitch propose a method for predicting an agent's intended trajectory from observations. Rather than maintaining a belief over the agent's future path, they infer the agent's intended goal among a set of candidate locations at the boundary of the space.\nThis approach provides information on where the agent is heading and generates a distribution of candidate future trajectories for the agent. Inferring the goal of the task among a discrete set of candidates is also relevant to the area of shared autonomy. Javdani, Srinivasa, and Bagnell propose a formalism for shared control of a robotic arm, where the robot must assist the human in picking up an object but needs to infer which object the human has chosen from joystick inputs.\nPlanning with homotopy class constraints is useful in problems where the robot's requirements are given with respect to obstacles, and Yi, Goodrich, and Seppi consider topological constraints provided by human operators. 
Bhattacharya propose an efficient algorithm for solving pathplanning problems under homotopic constraints.\nHowever, the number of homotopy classes for a given problem can be infinite, and as the robot changes location and updates its representation of the world, carrying out inference over homotopy classes in a dynamic environment requires recomputing the set of homotopies at every iteration, making the belief update challenging.\nPrior work has addressed the challenge of shared autonomy by considering how robots can infer a human's intended goal, or how they can infer the preferred path to a goal. However, we argue that inferring the goal and the path as separate problems can lead to over-confidence in incorrect beliefs about the user's preferences.\nTo illustrate this point, consider the following scenario: a robot and a human are collaborating to move an object from one end of a room to Fig. : Using the hyperplanes composing the H-representation of each obstacle, we construct a hyperplane arrangement of the obstacle-free space (a). We define the human's preference for the robot's one step action choices as the posterior distribution (given all human input up to that point) over transitions from the current to the neighboring polytopes, i.e. edges on the graph.\nEach time the robot transitions to a new polytope, the set of neighbor polytopes and the distribution over human preferences are updated. another, but there is an obstacle in the way. The human would like the robot to take a path around the obstacle on the left, even though the goal is on the right. If the robot only infers the goal from the human's inputs, it may incorrectly assume that the goal is on the right, and become over-confident in this belief.\nOn the other hand, if the robot only infers the preferred path, it may mistakenly assume that the goal is on the left, leading to a failure in completing the task. To overcome these challenges, our work proposes a joint inference approach that considers both the human's intended goal and their preferred path to that goal.\nSpecifically, we model the human's preference over different homotopy classes and leverage a conditional independence assumption to provide a tractable solution. In our approach, we assume that the human's inputs are noisily rational conditioned on both the goal and the preference. By jointly inferring the goal and path preference, we can avoid over-confidence in incorrect beliefs about the user's preferences, leading to improved system performance.\nWe consider the problem of robot navigation in a known environment to an unknown destination, where a human can intervene and provide a heading direction to the robot using a joystick or force cues. The human also has a preference on which path the robot should take with respect to obstacles, and our objective is for the robot to understand the human's intentions and execute the task with minimal interventions.\nLet g be a discrete random variable denoting the goal of the task, belonging to a set of candidates Ω g , and let θ be a discrete-valued random variable representing the human's path preference, belonging to a set of possible preferences Θ. The physical location of the robot at time index t is denoted by s t ∈ R 2 , and the robot's action at time index t, belonging to some action space A, is denoted by a t .\nThe transition model T (s t+1 | s t , a t ) is deterministic, meaning the robot has full control over its future location. At any time step, the human may provide an observation to the robot. 
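To make the formulation concrete, here is a minimal sketch of how these quantities could be organized in code; the grid representation and all names are illustrative assumptions, not the paper's implementation.

from dataclasses import dataclass
from typing import List, Tuple

State = Tuple[int, int]    # robot location s_t, here a cell of a 2D grid
Action = Tuple[int, int]   # one-step move (dx, dy), diagonals included

@dataclass
class NavigationProblem:
    goal_candidates: List[State]   # Omega_g, the candidate goals g
    preferences: List[int]         # Theta, an index set of candidate path preferences theta
    def transition(self, s: State, a: Action) -> State:
        # Deterministic transition model T(s_{t+1} | s_t, a_t): the robot fully
        # controls its next location.
        return (s[0] + a[0], s[1] + a[1])

# A human observation o_t is itself a location: the cell reached by following
# the indicated heading for one time step from s_t.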
When the human intervenes, the robot receives a direction (heading angle) that can be mapped to a future location in space.
More specifically, we map the direction to an intended location, which is the resulting robot location after advancing in the indicated direction for one time step. For simplicity, we consider that the robot directly makes an observation o_t of the location indicated by the human. We assume that the robot has a stochastic observation model for the human P(o_t | s_t, g, θ) that is conditioned on both the goal of the task g and the human's preferred path θ.
We further assume that having chosen a goal and path preference, the human takes actions to noisily minimize a cost function C_g,θ that measures the cost of moving from the robot's current location to the goal along the preferred path. For example, C_g,θ(s_t, o_t) can be the length of the shortest path from location s_t to the goal g after taking a first step to o_t, constrained by path preference θ.
We use C_g,θ to induce a probability distribution over observations, given by

P(o_t | s_t, g, θ) ∝ exp(−γ_h C_g,θ(s_t, o_t)),

where γ_h is a hyperparameter that designates the rationality coefficient. This model assumes the human will pick the lowest-cost action with the highest probability, and the likelihood of an action decreases exponentially with the increase in cost.
Our inclusion of the path preference θ sets our approach apart from prior work. The model is represented as a Bayesian network in fig. .

Inference

At each time step where the human provides an observation, the posterior P(g, θ) is given through the Bayesian update

P(g, θ | o_t, s_t) ∝ P(o_t | s_t, g, θ) P(g, θ).

We note that the number of Bayesian updates required at each iteration to update the belief is equal to the cardinality of Ω_g × Θ. In addition, each Bayesian update involves computing C_g,θ(·, ·) in eq. ( ), which involves solving an optimization problem (such as a shortest path problem). In section IV, we propose a specific encoding of preference θ for resolving eq. ( ), while ensuring the number of computations of the cost C_g,θ(·, ·) per update does not grow exponentially with the number of obstacles.

Decision Making

We consider a navigation problem where the robot receives reward according to the model R(s_t, g, θ, a_t). We wish to find the optimal policy π that maximizes the expected discounted sum of future rewards, with discount factor γ. The above problem is a Partially Observable Markov Decision Process (POMDP).
In this section, we propose an encoding of the human's path preference θ for computing the posterior in eq. ( ). Deviating from the concept of homotopy classes, we define the preference according to a partitioning of the environment into polytopes, as shown in fig. , creating a hyperplane arrangement of the space.
Hyperplane arrangements have been used by Vincent and Schwager in the context of neural network verification. In our setting, we leverage this representation to define path preferences as preferred transitions between adjacent regions of the space.

Hyperplane Arrangement

We assume a two-dimensional environment composed of m polytopic obstacles, each defined by its half-space representation (H-representation)

O_i = {x ∈ R^2 : A_i x ≤ b_i},

where A_i ∈ R^(d_i×2) and b_i ∈ R^(d_i), and where d_i is the number of edges (hyperplanes) composing polytope i. Let n = Σ_i d_i be the total number of hyperplanes. We leverage each obstacle's H-representation to construct a hyperplane arrangement of the environment as shown in fig. , i.e. a partitioning of the space into polytopes.
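A small sketch of how the obstacle H-representations might be stored and stacked into the n hyperplanes used to build the arrangement; the names and the example obstacle are assumptions for illustration, not the paper's code.

import numpy as np

# Each obstacle O_i = {x : A_i x <= b_i} is kept as a pair (A_i, b_i),
# with A_i of shape (d_i, 2) and b_i of shape (d_i,).
obstacles = [
    (np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]),
     np.array([4.0, -2.0, 3.0, -1.0])),   # an axis-aligned box, for illustration
]

# Stack all obstacle hyperplanes: n = sum_i d_i rows in total.
A = np.vstack([Ai for Ai, _ in obstacles])        # shape (n, 2)
b = np.concatenate([bi for _, bi in obstacles])   # shape (n,)
n = A.shape[0]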
More specifically, each location in space belongs to a polytope j for which we can write an H-representation of the form

P_j = {x ∈ R^2 : diag(α_i^j)(A_i x − b_i) ≤ 0, i = 1, …, m},

where α_i^j ∈ {−1, 1}^(d_i) is a vector specific to polytope j and obstacle i corresponding to the relative position of any point in the set with respect to each hyperplane in O_i.
Fig. : Intent inference model in a hyperplane arrangement of the obstacle-free space. We spatially decompose the preference θ into a set of preferred neighboring polytopes per region of the space. Within each polytope j, the human preference p_j is a discrete distribution over the preferred neighbor in N(j).
We assume that for a location s_t belonging to polytope j, and given goal g and preference p_j, the observation o_t and any other preference p_i, i ≠ j, are conditionally independent. Concatenating elements from each obstacle's H-representation, we can write polytope j's H-representation as

P_j = {x ∈ R^2 : diag(α^j)(A x − b) ≤ 0},

where A ∈ R^(n×2) and b ∈ R^n stack the rows of the A_i and b_i, and α^j ∈ {−1, 1}^n concatenates the α_i^j. Some of the constraints in eq. ( ) (corresponding to rows of A, b and α^j) are redundant, i.e. the set P_j does not change upon their removal.
We can further reduce the H-representation of a polytope to include only non-redundant constraints. By removing the rows corresponding to redundant constraints, we obtain new matrices A_e^j, b_e^j and α_e^j such that we can write the polytope's reduced H-representation as

P_j = {x ∈ R^2 : diag(α_e^j)(A_e^j x − b_e^j) ≤ 0}.

The non-redundant constraints correspond to edges of the polytope.
In other words, as the robot continually moves in space, the first hyperplane that it will cross upon exiting the polytope will correspond to one of the polytope's non-redundant constraints. Vincent and Schwager outline an iterative method for removing redundant constraints by solving n linear programs.
We use this method in practice for computing α_e^j for each polytope. We can now characterize each polytope by a vector α_e^j ∈ {−1, 1}^(n_e^j), where n_e^j ≤ n is the number of essential constraints of the polytope. The polytopes P_j partition the environment into a hyperplane arrangement.

Path Preference

In this section, we provide a definition of preference θ according to a graphical representation of the environment based on the hyperplane arrangement. Under this representation, a path preference corresponds to a set of preferred transitions. In other words, for each polytope in the space, the human will have a preference for which neighboring polytope they wish to transition to.
Let G := (V, E) be an undirected graph, where vertices are obstacle-free polytopes, and edges connect two adjacent polytopes. Each polytope is described by a unique vector α^j as defined in eq. ( ). Two polytopes are adjacent if they share non-redundant constraints (rows in eq. ( )) corresponding to the same hyperplane (i.e. they are on opposite sides of the hyperplane).
Let N(v) be the set of neighbors of a vertex v. For each vertex, we denote by p_v the discrete-valued random variable describing which edge in N(v) the human intends to transition to. Using this formalism, we define a path preference as the set of preferred transitions over all nodes in the graph,

θ := {p_v : v ∈ V}.

Let m_θ = ∏_(v∈V) |N(v)| be the cardinality of Θ, and m_g = |Ω_g| the number of possible goals.
A priori, the number of Bayesian updates required to update the belief at every iteration should be m_θ × m_g. Now, let us assume the conditional independence relationships described by the new problem diagram in fig. .
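Before turning to that assumption, here is a rough sketch of the encoding just described: a location is mapped to its polytope through the sign vector α, and polytopes whose sign vectors differ on a single hyperplane are linked into G. The simple Hamming-distance test stands in for the non-redundant-constraint check in the text, and all names are illustrative.

import numpy as np

def sign_vector(x, A, b, tol=1e-9):
    # alpha has one entry per hyperplane: +1 if A_k x <= b_k, -1 otherwise,
    # i.e. it records on which side of each hyperplane the point lies.
    return tuple(1 if v <= tol else -1 for v in (A @ x - b))

def build_polytope_graph(free_samples, A, b):
    # Group sampled obstacle-free locations by sign vector (one node per polytope),
    # then connect polytopes that disagree on exactly one hyperplane.
    cells = {}
    for x in free_samples:
        cells.setdefault(sign_vector(np.asarray(x, float), A, b), []).append(x)
    nodes = list(cells)
    edges = {v: set() for v in nodes}
    for i, u in enumerate(nodes):
        for w in nodes[i + 1:]:
            if sum(a != c for a, c in zip(u, w)) == 1:
                edges[u].add(w)
                edges[w].add(u)
    return cells, edges   # a preference assigns, to each node, one preferred outgoing edge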
More specifically, we introduce the assumption that conditioned on a robot location s_t, the goal g, and the preference for the corresponding vertex p_v in the graph, the observation o_t and the preference for any other vertex are conditionally independent.
In other words, the observations the human provides can be defined conditioned only on the robot location, the goal, and the human's preference for its current vertex p_v. By introducing this assumption, each update step only requires updating the joint (p_v, g), reducing the number of cost computations to |N(v)| × m_g.
We can notice that by introducing this assumption, we removed the direct relationship between the number of polytopes in the environment and the complexity of the Bayesian update in eq. ( ). In practice, components of θ are not mutually independent. For example, if the human preference at a vertex v_1 is p_{v_1} = (v_1, v_2), it is unlikely that the human will also prefer p_{v_2} = (v_2, v_1) (turning back). We can improve our model by assuming a dependent relationship between preferences for adjacent edges, which does not significantly increase the complexity of the inference problem. An interesting property of our encoding is that any two paths that belong to different homotopy classes will cross different sequences of polytopes, i.e. they correspond to a different sequence of edges on G.
This can be proved by contradiction. Let us suppose that two continuous trajectories ξ_1 and ξ_2, with the same start and end points and that do not intersect any obstacle, traverse the same regions in G in the same order. From the construction of the hyperplane arrangement, each polytope that the paths traverse through is obstacle-free.
Therefore, within each polytope, there is no obstacle in the area located in between the portions of ξ_1 and ξ_2 that belong to the region. A smooth transformation of ξ_1 into ξ_2 can be obtained by transforming each portion of ξ_1 belonging to the polytopes it intersects into the corresponding portion of ξ_2 for the same polytopes, where the extremities of the trajectory portions are connected to one another along the polytope's edges (where the same edge is crossed by both paths).
Along this transformation, the paths do not intersect any obstacle, and therefore ξ_1 and ξ_2 belong to the same homotopy class.

EXPERIMENTS

We evaluate our model on a simulated navigation task where the robot must reach a goal that is unknown a priori while respecting the path preferences indicated by a human. The robot navigates in a grid world containing obstacles. The transition model is deterministic: the robot selects an adjacent location on the grid to reach at the next time step.
The robot is also allowed to take diagonal actions. Each location s_t in the map can be mapped to a vertex v_t ∈ G. Therefore, the actions leading to locations mapped to different vertices correspond to edges on the graph. We denote by f(s_t, a_t) the edge crossed by taking action a_t from location s_t.
The robot is given a mission time limit T_max for reaching the goal. In this problem, we assume that the human selects actions to noisily minimize a cost function C_g,θ, where θ is defined as per eq. ( ), corresponding to the length of the shortest path to the goal constrained by the preference (where the robot is only allowed to make transitions on G along preferred edges).
More specifically,

C_g,θ(s_t, o_t) = δ(s_t, g | o_t, p_{v_t}),

where δ(s_t, g | o_t, p_{v_t}) designates the length of the shortest path from s_t to g passing by o_t and constrained by preference p_{v_t}.
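One way to realize the preference-constrained length δ is a grid search that prunes any move whose polytope transition is not the preferred edge. The sketch below uses a Dijkstra-style search (the paper's experiments use A*), and the helper callables are assumptions for illustration.

import heapq
import math

def constrained_path_length(start, goal, is_free, polytope_of, preferred_next):
    # delta(start, g | ., p_v): shortest 8-connected path length from start to goal,
    # where leaving a polytope v is only allowed toward preferred_next(v).
    frontier = [(0.0, start)]
    best = {start: 0.0}
    while frontier:
        d, cell = heapq.heappop(frontier)
        if cell == goal:
            return d
        if d > best.get(cell, math.inf):
            continue
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) == (0, 0):
                    continue
                nxt = (cell[0] + dx, cell[1] + dy)
                if not is_free(nxt):
                    continue
                v, w = polytope_of(cell), polytope_of(nxt)
                if v != w and preferred_next(v) != w:
                    continue            # prune transitions that violate the preference
                nd = d + math.hypot(dx, dy)
                if nd < best.get(nxt, math.inf):
                    best[nxt] = nd
                    heapq.heappush(frontier, (nd, nxt))
    return math.inf   # goal unreachable under the preference constraint

C_g,θ(s_t, o_t) would then be the one-step cost from s_t to o_t plus constrained_path_length(o_t, g, ...), which is also the quantity each Bayesian update over (p_v, g) needs to evaluate.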
This is a slight variant of the cost function proposed by Best and Fitch , where we add in a conditioning on the path preference. We compute costs by running the A path planning algorithm on the environment maps (grid worlds with diagonal actions) and impose preference constraints by pruning invalid transitions from the search tree.\nReward model. At each step in time, the robot receives a reward which is a sum of three components: a goal-specific reward a preference-specific reward or penalty We compute solutions to the POMDP defined in section III-B with the online solver POMCP , and with the particularity that within the rollouts, the robot does not expect to collect human inputs.\nEach time a solution is computed, the robot takes an action and may receive an observation. If it does, it updates its belief distribution over the unknown problem variables and resolves the POMDP over a receding horizon.\n\nBaselines\n\n• Goal only. The robot solves the POMDP while ignoring the effects of path preference. Similarly to , we assume the human is taking action to minimize a goaldependent cost C g (s t , o t ) = δ(s t , g | o t ), where the conditioning on the preference is removed. We also omit the path preference's contribution to the reward R pref .\n• Compliant. The robot complies with the human input, but does not take an initiative. If the user stops providing information, the robot continues in the last direction indicated for 5 time steps (conserving its momentum), then stops. • Blended. We designed an arbitration function to decide between our proposed policy (accounting for path preferences) and the user's recommendation when the robot receives inputs.\nOur metric to evaluate confidence in the robot's prediction for the purpose of arbitration is the entropy of the intention distribution H(g, p i ), where p i denotes the preferred neighbor for the current region. Because our representation of the world is discrete, the arbitration is given by a step function.\nDenoted by U , the action corresponding to the human's input, and P , the robot's prediction for the optimal action, we write the policy where we chose h = 1.6 as the confidence threshold.\n\nResults\n\nWhen evaluating the algorithm, we consider that a run is successful if the robot reached the goal within its allocated mission time T max and only made transitions between graph vertices corresponding to the human's preferences. We vary the time delay between human inputs, from constant guidance (∆ T = 1) to only a single observation (∆ T ≥ T max ).\nSuccess rates. Table I reports the success rates for experiments conducted over six randomly sampled problem instances and 50 runs per instance in Map 1 (fig. ). When the human provides inputs at every iteration, the compliant policy shows the highest success rates. However, as ∆ T increases, the compliant robot is not able to accomplish the task within the allotted time as it does not receive sufficient inputs to do so, and performance decreases compared to the autonomous baselines.\nWe find that in these runs, accounting for path preference consistently improves performance compared with the goal-only baseline. Results also show that blending the user's input with the robot's policy (Path Preference + Blend) when the human provides information leads to improved performance. Belief entropy.\nFigure shows a challenging problem instance where the directions the human provides do not align directly with the shortest path to the goal. 
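As a brief aside on the Blended baseline above: the arbitration is a step function on the entropy of the intention distribution, but its explicit formula is not reproduced here, so the sketch below encodes one plausible reading (low entropy, i.e. a confident prediction, selects the robot's action P; otherwise the user's input U is followed), with h = 1.6 as quoted. The direction of the comparison is an assumption of this sketch.

import math

def entropy(dist):
    # Shannon entropy of a discrete distribution given as {outcome: probability}.
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def blended_action(user_action, predicted_action, intention_dist, h=1.6):
    # Step-function arbitration: act autonomously when confident, comply otherwise.
    if entropy(intention_dist) < h:
        return predicted_action
    return user_action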
By ignoring the effects of preferences in the problem model (goal only), the robot quickly infers from observations that the upper left goal is less likely than others (P (g) drops).\nThe strong decrease in entropy shows that the robot becomes overconfident in this prediction. Overconfidence in an incorrect goal will prevent the agent from finding the correct goal once the human's indications directly align with it, as it needs to correct for the wrong predictions, as shown in the path realization (fig.\n). In this realization, the goal-only method (green robot) fails to search the upper left area within the allotted time. By accounting for path preferences in its model, the blue robot's entropy over the goal distribution decreases more steadily, allowing for it to leverage the human's latest observations and reach the goal successfully.\nshows an over-confident prediction (shown by the strong reduction in belief entropy) that the correct goal is less likely, making it more difficult to reach the correct goal compared to a method that accounts for path preference. Computation time. In table II we provide the time required to solve the POMDP, and the time required to update the robot's belief as it receives new observations.\nWe compute solutions on three maps: a simple 10 × 10 grid world with 8 polytopes (fig. ), a 10 × 10 grid world with 56 polytopes (fig. ), and a 20×20 grid world with 73 polytopes (fig. ). The latter environment being larger, we increase the mission time and the depth of the search tree in POMCP from T max = 30 (Map 1 and Map 2) to T max = 60 (Map 3).\nWe do not notice an increase in the time required to update the robot's belief with an increase in problem complexity, which is consistent with our observation that the complexity of the Bayesian update should not increase with the number of obstacles or polytopes. On the contrary, the belief update time on Map 2 and Map 3, containing more obstacles, is reduced compared to the first map.\nMore obstacles result in fewer iterations when solving the constrained shortest path problem with A . Adding constraints due to the obstacles and polytopes reduces the size of the A search tree. C. Limitations Simulation environments. In our simulations, we hardcoded the preference policy over the maps (e.g. in Map 1, go around the table counter-clockwise).\nWe randomly sampled problem instances (start and goal locations, and goal options) to reduce the bias introduced by these preference choices. To best evaluate and compare the different approaches, it would be best to sample preferences among a distribution of preferences chosen by a human (for example, from benchmarks resulting from a collection of data).\nCreating such a benchmark is an interesting direction for future work. Hyperplane arrangement construction. The main limitation of our approach is that the size and geometry of each polytope depends strongly on the geometry of the obstacles, as seen in fig. . Because of this, the robot can make predictions over preferences that are too refined compared with the topology of the environment.\nA direct consequence is that when the size of the polytopes is small, the information provided by the human can be incorrectly interpreted as a preference on the robot's immediate action. 
Our method can be improved by changing the structure of the hyperplane arrangement so that it relies on the topology of the environment, but does not vary strongly with the geometry of the features in the environment.\nFor this purpose, topometric maps and region construction algorithms are promising directions. We presented an approach for encoding and inferring a human's path preference in an environment with obstacles. By leveraging a partitioning of the space into polytopes and a stochastic observation model, our method allows for joint inference over the goal and path preference even when both are unknown a-priori.\nOur experiments on an unknown-goal navigation problem with sparse human interventions demonstrate the effectiveness of our approach and its suitability for online applications. The time required to update the robot's belief does not increase with the complexity of the environment, which further highlights the practicality of our method.", "answers": ["The time required to update the belief does not increase with the complexity of the environment."], "length": 5665, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "aeb34faf1b5507b32d6d5b49474ea5c71d4ec2aa84a82d93"} {"input": "What did Justice Kennedy argue about Quill in Direct Marketing Ass'n v. Brohl?", "context": "South Dakota v. Wayfair, Inc. - Harvard Law Review\nFourth Circuit Invalidates Maryland Statute Regulating Price Gouging in the Sale of Generic Drugs.\nSouth Dakota Supreme Court Holds Unconstitutional State Law Requiring Internet Retailers Without In-State Physical Presence to Remit Sales Tax.\nJudicial junk, the Court has long thought, is easier to scrap when the erroneous precedent cannot be fixed by Congress, as in constitutional cases.1× 1. See Burnet v. Coronado Oil & Gas Co., 285 U.S. 393, 405–10 (1932) (Brandeis, J., dissenting); Lee Epstein, William M. Landes & Adam Liptak, The Decision to Depart (or Not) from Constitutional Precedent: An Empirical Study of the Roberts Court, 90 N.Y.U. L. Rev. 1115, 1116 (2015) (“[Justice Brandeis’s] dissenting opinion . . . now has the status of black letter law.”). On the flip side, whenever a bad precedent can be corrected by Congress, stare decisis applies with “special force.”2× 2. See Patterson v. McLean Credit Union, 491 U.S. 164, 172–73 (1989). The Court, following Justice Brandeis, usually articulates the rule as distinguishing between “constitutional” and “statutory” precedents. See, e.g., id. But the distinction is occasionally said to be between “constitutional” and “nonconstitutional cases.” See, e.g., Glidden Co. v. Zdanok, 370 U.S. 530, 543 (1962) (plurality opinion). Nomenclature aside, the Court has — until now — adhered to Justice Brandeis’s key insight that the important factor is whether or not the mistake may be legislatively corrected. Last Term, in South Dakota v. Wayfair, Inc.,3× 3. 138 S. Ct. 2080 (2018). the Court tinkered with this thinking in overruling an outdated dormant commerce clause precedent. Dormant commerce clause decisions technically produce constitutional holdings, but Congress may override them at will.4× 4. See Prudential Ins. Co. v. Benjamin, 328 U.S. 408, 421–27 (1946). Under the usual logic of stare decisis, it should take special force to dislodge such precedents. But Wayfair applied the weakened stare decisis of constitutional cases, asserting that the Court must “address a false constitutional premise . . . . whether or not Congress can or will act.”5× 5. Wayfair, 138 S. Ct. 
at 2096–97.\nEmerging from Wayfair is an odd and ominous development in stare decisis doctrine. Odd, because it turns on a formal classification instead of on Congress’s practical ability to fix the problem. Ominous, because the Court’s logic leads far past the dormant commerce clause. Wayfair grants only feeble stare decisis to precedents that set a “constitutional default rule,”6× 6. Id. at 2096 (“While . . . Congress has the authority to change the physical presence rule, Congress cannot change the constitutional default rule.”). meaning constitutional decisions that allow for legislative adjustment or override. This new stare decisis analysis makes other precedents setting constitutional default rules more vulnerable — including, perhaps, mainstays of criminal procedure like Miranda v. Arizona7× 7. 384 U.S. 436 (1966). and Mapp v. Ohio.8× 8. 367 U.S. 643 (1961).\nSince its 1967 decision in National Bellas Hess, Inc. v. Department of Revenue,9× 9. 386 U.S. 753 (1967). the Court has held that, under the “dormant” or “negative” implication of the Commerce Clause,10× 10. The dormant or negative commerce clause is a judicial derivation from the Commerce Clause “prohibiting States from discriminating against or imposing excessive burdens on interstate commerce without congressional approval,” which “strikes at one of the chief evils that led to the adoption of the Constitution, namely, state tariffs and other laws that burdened interstate commerce.” Comptroller of the Treasury of Md. v. Wynne, 135 S. Ct. 1787, 1794 (2015). states may not compel remote sellers with no physical presence in the state to collect and remit sales taxes.11× 11. See Bellas Hess, 386 U.S. at 759–60. In Quill Corp. v. North Dakota,12× 12. 504 U.S. 298 (1992). the Court refused to overrule the “bright-line, physical-presence requirement” of Bellas Hess, leaning heavily on stare decisis.13× 13. Id. at 317–18. Three Justices joined a concurrence explaining that their decision rested solely “on the basis of stare decisis.” Id. at 320 (Scalia, J., concurring in part and concurring in the judgment). So the physical presence test remained the law of the land while the internet conquered the earth. Justice Kennedy had joined the Quill majority and Justice Scalia’s concurring opinion emphasizing stare decisis, but by 2015 he had second thoughts. Writing separately in Direct Marketing Ass’n v. Brohl,14× 14. 135 S. Ct. 1124 (2015). Justice Kennedy acknowledged that “[t]he Internet has caused far-reaching systemic and structural changes in the economy” and therefore “Quill now harms States to a degree far greater than could have been anticipated earlier.”15× 15. Id. at 1135 (Kennedy, J., concurring). He concluded with the wish that “[t]he legal system should find an appropriate case for this Court to reexamine Quill and Bellas Hess.”16× 16. Id.\nSeldom has a concurring opinion signed by a lone Justice prompted a state to officially declare an emergency. Yet in 2016, in response to Justice Kennedy’s overture, the South Dakota legislature passed a law, S.B. 106, “to provide for the collection of sales taxes from certain remote sellers . . . and to declare an emergency.”17× 17. 2016 S.D. Sess. Laws ch. 70 pmbl. 217 (codified at S.D. Codified Laws § 10-64 (2017)). It required every remote seller to collect and remit sales tax if the seller’s business in South Dakota comprised either a “gross revenue” greater than $100,000 or at least 200 “separate transactions” within one calendar year.18× 18. Id. § 1. 
Significantly, the law did not apply retroactively.19× 19. Id. § 5. The “emergency” declaration was necessary to give the law immediate effect, for the purpose of “permitting the most expeditious possible review of the constitutionality of this law” by the U.S. Supreme Court.20× 20. Id. § 8(8). As Justice Alito put it, the “South Dakota law [was] obviously a test case.”21× 21. Transcript of Oral Argument at 27, Wayfair, 138 S. Ct. 2080 (No. 17-494), https://www.supremecourt.gov/oral_arguments/argument_transcripts/2017/17-494_7lho.pdf [https://perma.cc/8HYH-VU8N].\nExpeditiously, a group of remote sellers challenged the law. After being sued by South Dakota for refusing to register for the newly required sales tax license, Wayfair, Inc., Overstock.com, Inc., and Newegg, Inc. moved for summary judgment in South Dakota circuit court on the grounds that S.B. 106 was unconstitutional under Quill and Bellas Hess — a point South Dakota conceded, indicating that it was seeking review by the U.S. Supreme Court to overturn Quill.22× 22. State v. Wayfair Inc., 2017 SD 56, ¶¶ 9–11, 901 N.W.2d 754, 759–60. Accordingly, the South Dakota circuit court granted the motion for summary judgment and South Dakota appealed to the state’s highest court.23× 23. Id. ¶ 12, 901 N.W.2d at 760. The South Dakota Supreme Court unanimously affirmed, recognizing that South Dakota’s “arguments on the merits” may be “persuasive” but “Quill remains the controlling precedent.”24× 24. Id. ¶ 18, 901 N.W.2d at 761. See generally Recent Case, State v. Wayfair Inc., 2017 SD 56, 901 N.W.2d 754 (S.D. 2017), 131 Harv. L. Rev. 2089 (2018).\nThe U.S. Supreme Court vacated and remanded.25× 25. Wayfair, 138 S. Ct. at 2100. Writing for the Court one last time, Justice Kennedy26× 26. Justices Thomas, Ginsburg, Alito, and Gorsuch joined Justice Kennedy’s opinion. pilloried Quill’s physical presence rule as “arbitrary, formalistic,” “anachronistic,” and “unfair and unjust” to both states and brick-and-mortar retailers.27× 27. Wayfair, 138 S. Ct. at 2092, 2095. After all, the rationale of Quill was that remote sellers lacked a sufficiently “substantial nexus” with the state to justify imposing a duty of tax collection.28× 28. Quill Corp. v. North Dakota, 504 U.S. 298, 311 (1992) (quoting Complete Auto Transit, Inc. v. Brady, 430 U.S. 274, 279 (1977)). This was wrong even in the mail-order catalog days of 1967 and 1992, but “the Internet revolution has made [Quill’s] earlier error all the more egregious and harmful.”29× 29. Wayfair, 138 S. Ct. at 2097; see also id. at 2092. The rule deprived the states of billions of dollars, since they could not force remote sellers to collect the tax and consumers hardly ever paid it on their own.30× 30. Id. at 2088 (“[C]onsumer compliance rates are notoriously low.”). Quill “serve[d] as a judicially created tax shelter” for remote retailers who do a great deal of business online.31× 31. Id. at 2094.\nSatisfied that Bellas Hess and Quill were wrongly decided, the Court then jumped the hurdle of stare decisis. The Quill Court had feared upsetting reliance interests.32× 32. Quill, 504 U.S. at 317 (“Bellas Hess . . . has engendered substantial reliance and has become part of the basic framework of a sizable industry.”). Wayfair shrugged off this concern, noting that “stare decisis accommodates only ‘legitimate reliance interest[s]’”; by contrast, reliance on the physical presence rule was largely due to consumers evading their use-tax obligations.33× 33. Wayfair, 138 S. Ct. 
at 2098 (alteration in original) (quoting United States v. Ross, 456 U.S. 798, 824 (1982)). Quill had also appealed to Congress’s ultimate authority over interstate commerce as a reason to abide by a precedent, even if wrongly decided.34× 34. See Quill, 504 U.S. at 318–19; id. at 320 (Scalia, J., concurring in part and concurring in the judgment) (“Congress . . . can change the rule of Bellas Hess by simply saying so.”). But Wayfair denied that Congress’s ability to change the law was a proper consideration:\nWhile it can be conceded that Congress has the authority to change the physical presence rule, Congress cannot change the constitutional default rule. It is inconsistent with the Court’s proper role to ask Congress to address a false constitutional premise of this Court’s own creation. Courts have acted as the front line of review in this limited sphere; and hence it is important that their principles be accurate and logical, whether or not Congress can or will act in response.35× 35. Wayfair, 138 S. Ct. at 2096–97.\nHaving dispensed with the physical presence rule, the Court remanded the case to the South Dakota courts to determine in the first instance “whether some other principle in the Court’s Commerce Clause doctrine might invalidate the Act.”36× 36. Id. at 2099. But the Court listed “several features [of South Dakota law] that appear[ed] designed to prevent discrimination against or undue burdens upon interstate commerce.” Id.\nJustices Thomas and Gorsuch each filed concurring opinions. Justice Thomas wistfully likened himself to Justice White — who voted for Bellas Hess but against Quill a quarter-century later — and confessed that he “should have joined [Justice White’s dissenting] opinion.”37× 37. Id. at 2100 (Thomas, J., concurring). Justice Thomas added that the “Court’s entire negative Commerce Clause jurisprudence” is wrong and should be abandoned.38× 38. Id. Justice Gorsuch also wrote separately to express skepticism of the Court’s dormant commerce clause jurisprudence, raising “questions for another day” of whether the doctrine “can be squared with the text of the Commerce Clause, justified by stare decisis, or defended as misbranded products of federalism or antidiscrimination imperatives flowing from Article IV’s Privileges and Immunities Clause.”39× 39. Id. at 2100–01 (Gorsuch, J., concurring).\nChief Justice Roberts dissented.40× 40. Justices Breyer, Sotomayor, and Kagan joined the Chief Justice’s dissent. Surprisingly, the dissenting Justices “agree[d] that Bellas Hess was wrongly decided, for many of the reasons given by the Court.”41× 41. Wayfair, 138 S. Ct. at 2101 (Roberts, C.J., dissenting). The dispute between the majority and the dissent turned entirely on the principles and application of stare decisis. Chief Justice Roberts argued that whether or how to reverse Quill should be left to Congress, which “has the flexibility to address these questions in a wide variety of ways” and “can focus directly on current policy concerns rather than past legal mistakes.”42× 42. Id. at 2104. He also pointed to the “baffling” burdens of compliance with the idiosyncratic tax codes of “[o]ver 10,000 jurisdictions,” particularly for small businesses, and doubted that new “software” — the majority’s proposed solution to this mess43× 43. Id. at 2098 (majority opinion) (“Eventually, software that is available at a reasonable cost may make it easier for small businesses to cope with these problems.”). — would soon solve the problem.44× 44. Id. 
at 2103–04 (Roberts, C.J., dissenting). In Bellas Hess, the Court reasoned that the dormant commerce clause protects interstate business from being “entangle[d] . . . in a virtual welter of complicated obligations to local jurisdictions.” Nat’l Bellas Hess, Inc. v. Dep’t of Revenue, 386 U.S. 753, 759–60 (1967). The dissent replied that the Court “vastly underestimate[d] the skill of contemporary man and his machines.” Id. at 766 (Fortas, J., dissenting). The dispute in Wayfair over whether software is up to the task effectively reprised the old debate from Bellas Hess, only this time couched as part of the stare decisis inquiry’s concern for reliance interests rather than as a matter of dormant commerce clause doctrine. While Wayfair acknowledged that “[c]omplex state tax systems could have the effect of discriminating against interstate commerce,” 138 S. Ct. at 2099, the Court remarked that “[t]he physical presence rule is a poor proxy” for an inquiry into any actual burdens imposed on interstate commerce, id. at 2093.\nChief Justice Roberts emphasized that a “heightened form of stare decisis”45× 45. Wayfair, 138 S. Ct. at 2102 (Roberts, C.J., dissenting). applies when “Congress . . . can, if it wishes, override this Court’s decisions with contrary legislation.”46× 46. Id. at 2101 (first citing Michigan v. Bay Mills Indian Cmty., 134 S. Ct. 2024, 2036 (2014) (tribal sovereign immunity); then citing Kimble v. Marvel Entm’t, LLC, 135 S. Ct. 2401, 2409 (2015) (statutory interpretation); and then citing Halliburton Co. v. Erica P. John Fund, Inc., 134 S. Ct. 2398, 2411 (2014) (judicially created doctrine implementing a judicially created cause of action)). In Quill, the Chief Justice noted, the Court had taken to heart that “Congress may be better qualified” and “has the ultimate power to resolve” the question47× 47. Id. at 2102 (quoting Quill Corp. v. North Dakota, 504 U.S. 279, 318 (1992)). while Justice Scalia had “recogniz[ed] that stare decisis has ‘special force’ in the dormant Commerce Clause context due to Congress’s ‘final say over regulation of interstate commerce.’”48× 48. Id. (quoting Quill, 504 U.S. at 320 (Scalia, J., concurring in part and concurring in the judgment)). Moreover, “[i]f stare decisis applied with special force in Quill, it should be an even greater impediment” afterward since Quill effectively “tossed [the ball] into Congress’s court.”49× 49. Id. (alteration in original) (quoting Kimble, 135 S. Ct. at 2409); cf. Bay Mills, 134 S. Ct. at 2039 n.12 (“When we inform Congress that it has primary responsibility over a sphere of law, and invite Congress to consider a specific issue within that sphere, we cannot deem irrelevant how Congress responds.”). Because the Court invited Congress to act and then “suddenly chang[ed] the ground rules, the Court may have waylaid Congress’s consideration of the issue.”50× 50. Wayfair, 138 S. Ct. at 2102–03 (Roberts, C.J., dissenting).\nIn Wayfair, the Court applied the flimsier form of stare decisis to a precedent that could have been overruled by Congress. It did so in the context of a dormant commerce clause case, but Wayfair’s logic extends to all constitutional default rules — that is, constitutional decisions that Congress remains free to change. Not only does Wayfair deviate from the Court’s decades-old stare decisis analysis, it also imperils other precedents that set constitutional default rules.\nThe Court’s reasoning in Wayfair departs from its prior stare decisis analysis. 
In 1932, Justice Brandeis posited that stare decisis must bend “in cases involving the Federal Constitution, where correction through legislative action is practically impossible.”51× 51. Burnet v. Coronado Oil & Gas Co., 285 U.S. 393, 406–07 (1932) (Brandeis, J., dissenting). The Court has long since adopted his argument,52× 52. See, e.g., Smith v. Allwright, 321 U.S. 649, 665 (1944). as well as its corollary — that stare decisis commands “special force in the area of statutory interpretation” where “Congress remains free to alter what [the Court has] done.”53× 53. Patterson v. McLean Credit Union, 491 U.S. 164, 172–73 (1989). For normative evaluations of heightened stare decisis for statutory precedents, see generally Einer Elhauge, Statutory Default Rules: How to Interpret Unclear Legislation 211–23 (2008); and William N. Eskridge, Jr., Overruling Statutory Precedents, 76 Geo. L.J. 1361, 1364–1409 (1988). Justice Brandeis’s logic demands that dormant commerce clause cases, where Congress is free to act, be granted the weightier stare decisis.54× 54. Scholars have noted the curious fact that Justice Brandeis included many dormant commerce clause cases as examples of overruled constitutional precedents. See, e.g., Earl M. Maltz, Commentary, Some Thoughts on the Death of Stare Decisis in Constitutional Law, 1980 Wis. L. Rev. 467, 468–469, 469 n.11. One explanation for this is that Justice Brandeis sought the authority of Chief Justice Taney’s dictum that the Court’s “opinion upon the construction of the Constitution is always open to discussion” — which referred to the dormant commerce clause. See Burnet, 285 U.S. at 408 n.3 (Brandeis, J., dissenting) (quoting The Passenger Cases, 48 U.S. (7 How.) 283, 470 (1849) (Taney, C.J., dissenting)). In Chief Justice Taney’s time, it was thought that Congress could not override the Court’s dormant commerce clause decisions, see Cooley v. Bd. of Wardens, 53 U.S. (12 How.) 299, 321 (1852), so the context of Chief Justice Taney’s dictum does not conflict with Justice Brandeis’s theory of stare decisis. The Court applied this reasoning in Quill, as Chief Justice Roberts underscored.55× 55. Wayfair, 138 S. Ct. at 2102 (Roberts, C.J., dissenting).\nYet the Wayfair majority refused to consider Congress’s authority to legislate as a relevant factor for stare decisis.56× 56. Even Justice Kennedy’s earlier opinion in Direct Marketing contemplated judicially overruling Quill, conspicuously neglecting a possible legislative solution. See supra p. 278. The Court even insisted that to do so “is inconsistent with the Court’s proper role,” since Quill embodied “a false constitutional premise of th[e] Court’s own creation.”57× 57. Wayfair, 138 S. Ct. at 2096 (emphasis added). This refusal breaks from the practical Brandeisian wisdom that has guided the Court’s treatment of precedent for the better part of a century. The point is not that stare decisis should have ultimately propped up Bellas Hess yet again, as Wayfair’s dissenting Justices maintained. After all, a realistic approach that is alert to each branch’s institutional capacities might have led to the conclusion that Congress was actually ill-equipped to overrule Quill. In this vein, the Court could have sensibly pointed out that Congress is unlikely to stick its neck out with a tax hike (or a look-alike) from which only the states would benefit.58× 58. 
For two practical arguments to this effect, see Brian Galle, Essay, Kill Quill, Keep the Dormant Commerce Clause: History’s Lessons on Congressional Control of State Taxation, 70 Stan. L. Rev. Online 158, 160–62 (2018), https://review.law.stanford.edu/wp-content/uploads/sites/3/2018/03/70-Stan.-L.-Rev.-Online-158-Galle.pdf [https://perma.cc/22YP-P4V5]; Edward A. Zelinsky, The Political Process Argument for Overruling Quill, 82 Brook. L. Rev. 1177, 1191–92 (2017). Indeed, South Dakota advanced such practical arguments in its brief.59× 59. See Petitioner’s Brief at 54, Wayfair, 138 S. Ct. 2080 (No. 17-494) (“Congress has little incentive to act here because it would be (or appear to be) authorizing new or greater tax collections from its constituents, while receiving none of the revenue in return.”). More generally, the Court might have discussed the limits of the states’ influence in the federal system as a reason not to wait for congressional intervention, a topic it has debated on other occasions.60× 60. See Richard H. Pildes, Institutional Formalism and Realism in Constitutional and Public Law, 2013 Sup. Ct. Rev. 1, 30–32; see also Galle, supra note 58, at 159 (“Congress is not a trustworthy guardian of state fiscal power, making continuing judicial involvement a more appealing prospect.”). Or it could have argued that new facts on the ground — namely, the blast of e-commerce that hit like a comet after Quill — overpowered stare decisis of any force, special or plain.61× 61. Two recent studies of stare decisis highlighted the physical presence rule as exemplifying a precedent that may reasonably be overruled due to changed facts. See Bryan A. Garner et al., The Law of Judicial Precedent 364–65 (2016); Randy J. Kozel, Settled Versus Right: A Theory of Precedent 112–13 (2017). It should be noted that the authors of The Law of Judicial Precedent classify the physical presence rule as a constitutional precedent for stare decisis purposes, thus anticipating the Court’s misstep in Wayfair. Garner et al., supra, at 354–65. Because even statutory precedents may sometimes be overruled,62× 62. See Patterson v. McLean Credit Union, 491 U.S. 164, 173–74 (1989) (discussing justifications for overruling statutory precedents). Contra Lawrence C. Marshall, “Let Congress Do It”: The Case for an Absolute Rule of Statutory Stare Decisis, 88 Mich. L. Rev. 177 (1989). the Court could have killed Quill without first planting its constitutional kiss of death.63× 63. Cf. Thomas R. Lee, Stare Decisis in Historical Perspective: From the Founding Era to the Rehnquist Court, 52 Vand. L. Rev. 647, 704 (1999) (“Justice Brandeis’ . . . memorable prose has since become a mandatory part of the burial rite for any constitutional precedent.”).\nThe Court resisted such arguments. Instead, Wayfair reasoned that Congress’s total ability to correct an erroneous decision counts for nothing when the Court gets the Constitution wrong. That such a theory sprouts from a case like Wayfair, which repudiated a “formalistic distinction,”64× 64. Wayfair, 138 S. Ct. at 2092. is ironic. Wayfair’s stare decisis analysis resorts to the formalism of making constitutional a “magic” word65× 65. See Transcript of Oral Argument, supra note 21, at 12. rather than asking whether Congress can step in.\nMoreover, the Court’s new thinking on stare decisis threatens other constitutional default rules. 
Wayfair now stands for the proposition that a “constitutional default rule” — a term the Court apparently lifted from South Dakota’s reply brief on the merits66× 66. Reply Brief at 22, Wayfair, 138 S. Ct. 2080 (No. 17-494) (“Congress is polarized, which makes it critical . . . to get the constitutional default rule right.”). — gets only weakened stare decisis. To appreciate why this holding matters, it is worth exploring the concept and scope of constitutional default rules. Contract theory describes default rules as legal rules that the parties may “contract around.”67× 67. See, e.g., Ian Ayres & Robert Gertner, Filling Gaps in Incomplete Contracts: An Economic Theory of Default Rules, 99 Yale L.J. 87, 87 (1989). Although “constitutional default rule” could be read broadly to include a variety of actors and contracting mechanisms,68× 68. See John Ferejohn & Barry Friedman, Toward a Political Theory of Constitutional Default Rules, 33 Fla. St. U. L. Rev. 825, 826 (2006) (“When we speak of default rules in constitutional law, we typically are talking about specifications of ways the government can act (or modify its behavior) to get around a constitutional prohibition.”). the Court’s use of the term for purposes of stare decisis may be narrowly defined as judicial precedents of constitutional law that “are ultimately subject to congressional control.”69× 69. Gillian E. Metzger, Congress, Article IV, and Interstate Relations, 120 Harv. L. Rev. 1468, 1525 (2007) (describing judicially enforceable “constitutional default rules imposing obligations on the states in the name of union [that] are ultimately subject to congressional control”). The dormant commerce clause is a paradigmatic constitutional default rule because what the Court does today Congress may undo tomorrow. Justice Scalia declared this fact “[t]he clearest sign that the negative Commerce Clause is a judicial fraud,” for “[h]ow could congressional consent lift a constitutional prohibition?”70× 70. Comptroller of the Treasury of Md. v. Wynne, 135 S. Ct. 1787, 1808 (2015) (Scalia, J., dissenting). But that’s what a constitutional default rule is. The Court has allowed Congress to overturn its dormant commerce clause cases since 1891.71× 71. See In re Rahrer, 140 U.S. 545, 560–62 (1891).\nDormant commerce clause cases are not the only constitutional default rules. Professor Laurence Tribe’s treatise identifies two others.72× 72. 1 Laurence H. Tribe, American Constitutional Law § 6-35 (3d ed. 2000). And in a groundbreaking article, Professor Henry Monaghan revealed “a substructure of substantive, procedural, and remedial rules” forming “a constitutional common law subject to amendment, modification, or even reversal by Congress.”73× 73. Henry P. Monaghan, The Supreme Court, 1974 Term — Foreword: Constitutional Common Law, 89 Harv. L. Rev. 1, 2–3 (1975); see also Susan R. Klein, Identifying and (Re)Formulating Prophylactic Rules, Safe Harbors, and Incidental Rights in Constitutional Criminal Procedure, 99 Mich. L. Rev. 1030 (2001) (further developing Monaghan’s theory in criminal procedure context). What follows is a list of six lines of cases beyond the dormant commerce clause that may be fairly described as constitutional default rules. The first two are drawn from Tribe’s treatise while the next four are found in Monaghan’s article:\n(1) State Taxation of Federal Instrumentalities: States may not tax instrumentalities of the federal government74× 74. McCulloch v. Maryland, 17 U.S. (4 Wheat.) 316, 436 (1819). 
— unless Congress consents.75× 75. See, e.g., Helvering v. Gerhardt, 304 U.S. 405, 411 n.1 (1938) (“Congress may curtail an immunity which might otherwise be implied or enlarge it beyond the point where, Congress being silent, the Court would set its limits.” (citations omitted)). One court has described such judicial decisions as setting a “constitutional default rule.” United States v. Delaware, 958 F.2d 555, 560 n.9 (3d Cir. 1992) (“[W]e must decide the constitutional default rule for this type of tax, fully aware that Congress could decide at any time to reverse our decision statutorily.”). (2) Article I, Section 10 Cases: Article I, Section 10 provides that certain prohibitions on the states may be waived by Congress.76× 76. See U.S. Const. art. I, § 10, cls. 2–3. The Court has taken note of this when considering whether to overrule, for instance, an Import-Export Clause precedent.77× 77. See Hooven & Allison Co. v. Evatt, 324 U.S. 652, 668 (1945) (“In view of the fact that the Constitution gives Congress authority to consent to state taxation of imports and hence to lay down its own test for determining when the immunity ends, we see no convincing practical reason for abandoning the test which has been applied for more than a century . . . .”), overruled on other grounds by Limbach v. Hooven & Allison Co., 466 U.S. 353 (1984). In Michelin Tire Corp. v. Wages, 423 U.S. 276 (1976), the Court left open the question whether “Congress may authorize, under the Import-Export Clause, an exaction that it could not directly impose under the Tax Clause.” Id. at 301 n.13. Metzger, however, argues that the Import-Export Clause is free of other clauses’ limits on congressional power. See Metzger, supra note 69, at 1500 & n.120. (3) Bivens Cases: In Bivens v. Six Unknown Named Agents of Federal Bureau of Narcotics,78× 78. 403 U.S. 388 (1971). the Court held that a violation of the Fourth Amendment gives rise to a right to sue for damages.79× 79. Id. at 397. But the Court has also held that “[s]uch a cause of action may be defeated . . . when . . . Congress has provided an alternative remedy which it explicitly declared to be a substitute for recovery directly under the Constitution and viewed as equally effective.”80× 80. Carlson v. Green, 446 U.S. 14, 18–19 (1980). (4) Miranda Cases: The Miranda Court famously “encourage[d]” Congress and the states to explore alternative “procedures which are at least as effective in apprising accused persons of their right of silence and in assuring a continuous opportunity to exercise it.”81× 81. Miranda v. Arizona, 384 U.S. 436, 467 (1966). In Dickerson v. United States, 530 U.S. 428 (2000), the Court struck down a congressional attempt to effectively abolish Miranda, holding that “Miranda announced a constitutional rule that Congress may not supersede legislatively.” Id. at 444. But Dickerson also stood by Miranda’s “invitation for legislative action” to replace Miranda with an adequate substitute. Id. at 440; see also Michael C. Dorf & Barry Friedman, Shared Constitutional Interpretation, 2000 Sup. Ct. Rev. 61 (discussing legislative alternatives to Miranda). (5) The Police Lineup Case: In United States v. Wade,82× 82. 388 U.S. 218 (1967). the Court created an exclusionary rule for evidence obtained from a police lineup in violation of the Sixth Amendment right to counsel but acknowledged that it could be replaced by “[l]egislative or other regulations . . . which eliminate the risks of abuse.”83× 83. Id. at 239. (6) The Exclusionary Rule Cases: Mapp v. 
Ohio made the Fourth Amendment “exclusionary rule” binding on the states,84× 84. 367 U.S. 643, 655 (1961). yet Congress is thought to have the power to replace it.85× 85. See Bivens v. Six Unknown Named Agents of Fed. Bureau of Narcotics, 403 U.S. 388, 422–24 (1971) (Burger, C.J., dissenting) (inviting Congress to replace the Fourth Amendment exclusionary rule); Harold J. Krent, How to Move Beyond the Exclusionary Rule: Structuring Judicial Response to Legislative Reform Efforts, 26 Pepp. L. Rev. 855, 864–71 (1999).\nAll of the above are arguably constitutional default rules set by the Court that remain, to one degree or another, open to congressional revision. The list could be longer or shorter, depending on which default rules the Court will view as constitutional86× 86. A shorter list could be produced by whittling away at the constitutional status of the cases identified by Monaghan. While the Court has held that Miranda is a constitutional decision, Dickerson, 530 U.S. at 444, some of the other cases may be viewed as nonconstitutional. See, e.g., Collins v. Virginia, 138 S. Ct. 1663, 1675–80 (2018) (Thomas, J., concurring) (arguing that Mapp is “nonconstitutional,” id. at 1678 n.5); Richard H. Fallon, Jr. et al., Hart and Wechsler’s The Federal Courts and the Federal System 775–77 (7th ed. 2015) (discussing whether Bivens is constitutionally required). Conversely, a longer list might include any constitutional right that can be waived by a party. See, e.g., Daniel A. Farber, Another View of the Quagmire: Unconstitutional Conditions and Contract Theory, 33 Fla. St. U. L. Rev. 913, 918 (2006) (describing the Eleventh Amendment as “just a contractual default rule that the states are free to barter away”). Such a list might also include various constitutionally inspired judicial presumptions. See, e.g., Jack Goldsmith & John F. Manning, The President’s Completion Power, 115 Yale L.J. 2280, 2299 (2006) (describing the Chevron presumption of delegated interpretive power to administrative agencies as “a constitutionally inspired default rule”); Nicholas Quinn Rosenkranz, Federal Rules of Statutory Interpretation, 115 Harv. L. Rev. 2085, 2097–98 (2002) (describing clear statement rules as “constitutional default rules” reversible by Congress). Many other decisions could likely be characterized as constitutional default rules; the list above is only an initial stab. and on how it will answer open questions about congressional authority over certain constitutional provisions.87× 87. See, e.g., Thomas v. Wash. Gas Light Co., 448 U.S. 261, 272 n.18 (1980) (plurality opinion) (leaving unresolved whether Congress may limit constitutional full faith and credit obligations); White v. Mass. Council of Constr. Emp’rs, Inc., 460 U.S. 204, 215 n.1 (1983) (Blackmun, J., concurring in part and dissenting in part) (leaving unresolved “whether Congress may authorize . . . what otherwise would be a violation” of the Privileges and Immunities Clause); 1 Tribe, supra note 72, § 6-35, at 1243–44 (arguing that Congress cannot override judicial constructions of the Privileges and Immunities Clause); Metzger, supra note 69, at 1486–89 (arguing the opposite). But the takeaway is clear: weaker stare decisis for constitutional default rules. Pre-Wayfair, one would have thought that stare decisis applies with special force to such precedents, given congressional power to set them straight. Not anymore. Why? 
Because it is improper to “ask Congress to address a false constitutional premise of th[e] Court’s own creation.”88× 88. Wayfair, 138 S. Ct. at 2096. The Latin for Wayfair’s doctrine is not stare decisis, which should reflect a realistic, working relationship between the legislative and judicial branches. It is mea culpa.\nIn its zeal to update the Constitution for “the Cyber Age,”89× 89. Id. at 2097. the Court deleted Congress from stare decisis doctrine in constitutional cases. The Court had better options. It could have left Quill on Congress’s doorstep, as the dissent argued. Or it could have justified overruling Quill notwithstanding the special force of stare decisis. Instead, the Court reasoned that it doesn’t matter whether Congress is willing and able to do the job: a constitutional mess calls for a judicial clean-up crew. For constitutional default rules — a category of decisions embracing the dormant commerce clause and sweeping far beyond — Wayfair’s new theory of stare decisis makes the Court’s precedents less sticky and Congress less relevant.", "answers": ["Quill harmed states more than anticipated due to the Internet."], "length": 5429, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "e4725ed7b6df9933d5336f79d5c9d1d803a3e475831f1017"} {"input": "What is the main focus of the research paper?", "context": "Paper Info\n\nTitle: Nuclear Liquid-Gas Transition in the Strong Coupling Regime of Lattice QCD\nPublish Date: 28 Mar 2023\nAuthor List: J Kim (from Institute for Advanced Simulation (IAS-4), Forschungszentrum Jülich), P Pattanaik (from Fakultät für Physik, Bielefeld University), W Unger (from Fakultät für Physik, Bielefeld University)\n\nFigure\n\nFIG. 1.Typical 2-dimension configuration at β = 1.0, at non-zero quark mass, temperature, chemical potential.The black dots are monomers, the blue lines are dimers, the red arrows are baryon loop segments (or triplets g b + f b = ±3 if adjacent to a non-trivial plaquette), and the green squares are plaquette occupations ±1.The actual configurations are 3+1-dimensional.\nFIG.2.Chiral susceptibility on a 2 4 volume for various quark masses, as a function of the bare anisotropy γ (with aT = γ 2 /2), analytic results from enumeration compared to numerical data from simulations via the worm algorithm.\nFIG.3.Various observables in the µB-T plane on a 2 4 volume at amq = 0.1.The back-bending of the first order transition at temperatures below aT = 0.5 in all observables is an artifact of the small volume, and vanishes in the thermodynamic limit.The temperature aT = 1/2 corresponds to the isotropic lattice here.\nFIG. 4. The chiral condensate (left) and the baryon density (right) for quark mass m = 1.5 as a function of the chemical potential and for various temperatures.\nFIG. 7. ∆f at amq = 0.2 as a function of chemical potential and β the on a 6 3 × 4 lattice\nFIG. 8. Baryon mass from ∆E as a function of the quark mass amq, and contributions from different dual variables: monomers, dimers and baryon segments.\nFIG. 9. Baryon density for volume 4 3 × 8 in the full µB − mq plane, illustrating the strong quark mass dependence of the onset to nuclear matter.\nFIG. 10.Baryonic observables on various volumes in the first order region amq = 1.5.Vertical bands indicate the mean and error of the nuclear transition.\nFIG. 12. 
Left: Extrapolation of the pseudo-critical values of µB for the various volumes into the thermodynamic limit.Right: Critical baryon chemical potential for different quark masses.The first order transition region is shown in blue, the crossover region is shown in red and the range for critical end point is marked in black.\nFIG. 17. Nuclear interaction scaled with baryon mass.As the quark mass increases, it tends to zero.\nFIG. 18. Critical baryon chemical potential and baryon mass from different approaches.\nParameters for the Monte Carlo runs to determine the nuclear transition at strong coupling, with statistics after thermalization.\n\nabstract\n\nThe nuclear liquid-gas transition from a gas of hadrons to a nuclear phase cannot be determined numerically from conventional lattice QCD due to the severe sign problem at large values of the baryon chemical potential. In the strong coupling regime of lattice QCD with staggered quarks, the dual formulation is suitable to address the nuclear liquid gas transition.\nWe determine this first order transition at low temperatures and as a function of the quark mass and the inverse gauge coupling β. We also determine the baryon mass and discuss the nuclear interactions as a function of the quark mass, and compare to mean field results. It is known from experiments that at low temperatures, there is a phase transition between dilute hadron gas and dense nuclear matter as the baryon chemical potential increases.\nThis transition is of first order and terminates at about T c = 16 MeV in a critical end point. The value of the chemical potential µ 1st B at zero temperature is given roughly by the baryon mass m B , where the difference of µ 1st B −m B is due to nuclear interactions. For a review on nuclear interactions see .\nAs the nuclear force between baryons to form nuclear matter is due to the residual strong interactions between quarks and gluons, it should be accurately described by QCD. We choose to study the nuclear transition and nuclear interaction via lattice QCD , with its Lagrangian being a function of the quark mass and the inverse gauge coupling.\nIn order to understand the nature of the transition, it is helpful to study its dependence on these parameters. However, at finite baryon density, lattice QCD has the infamous sign problem which does not allow us to perform direct Monte Carlo simulations on the lattice. Various methods have been proposed to overcome the numerical sign problem, but they are either limited to µ B /T 3 or can not yet address full QCD in 3+1 dimensions in the whole µ B − T plane , in particular the nuclear transition is out of reach.\nAn alternative method is to study lattice QCD via the strong coupling expansion. There are two established effective theories for lattice QCD based on this: (1) the 3-dim. effective theory for Wilson fermions in terms of Polyakov loops, arising from a joint strong coupling and hopping parameter expansion , the dual representation for staggered fermions in 3+1 dimensions, with dual degrees of freedom describing mesons and baryons.\nBoth effective theories have their limitations: is limited to rather heavy quarks (but is valid for large values of β) whereas ( ) is limited to the strong coupling regime β 1 (but is valid for any quark mass). 
We study lattice QCD in the dual formulation, both at infinite bare gauge coupling, β = 0, and at leading order of the strong coupling expansion in the regime β < 1, which is far from the continuum limit.\nBut since strong coupling lattice QCD shares important features with QCD, such as confinement, and chiral symmetry breaking and its restoration at the chiral transition temperature, and a nuclear liquid gas transition, we may get insights into the mechanisms, in particular as the dual variables give more information in terms of its world lines, as compared to the usual fermion determinant that depends on the gauge variables.\nTo establish a region of overlap of both effective theories, we have chosen to perform the Monte Carlo simulations in the dual formulation extending to rather large quark masses. This paper is organized as follows: in the first part we explain the dual formulation in the strong coupling regime, in the second part we provide analytic results based on exact enumeration and mean field theory, in the third part we explain the setup of our Monte Carlo simulations and present result on the m q -and β-dependence of the nuclear transition.\nSince the strong coupling regime does not have a well defined lattice spacing, we also determine the baryon mass am B to set the parameters of the grand-canonical partition function, aT and aµ B , in units of am B . We conclude by discussing the resulting nuclear interactions, and compare our findings with other results.\n\nStaggered action of strong coupling QCD and its dual representation\n\nIn the strong coupling regime, the gauge integration is performed first, followed by the Grassmann integration to obtain a dual formulation. This was pioneered for the strong coupling limit in and has been extended by one of us to include gauge corrections . The sign problem is mild in the strong coupling limit and still under control for β < 1, where we can apply sign reweighting.\nThe dual degrees of freedom are color-singlet mesons and baryons, which are point-like in the strong coupling limit, and become extended about a lattice spacing by incorporating leading order gauge corrections. The partition function of lattice QCD is given by where DU is the Haar measure, U ∈ SU(3) are the gauge fields on the lattice links (x, μ) and { χx , χ x } are the unrooted staggered fermions at the lattice sites x.\nThe gauge action S G [U] is given by the Wilson plaquette action and the staggered fermion action S F [ χ, χ, U] is: where the gauge action depends on the inverse gauge coupling β = 2Nc g 2 and the fermion action depends on the quark chemical potential aµ q which favors quarks in the positive temporal direction, and the bare quark mass am q .\nFirst we consider the strong coupling limit where the inverse gauge coupling β=0 and hence the gauge action S G [U] drops out from the partition function in this limit. The gauge integration is over terms depending only on the individual links (x, μ) so the partition function factorizes into a product of one-link integrals and we can write it as:\nwith z(x, μ) the one-link gauge integral that can be eval-uated from invariant integration, as discussed in , where we write the one-link integral in terms of new hadronic variables: Only terms of the form (M (x)M (y)) k x, μ (with k x,μ called dimers which count the number of meson hoppings) and B(y)B(x) and B(x)B(y) (called baryon links) are present in the solution of the one-link integral.\nThe sites x and y = x + μ are adjacent lattice sites. 
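(The explicit formula is missing from this extraction. A hedged reconstruction, in the form commonly used in the dual-formulation literature, with \ell_{x,\hat{\nu}} the baryon link occupation, is

n_x + \sum_{\hat{\nu}} \left( k_{x,\hat{\nu}} + \frac{N_c}{2}\,|\ell_{x,\hat{\nu}}| \right) = N_c \qquad \text{for all } x,

with the sum running over all links attached to the site x,)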
It remains to perform the Grassmann integral of the fermion fields χ, χ. This requires to expand the exponential containing the quark mass in Eq. (4) (left), which results in the terms (2am q M (x)) nx (with n x called monomers). To obtain non-vanishing results, at every site, the 2N c Grassman variables χ x,i and χx,i have to appear exactly once, resulting in the Grassmann constraint (GC):\nwhere n x is the number of monomers, k x,μ is the number of dimers and the baryons form self-avoiding loops x,μ , which due to the constraint cannot coexist with monomers or dimers. With this, we obtain an exact rewriting of the partition function Eq. ( ) for N c = 3, in terms of integer-valued dual degrees of freedom {n, k, }:\nwhere the sum over valid configurations has to respect the constraint (GC). The first term in the partition function is the contribution from dimers and the second term is the contribution from monomers. The weight factor w( ) for each baryon loop depends on the baryon chemical potential µ B = 3µ q and induces a sign factor σ( ) which depends on the geometry of :\nHere, ω is the winding number of the loop . The total sign factor σ( ) ∈ {±1} is explicitly calculated for every configuration. We apply sign reweighting as the dual formulation has a mild sign problem: baryons are non-relativistic and usually have loop geometries that have a positive signs. The dual partition function of the strong coupling limit is simulated with the worm algorithm (see Section III A) and the sign problem is essentially solved in this limit.\n\nExtension to finite β\n\nThe leading order gauge corrections O(β) to the strong coupling limit are obtained by expanding the Wilson gauge action Eq. ( ) before integrating out the gauge links. A formal expression is obtained by changing the order of integration (first gauge links, then Grassmann-valued fermions) within the QCD partition function:\nWith this the O (β) partition function is The challenge in computing Z (1) is to address the SU(N c ) integrals that receive contributions from the elementary plaquette U P . Link integration no longer factorizes, however the tr[U P ] can be decomposed before integration: Integrals of the type J ij with two open color indices -as compared to link integration at strong coupling -have been derived from generating functions\nfor either J = 0 or for G = U(N c ) . The SU(3) result was discussed in , in terms of the dual variables, neglecting rotation and reflection symmetries, there are 19 distinct diagrams to be considered. The resulting partition function, valid to O(β), is with q P ∈ {0, ±1}, and the site weights w x → ŵx , bond weights w b → ŵb and baryon loop weights w → ŵ receive modifications compared to the strong coupling limit Eq. ( ) for sites and bonds adjacent to an excited plaquette q P = 1.\nThe weights are given in , and are rederived for any gauge group in . The configurations {n, k, , q p } must satisfy at each site x the constraint inherited from Grassmann integration: which is the modified version of Eq. ( ) with q x = 1 if located at the corner of an excited plaquette q p = 0, otherwise q x = 0.\nA more general expression that we obtained via group theory and is valid to higher orders of the strong coupling expansion is discussed in terms of tensor networks . A typical 2-dimensional configuration that arises at β = 1 in the Monte Carlo simulations is given in Fig. . 
Note that if a baryon loop enters a non-trivial plaquette, one quark is separated from the two other quarks, resulting in the baryon being extended object, rather being point-like in the strong coupling limit.\nThe O(β) partition function has been used in the chiral limit to study the full µ B − T plane via reweighting from the strong coupling ensemble. Whereas the second order chiral transition for small values of the aµ B decreased up to the tri-critical point, the first order nuclear transition was invariant: aµ 1st B 1.78(1) at zero temperature has no β-dependence.\nFor the ratio T (µ B = 0)/µ 1st B (T 0) we found the values 0.787 for β = 0 and 0.529 β = 1, which should be compared to T c / 0.165 for full QCD . However, since reweighting cannot be fully trusted across a first order boundary, direct simulations at nonzero β are necessary. The Monte Carlo technique to update plaquette variables is discussed in Section III A.\nIn this section, we provide analytic results from exact enumeration for small volumes, and mean field results based on the 1/d expansion, valid in the thermodynamic limit. The main purpose is to compare our Monte Carlo results to these analytic predictions.\n\nExact enumeration\n\nTo establish that our Monte Carlo simulations indeed sample the partition functions Eq. ( ) and Eq. ( ), we have obtained analytic results on a 2 4 volume at strong coupling, and at finite beta in two dimensions on a 4 × 4 volume, comparing O (β) and O β 2 truncations. Our strategy to obtain an exact enumeration of the partition function Z is to enumerate plaquette configurations first, then fixing the fermion fluxes which together with the gauge fluxes that are induced by the plaquettes form a singlet, a triplet or anti-triplet, i.e. on a given bond b, g b + f b ∈ {−3, 0, 3}, and last we perform the monomerdimer enumeration on the available sites not saturated by fermions yet by a depth-first algorithm .\nAt strong coupling, with no plaquettes, g b = 0 and f b are baryonic fluxes. All observables that can be written in terms of derivatives of log(z), such as the baryon density, the chiral condensate, the energy density, and also the average sign, are shown in Fig.\n\nExpectations from mean field theory\n\nAnother analytical method to study strong coupling lattice QCD is the mean field approach, where the partition function is expanded in 1 d (d is the spatial dimension) and then a Hubbard-Stratonovich transformation performed . After this procedure, the free energy is a function of temperature T , the chiral condensate σ and chemical potential µ B :\nhere E[m] is one-dimensional quark excitation energy which is a function of the quark mass m = am q . For N c = 3 and d = 3 we determined the minimum of the free energy with respect to the chiral condensate. This gives us the equilibrium chiral condensate as a function of (T, m, µ B ). The chiral condensate and the baryon density as a function of the baryon chemical potential in lattice units aµ B and for various temperatures at quark mass m = 1.5 is shown in Fig. . We have determined the critical temperature to be aT c = 0.23 , which is characterized by an infinite slope of the chiral condensate.\nFor lower temperatures, there is a clear discontinuity of the chiral con-densate, separating the low density phase from the high density phase. 
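The minimization itself is numerically straightforward; the sketch below illustrates the procedure, with the explicit 1/d-expanded free-energy expression left as a placeholder since it is not reproduced in this text. The function and parameter names are ours, not the original mean-field code.

from scipy.optimize import minimize_scalar

def free_energy(sigma, T, m, muB):
    # Placeholder: insert the 1/d-expanded mean-field free energy F(T, sigma, muB; m) here.
    raise NotImplementedError

def equilibrium_condensate(T, m, muB, sigma_max=10.0):
    # Equilibrium chiral condensate = minimizer of the free energy at fixed (T, m, muB).
    res = minimize_scalar(free_energy, bounds=(0.0, sigma_max),
                          args=(T, m, muB), method="bounded")
    return res.x

# Scanning muB at fixed T, a jump of the minimizer signals the first-order
# transition; the temperature at which the jump disappears locates the critical
# end point (aT_c = 0.23 at m = 1.5 according to the text).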
For temperatures above and in the vicinity of aT_c, the chiral condensate and the baryon density have no discontinuity but change rapidly, corresponding to a crossover transition.\nWith this method, the phase diagram is plotted for different quark masses in Fig. . The second order phase transition in the chiral limit is plotted as a solid blue line, the dotted lines show the first order phase transition for different quark masses, and the solid red line indicates the critical end point for the different quark masses.\nMean field theory also gives an expression for the pion mass am_π and the baryon mass am_B. The mean field baryon mass for N_c = 3, d = 3 is also plotted in red in Fig. . Whereas the baryon mass is around N_c in the chiral limit (am_B ≈ 3.12 for N_c = 3), it approximately doubles at m = 3.5 (am_B ≈ 6.28), which corresponds to the pion mass am_π = 4.45, i.e. m_π/m_B = 0.708.\nHence, at around bare mass m = 3.5, the valence quark mass of the baryon corresponds roughly to 1/3 of the chiral limit value of the baryon mass. The first Monte Carlo simulations that could extend into the µ_B − T plane were based on the MDP algorithm , but it required the introduction of the worm algorithm to make substantial progress.\nFirst studies of the worm algorithm applied to strong coupling limit QCD (with gauge group U(3)) are found in , and for gauge group SU(3) in . Monte Carlo simulations extending the worm to incorporate leading order corrections were first proposed in . We will shortly review the setup of our Monte Carlo strategy for the nuclear transition, with an emphasis on the challenges in addressing large quark masses.\n\nStrong Coupling\n\nWithout any further resummation, there is a mild sign problem in the dual formulation of lattice QCD in the strong coupling limit. When the average sign σ is not too small (close to zero), it implies that most of the configurations have a positive weight, thus allowing us to perform sign reweighting strategies.\nIn Fig. , ∆f is plotted as a function of the baryon chemical potential and the quark mass. It is seen that ∆f is close to zero for most cases except near the critical chemical potential and for small quark masses, but never exceeds 5 × 10^{-4}. Hence sign reweighting can be performed in the full parameter space.\nThe result that the sign problem becomes even milder when increasing the mass is related to the fact that larger critical chemical potentials result in a larger fraction of static baryons (spatial baryon hoppings become rare).\n[FIG.: ∆f at strong coupling as a function of chemical potential and quark mass on a 6^3 × 8 lattice. The sign problem becomes milder as the quark mass increases.]\n\nFinite β\n\nAll runs at finite β have been obtained for N_τ = 4, which corresponds to a moderately low temperature aT = 0.25 compared to the value of the chiral transition aT ≈ 1.54. Those simulations were too expensive to attempt N_τ = 8 runs, in particular as higher statistics were required. The spatial volumes are 4^3, 6^3 and 8^3.\nThe β values range from 0.0 to 1.0 with step size 0.1, and the am_q values from 0.00 to 1.00 with step size 0.01. The values of aµ were chosen close to the nuclear transition; the scanning range is shifted to larger values as am_q increases.
At small quark masses the scanning range is from aµ = 0.4 to 1.0, and for the large quark masses it is from 0.6 to 1.2, with step size 0.01.\nThe statistics used are 15 × 10^4 measurements, with 40 × N_s^3 worm updates between measurements.\n\nResidual sign problem\n\nAlthough it is possible to resum the sign problem at strong coupling with a resummation of baryon and pion world lines, this is not possible when including gauge corrections. In order to compare both sign problems, we kept the original dual formulation to monitor the severity of the sign problem. This is done via the relation\nbetween the average sign σ and the difference ∆f of the free energy density between the full ensemble f and the sign-quenched ensemble f_{||}.\n\nNuclear interactions\n\nWe have found that aµ_B^{1st} is very different from the baryon mass. This must be due to strong attractive interactions of nucleons. In contrast to continuum physics, in the strong coupling limit there is no pion exchange, due to the Grassmann constraint. Instead, nucleons are point-like and hard-core repulsive.\nHowever, the pion bath, which is modified by the presence of static baryons, results in an attractive interaction. In , this has been analyzed in the chiral limit using the snake algorithm, and it has been found that the attractive force is of entropic origin. Here, we do not quantify the nuclear interaction via the nuclear potential, but via the difference between the critical baryon chemical potential and the baryon mass, in units of the baryon mass, as shown in Fig. , given am_B as measured in Section III C.\nThis compares better to the 3-dim. effective theory. The nuclear interaction is maximal, more than 40%, in the chiral limit, which is related to the pions being massless: the modification of the pion bath is maximal. We clearly find that the nuclear interaction decreases drastically and almost linearly until it approaches zero at about am_q = 2.0, corresponding to a pion mass am_π = 3.36, see Section II B. The large error bars at larger quark masses, which are due to the subtraction of numbers of almost equal magnitude, make it difficult to extract a non-zero nuclear interaction at the largest quark masses.\nIn this work, we have determined the baryon mass and the nuclear transition via Monte Carlo: the worm algorithm based on the dual formulation, equipped with additional updates at finite β. All those numerical results and various analytic expressions are summarized in Fig. . We find that as the quark mass becomes large, spatial meson hoppings (i.e. spatial dimers) become rare, which makes this 3+1-dimensional system closer to 1-dim. QCD . Also, both the baryon mass and the baryon chemical potential obtained in our dual representation, i.e. for staggered fermions, approach the baryon mass of the 3-dim. effective theory, which is based on Wilson fermions.\nAnother comparison, which summarizes the validity of the mean field approach discussed in Section II B, is shown in Fig. . It is evident that mean field theory has strong deviations for small quark masses, but this discrepancy becomes smaller for larger quark masses. The extension of the study of the nuclear transition to finite inverse gauge coupling β is summarized in Fig. , which shows the β-dependence of aµ_B^c for various quark masses.\nFor all quark masses ranging from am_q = 0 to am_q = 1.0, there is only a very weak β-dependence, confirming the expectation from mean field theory . This work was restricted to isotropic lattices, ξ = a/a_t = 1, i.e.
we performed simulations at fixed temperature. Non-isotropic lattices are necessary to vary the temperature at fixed values of β.\nThis requires including two bare anisotropies, γ for the fermionic action and γ_G for the gauge action. Finite β has only been studied by us in the chiral limit . Clearly, it is interesting to study the location of the nuclear critical point also including higher order gauge corrections and at finite quark mass.\nSimulations including O(β^2) are under preparation.", "answers": ["Nuclear liquid-gas transition in lattice QCD."], "length": 4017, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "4d6cd243b10a8460d2e2239182b797420ccc36335a74d23e"} {"input": "How is the function beta(r) determined in the derivation?", "context": "\\section{Introduction}\nThe Schwarzschild solution plays a key role in teaching about general relativity: It describes the simplest version of a black hole. By Birkhoff's theorem, it more generally describes the gravitational field around any spherical mass distribution, such as the Sun in our own Solar system. As one of two particularly simple, yet physically relevant examples of a non-trivial metric (the other being the FLRW spacetime of an expanding universe), it is particularly well-suited for teaching about general techniques of ``reading'' and interpreting a spacetime metric.\n\nConsider undergraduate courses where students are introduced to selected concepts and results from general relativity without exposing them to the full mathematical formalism. Such courses have the advantage of introducing students to one of the two great fundamental theories of 20th century physics early on (the other being quantum mechanics); they also profit from subject matter that meets with considerable interest from students.\\cite{Hartle2006} Using the terminology of Christensen and Moore,\\cite{Christensen2012} in the ``calculus only'' approach pioneered by Taylor and Wheeler,\\cite{Taylor2001,Taylor2018} spacetime metrics are not derived, but taken as given, and the focus is on learning how to interpret a given spacetime metric. Similar presentations can be found in the first part of the ``physics first'' approach exemplified by Hartle's text book,\\cite{Hartle2003} where the concepts of the metric and of geodesics are introduced early on, and their physical consequences explored, while the mathematics necessary for the Einstein equations is only introduced at a later stage. \n\nWhenever the approach involves an exploration of simple metrics such as the Schwarzschild solution, but stops short of the formalism required for the full tensorial form of Einstein's equations, access to a simple derivation of the Schwarzschild solution that does not make use of the advanced formalism can be a considerable advantage.\n\nSimplified derivations of the Schwarzschild solution have a long tradition within general relativity education,\\cite{Schiff1960,Harwit1973} although specific simplifications have met with criticism.\\cite{Rindler1968} This article presents a derivation which requires no deeper knowledge of the formalism of differential geometry beyond an understanding of how to interpret a given spacetime metric $\\mathrm{d} s^2$.
The derivation avoids the criticism levelled at attempts to derive the Schwarzschild solution from the Einstein equivalence principle in combination with a Newtonian limit,\\cite{Gruber1988} relying as it does on a simplified version of the vacuum Einstein equation.\n\nMore specifically, I combine the restrictions imposed by the symmetry with the simple form of Einstein's equations formulated by Baez and Bunn.\\cite{BaezBunn2005} That same strategy was followed by Kassner in 2017,\\cite{Kassner2017} but in this text, I use the ``infalling coordinates'' that are commonly associated with the Gullstrand-Painlev\\'e form of the Schwarzschild metric,\\cite{Martel2001,Visser2005,HamiltonLisle2008} not the more common Schwarzschild coordinates. That choice simplifies the argument even further. In the end, what is required is no more than the solution of an ordinary differential equation for a single function, which yields to standard methods, to obtain the desired result.\n\n\\section{Coordinates adapted to spherical symmetry and staticity}\n\\label{SymmetriesCoordinates}\n\nAssume that the spacetime we are interested in is spherically symmetric and static. In general relativity, a symmetry amounts to the possibility of being able to choose coordinates that are adapted to the symmetry, at least within a restricted sub-region of the spacetime in question. That the spacetime is static is taken to mean that we can introduce a (non-unique) time coordinate ${t}$ so that our description of spacetime geometry does not depend explicitly on ${t}$, and that space and time are completely separate --- in the coordinates adapted to the symmetry, there are no ``mixed terms'' involving $\\mathrm{d} {t}$ times the differential of a space coordinate in the metric. If we use ${t}$ to slice our spacetime into three-dimensional hyperplanes, each corresponding to ``space at time ${t}$,'' then each of those 3-spaces has the same spatial geometry. A mixed term would indicate that those slices of space would need to be shifted relative to another in order to identify corresponding points. The mixed term's absence indicates that in adapted coordinates, there is no need for such an extra shift. In those coordinates, we can talk about the 3-spaces as just ``space,'' without the need for specifying which of the slices we are referring to.\n\nIn the case of spherical symmetry, we can introduce spherical coordinates that are adapted to the symmetry: a radial coordinate $r$ and the usual angular coordinates $\\vartheta,\\varphi$, so that the spherical shell at constant $r$ has the total area $4\\pi r^2$. 
In consequence, the part of our metric involving $\\mathrm{d}\\vartheta$ and $\\mathrm{d}\\varphi$ will have the standard form\n\\begin{equation}\nr^2(\\mathrm{d}\\vartheta^2+\\sin^2\\theta\\mathrm{d}\\varphi^2) \\equiv r^2\\mathrm{d}\\Omega^2,\n\\end{equation}\nwhere the right-hand side defines $\\mathrm{d}\\Omega^2$, the infinitesimal solid angle corresponding to each particular combination of $\\mathrm{d}\\vartheta$ and $\\mathrm{d}\\varphi$.\n\nThe radial coordinate slices space into spherical shells, each corresponding to a particular value $r=const.$ The rotations around the origin, which are the symmetry transformations of spherical symmetry, map each of those spherical shells onto itself, and they leave all physical quantities that do not explicitly depend on $\\vartheta$ or $\\varphi$ invariant.\n\nIn what follows, we will use the basic structures introduced in this way --- the slices of simultaneous ${t}$, the radial directions within each slice, the angular coordinates spanning the symmetry--adapted spherical shells of area $4\\pi r^2$ --- as auxiliary structures for introducing spacetime coordinates. For now, let us write down the shape that our metric has by simple virtue of the spherical symmetry, the requirement that the spacetime be static, and the adapted coordinates, namely\n\\begin{equation}\n\\mathrm{d} s^2 = -c^2F(r) \\mathrm{d} {t}^2 + G(r) \\mathrm{d} r^2 + r^2\\:\\mathrm{d}\\Omega^2. \n\\label{StaticForm}\n\\end{equation}\nStudents familiar with ``reading'' a spacetime metric will immediately recognize the sign difference between the parts describing space and describing time that is characteristic for spacetime, and the speed of light $c$ that gives us the correct physical dimensions. That there is no explicit dependence on $\\varphi$ and $\\vartheta$ in the remaining functions $F$ and $G$ is a direct consequence of spherical symmetry. That the factor in front of $\\mathrm{d}\\Omega^2$ is $r^2$ is a consequence of our coordinate choice, with spherical angular coordinates so that the area of a spherical surface of constant radius $r$ is $4\\pi r^2$. That there is no explicit dependence on ${t}$ is one consequence of the spacetime being static; the absence of the mixed term $\\mathrm{d} {t}\\cdot \\mathrm{d} r$ is another. We are left with two unknown functions $F(r)$ and $G(r)$. In the following, let us call ${t}$ and $r$ the {\\em static coordinates}. \n \nNote that, since $G(r)$ is as yet undefined, we have not yet chosen a specific physical meaning for the length measurements associated with our $r$ coordinate. But because of the $\\mathrm{d}\\Omega^2$ part, it is clear that whatever choice we make, the locally orthogonal lengths $r\\cdot\\mathrm{d}\\vartheta$ and $r\\cdot\\sin\\vartheta\\cdot\\mathrm{d}\\varphi$ will have the same physical interpretation as for the length measurement corresponding to $\\mathrm{d} r$.\n\n\\section{Infalling observer coordinates}\n\\label{Sec:InfallingObservers}\n\nNow that we know what the radial directions are, at each moment of time ${t}$, we follow Visser\\cite{Visser2005} as well as Hamilton and Lisle\\cite{HamiltonLisle2008} in defining a family of radially infalling observers. 
Observers in that family are in free fall along the radial direction, starting out at rest at infinity: In mapping each observer's radial progression in terms of the static coordinate time ${t}$, we adjust initial conditions, specifically: the choice of initial speed at some fixed time ${t}$, in just the right way that the radial coordinate speed goes to zero for each observer in the same way as $r\\to\\infty.$\n\nIt is true that talking about ``infalling'' observers already reflects our expectation that our solution should describe the spacetime of a spherically symmetric mass. As we know from the Newtonian limit, such a mass attracts test particles in its vicinity. It should be noted, though, that all our calculations would also be compatible with the limit of no mass being present. In that case, ``infalling'' would be a misnomer, as our family of observers would merely hover in empty space at unchanging positions in $r$. \n\nWe can imagine infinitesimal local coordinate systems associated with our observers --- think of the observer mapping out space and time by defining three orthogonal axes, and by measuring time with a co-moving clock. We assume all such little coordinate systems to be non-rotating --- otherwise, we would break spherical symmetry, since rotation would locally pick out a plane of rotation that is distinguishable from the other planes. The radial direction is a natural choice for the first space axis of those little free-falling systems. The other directions, we take to point to observers falling side by side with our coordinate-defining observer --- and to remain pointed at a specific such other observer, once the choice of direction is made.\n\nWe assume our infalling observers' clocks to be synchronised at some fixed radius value $r$. By spherical symmetry, those clocks should then be synchronised at {\\em all} values of $r$. Anything else would indicate direction-dependent differences for the infalling observers and their clocks, after all. Hence, at any given static time ${t}$, all the infalling observers who are at radius value $r$ show the same proper time $T$ on the ideal clocks travelling along with them. \n\nOnce our definition is complete, our static, spherically symmetric spacetime is filled with infalling observers from that family: Whenever we consider an event $\\cal E$, there will be an observer from that family passing by at that time, at that location. \n\nNow, consider the coordinate speed of those infalling observers. If we position ourselves at some constant radius value $r$ and watch the falling observers fly by, then we can express both their proper time rate and their coordinate speed in the $r$ direction in terms of $r$ and ${t}$. We can combine the two pieces of information to obtain the rate of change in radial position $r$ with proper time $T$ for those infalling observers. But since the initial conditions for those observers are the same, and since our spacetime is, by assumption, static, the resulting function can only depend on $r$, and not explicitly on ${t}$. Let us rescale that function with the speed of light to make it dimensionless, give it an overall minus sign to make it positive for infalling particles, and call it $\\beta(r)$,\n\\begin{equation}\n\\beta(r)\\equiv -\\frac{1}{c}\\frac{\\mathrm{d} r}{\\mathrm{d} T}(r).\n\\label{betaDefinition}\n\\end{equation}\n\nRecall from section \\ref{SymmetriesCoordinates} that we also still have the freedom to decide on the physical meaning of $r$. 
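Although $\beta(r)$ has not been determined at this point, it may help to see the definition (\ref{betaDefinition}) in action. The following numerical sketch is an addition to the text rather than part of the derivation: it anticipates the result $\beta(r)=\sqrt{2GM/(rc^2)}$ obtained further below, works in units with $G=M=c=1$, and simply integrates $\mathrm{d} r/\mathrm{d} T=-c\,\beta(r)$ for one member of the infalling family, comparing the outcome with the closed-form solution of that ordinary differential equation.

\begin{verbatim}
# Integrate dr/dT = -beta(r) (with c = 1) for beta(r) = sqrt(2/r), i.e. G = M = 1,
# and compare with the closed-form solution of the same ODE.
import numpy as np
from scipy.integrate import solve_ivp

def beta(r):
    return np.sqrt(2.0 / r)

sol = solve_ivp(lambda T, r: -beta(r), (0.0, 12.0), [10.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

T = np.linspace(0.0, 12.0, 7)
r_exact = (10.0**1.5 - 1.5 * np.sqrt(2.0) * T)**(2.0 / 3.0)  # solves dr/dT = -sqrt(2/r)
print(np.max(np.abs(sol.sol(T)[0] - r_exact)))               # tiny difference: both agree
\end{verbatim}

The sketch only fixes how fast the observers fall once $\beta(r)$ is known; what is still open is the physical meaning of $\mathrm{d} r$ itself.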
We make the choice of making $\\mathrm{d} r$ the physical length measured by one of our infalling observers at the relevant location in spacetime, at constant time $T$. Via our angular coordinates, that implies that length measurements orthogonal to the radial direction, $r\\cdot\\mathrm{d}\\vartheta$ and $r\\cdot\\sin\\vartheta\\:\\mathrm{d}\\varphi$ inherit the same physical interpretation.\n\nAs a next step, we transform our metric (\\ref{StaticForm}) from the static form into the form appropriate for our coordinate choice $r$ and $T$. We do so by writing the static time coordinate as a function ${t}(T,r)$ in terms of infalling observer time and radius value. In consequence,\n\\begin{equation}\n\\mathrm{d} {t} = \\frac{\\partial{t}}{\\partial T}\\cdot\\mathrm{d} T+ \\frac{\\partial {t}}{\\partial r}\\cdot\\mathrm{d} r,\n\\end{equation}\nand our new metric now has the form\n\\begin{align}\n \\mathrm{d} s^2 = {} & -c^2 F(r)\\left(\\frac{\\partial t}{\\partial T}\\right)^2\\mathrm{d} T^2 \\nonumber \\\\[0.2em]\n & -2c^2F(r)\\left(\\frac{\\partial t}{\\partial T}\\right)\\left(\\frac{\\partial t}{\\partial r}\\right)\\mathrm{d} T\\:\\mathrm{d} r \\nonumber \\\\[0.2em]\n & +\\left[G(r)-c^2F(r)\\left(\\frac{\\partial t}{\\partial r}\\right)^2\\right]\\mathrm{d} r^2+r^2\\:\\mathrm{d}\\Omega^2.\n \\end{align}\nAt face value, this looks like we are moving the wrong way, away from simplification, since we now have more functions, and they depend on two variables instead of one.\n\nBut in fact, this new formulation paves the way for an even simpler form of the metric. Consider a specific event, which happens at given radius value $r$. In a small region around that event, we will introduce a new coordinate $\\bar{r}$ to parametrize the radial direction. We want this coordinate to be co-moving with our infalling observers at $r$; each such observer then has a position $\\bar{r}=const.$ that does not change over time. \n\nKey to our next step is that we {\\em know} the metric for the local length and time measurements made by any one of our free-falling observers. By Einstein's equivalence principle, the metric is that of special relativity. Locally, namely whenever tidal effects can be neglected, spacetime geometry for any non-rotating observer in free fall is indistinguishable from Minkowski spacetime as described by a local inertial system.\n\nSince we have chosen both the time coordinate $T$ and the physical meaning of the radial coordinate $r$ so as to conform with the measurements of the local infalling observer, the transformation between $\\bar{r}$ and $r$ is particularly simple: It has the form of a Galilei transformation\n\\begin{equation}\n\\mathrm{d}\\bar{r}= \\mathrm{d} r + \\beta(r)c\\:\\mathrm{d} T.\n\\label{barRshift}\n\\end{equation}\nIn that way, as it should be by definition, radial coordinate differences at constant $T$ are the same in both systems, while for an observer at constant $\\bar{r},$ with $\\mathrm{d} \\bar{r}=0$, the relation between $\\mathrm{d} r$ and $\\mathrm{d} T$ is consistent with the definition of the function $\\beta(r)$ in (\\ref{betaDefinition}).\n\nAre you surprised that this is not a Lorentz transformation, as one might expect from special relativity? Don't be. We are not transforming from one local inertial coordinate system to another. The $T$ is already the time coordinate of the infalling observers, so both coordinate systems have the same definition of simultaneity, and time dilation plays no role in this particular transformation. 
Also, we have chosen $r$ intervals to correspond to length measurements of the infalling observers, so there is no Lorentz contraction, either. It is the consequence of these special choices that gives the relation (\\ref{barRshift}) its simple form.\n\nLast but not least, when we analyse specifically an infinitesimal neighbourhood of the point $r,\\vartheta,\\varphi$, let us make the choice that directly at our point of interest, we make $\\bar{r}$ coincide with $r$. Since before, we had only fixed the differential $\\mathrm{d} \\bar{r}$, we do have the remaining freedom of choosing a constant offset for $\\bar{r}$ that yields the desired result.\n\nBy Einstein's equivalence principle, the metric in terms of the locally co-moving coordinates $T,\\bar{r},\\vartheta,\\varphi$ is the spherical-coordinate version of the Minkowski metric,\n\\begin{equation}\n\\mathrm{d} s^2 = -c^2\\mathrm{d} T^2 + \\mathrm{d}\\bar{r}^2 + \\bar{r}^2\\mathrm{d}\\Omega.\n\\end{equation}\nThis version can, of course, be obtained by taking the more familiar Cartesian-coordinate version\n\\begin{equation}\n\\mathrm{d} s^2=-c^2\\mathrm{d} T^2 + \\mathrm{d} X^2 + \\mathrm{d} Y^2 + \\mathrm{d} Z^2,\n\\label{CartesianMinkowski}\n\\end{equation}\napplying the definition of Cartesian coordinates $X,Y,Z$ in terms of spherical coordinates $\\bar{r},\\vartheta,\\varphi$\n\\begin{equation}\nx= \\bar{r}\\:\\sin\\vartheta\\:\\cos\\varphi, \\;\\;\ny= \\bar{r}\\:\\sin\\vartheta\\:\\sin\\varphi, \\;\\;\nz= \\bar{r}\\:\\cos\\vartheta,\n\\end{equation}\nto express $\\mathrm{d} X, \\mathrm{d} Y, \\mathrm{d} Z$ in terms of $\\mathrm{d} \\bar{r}, \\mathrm{d}\\vartheta, \\mathrm{d}\\varphi$, and substitute the result into (\\ref{CartesianMinkowski}).\n\nBy noting that we have chosen $\\bar{r}$ so that, at the specific spacetime event where we are evaluating the metric, $\\bar{r}=r$, while, for small radial coordinate shifts around that location, we have the relation (\\ref{barRshift}), we can now write down the same metric in the coordinates $T, r, \\vartheta,\\varphi$, namely as\n\\begin{equation}\n\\mathrm{d} s^2 = -c^2\\left[\n1-\\beta(r)^2\n\\right] \\mathrm{d} T^2+2c\\beta(r)\\mathrm{d} r\\:\\mathrm{d} T\n+\\mathrm{d} r^2+r^2\\mathrm{d}\\Omega^2.\n\\label{preMetric}\n\\end{equation}\nSince we can repeat that local procedure at any event in our spacetime, this result is our general form of the metric, for all values of $r$. This, then is the promised simplification: By exploiting the symmetries of our solutions as well as the properties of infalling observers, we have reduced our metric to a simple form with no more than one unknown function of one variable, namely $\\beta(r)$.\n\nSo far, what I have presented is no more than a long-form version of the initial steps of the derivation given by Visser in his heuristic derivation of the Schwarzschild metric.\\cite{Visser2005} In the next section, we will deviate from Visser's derivation.\n\n\\section{$\\beta(r)$ from tidal deformations}\n\\label{TidalSection}\n\nIn the previous section, we had exploited symmetries and Einstein's equivalence principle. In order to determine $\\beta(r)$, we need to bring in additional information, namely the Einstein equations, which link the matter content with the geometry of spacetime. For our solution, we only aim to describe the spacetime metric outside whatever spherically-symmetric matter distribution resides in (or around) the center of our spherical symmetry. 
That amounts to applying the {\\em vacuum Einstein equations}.\n\nMore specifically, we use a particularly simple and intuitive form of the vacuum Einstein equations, which can be found in a seminal article by Baez and Bunn:\\cite{BaezBunn2005} Consider a locally flat free-fall system around a specific event $\\cal E$, with a time coordinate $\\tau$, local proper time, where the event we are studying corresponds to $\\tau=0$. In that system, describe a small sphere of freely floating test particles, which we shall call a {\\em test ball}. The particles need to be at rest relative to each other at $\\tau=0$. Let the volume of the test ball be $V(\\tau)$. Then the vacuum version of Einstein's equations states that\n\\begin{equation}\n\\left.\\frac{\\mathrm{d}^2 V}{\\mathrm{d}\\tau^2}\\right|_{\\tau=0} = 0.\n\\label{EinsteinVacuum}\n\\end{equation}\nIn words: If there is no matter or energy inside, the volume of such a test ball remains constant in the first order (those were our initial conditions) and the second order (by eq.~[\\ref{EinsteinVacuum}]). \n\nIf you are familiar with Wheeler's brief summary of Einstein's equations, ``spacetime grips mass, telling it how to move'' and ``mass grips spacetime, telling it how to curve'',\\cite{Wheeler1990} you will immediately recognise that this is a specific way for the structure of spacetime telling the test ball particles how to move. The calculation later in this section provides the second part: It will amount to using (\\ref{EinsteinVacuum}) to determine the structure of spacetime, namely the still missing function $\\beta(r)$, and that is the way for mass, in this case: for the absence of mass, to tell spacetime how to curve.\n\nNote that equation (\\ref{EinsteinVacuum}) also holds true in Newtonian gravity. So in a way, this version of Einstein's equation can be seen as a second-order extension of the usual Einstein equivalence principle: Ordinarily, the equivalence principle is a statement about physics in the absence of tidal forces. Equation (\\ref{EinsteinVacuum}) adds to this that the lowest-order correction for tidal forces in a freely falling reference frame is that specified by Newtonian gravity. This makes sense, since by going into a free-fall frame, and restricting our attention to a small spacetime region, we have automatically created a weak-gravity situation. In such a situation, tidal corrections are approximately the same as those described by Newton. This argument can serve as a heuristic justification of (\\ref{EinsteinVacuum}).\n\nIn 2017, Kassner made use of the Baez-Bunn form of Einstein's vacuum equation to derive the Schwarzschild solution, starting from what we have encountered as the static form of the metric (\\ref{StaticForm}).\\cite{Kassner2017} We follow the same general recipe, but using the infalling coordinates introduced in section \\ref{Sec:InfallingObservers}, which makes our derivation even simpler.\n\nConsider five test particles in a small region of space. Let the motion of each be the same as for the local representative from our coordinate-defining family of infalling observers. We take the central particle $C$ to be at radial coordinate value $r=R$ at the time of the snapshot shown in Fig.~\\ref{TestParticlesOutside}. 
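As a brief aside, the remark above that Eq.~(\ref{EinsteinVacuum}) also holds in Newtonian gravity can be checked directly. The following short sketch is an addition to the text (units with $G=M=1$ are assumed, and the reference point is placed at $r=3$): it evaluates the Newtonian tidal matrix of $\Phi=-GM/r$ by finite differences, showing the radial stretching and transverse squeezing of a small ball of initially resting test particles, and that these compensate so that the initial volume acceleration vanishes.

\begin{verbatim}
# Tidal matrix -d^2(Phi)/dx_i dx_j of Phi = -1/r, evaluated at x0 = (3, 0, 0).
import numpy as np

def phi(x):
    return -1.0 / np.linalg.norm(x)

def hessian(f, x, h=1e-4):
    H = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            ei, ej = np.eye(3)[i] * h, np.eye(3)[j] * h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

tidal = -hessian(phi, np.array([3.0, 0.0, 0.0]))
print(np.round(tidal, 6))  # approx diag(+2/27, -1/27, -1/27): radial stretch, transverse squeeze
print(np.trace(tidal))     # approx 0: the initial volume acceleration of the test ball vanishes
\end{verbatim}

With this check in place, we return to the five test particles around the central particle $C$.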
The other four are offset relative to the central particle: As described in the local inertial system that is co-moving with the central particle, one of the particles is shifted by $\Delta l$ upwards in the radial direction, another downward, while two of the particles are offset orthogonally by the same distance.\n\begin{figure}[htbp]\n\begin{center}\n\includegraphics[width=0.5\linewidth]{01-free-fall-particles.pdf}\n\caption{Five test particles in our spherically-symmetric spacetime}\n\label{TestParticlesOutside}\n\end{center}\n\end{figure}\nThe $\Delta l$ is meant to be infinitesimally small, so while Fig.~\ref{TestParticlesOutside} is of course showing a rather large $\Delta l$ so as to display the geometry of the situation more clearly, we will in the following only keep terms linear in $\Delta l$. \n\nConsider a generic particle, which moves as if it were part of our coordinate-defining family of infalling observers, and which at the time $T_0$ is at $r=r_0$. By a Taylor expansion, that particle's subsequent movement is given by\n\begin{equation}\nr(T) = r_0 + \frac{\mathrm{d} r}{\mathrm{d} T}(T_0) \cdot \Delta T +\frac12 \frac{\mathrm{d}^2 r}{\mathrm{d} T^2}(T_0) \cdot \Delta T^2\n\label{TaylorREvo}\n\end{equation}\nwhere $\Delta T\equiv T-T_0$. We know from (\ref{betaDefinition}) that the derivative in the linear term can be expressed in terms of $\beta(r)$; by the same token,\n\begin{equation}\n\frac{\mathrm{d}^2 r}{\mathrm{d} T^2} = -c\frac{\mathrm{d}\beta}{\mathrm{d} T}=-c\beta' \frac{\mathrm{d} r}{\mathrm{d} T} = c^2\beta\cdot\beta',\n\end{equation}\nwhere the prime denotes differentiation of $\beta$ with respect to its argument. Since, in the following, the product of $\beta$ and its first derivative will occur quite often, let us introduce the abbreviation\n\begin{equation}\nB(r) \equiv \beta(r)\cdot\beta'(r).\n\label{BigBDefinition}\n\end{equation}\nWith these results, we can rewrite the Taylor expansion (\ref{TaylorREvo}) as \n\begin{equation}\nr(T) = r_0 -c\beta(r_0)\cdot\Delta T + \frac12 c^2B(r_0)\cdot\Delta T^2.\n\label{RadialOrbitTime}\n\end{equation}\nIn order to find $r_C(T)$ for our central particle, we simply insert $r_0=R$ into that expression. If, on the other hand, we want to write down the time evolution for particles $U$ and $D$, let us denote it by $r_{U,D}(T)$, we need to evaluate the expression (\ref{RadialOrbitTime}) at the initial location $r_0=R\pm\Delta l$. Since $\Delta l$ is small, we can make a Taylor expansion of $\beta(r)$ and its derivative around $r=R$, and neglect everything beyond the terms linear in $\Delta l$. The result is\n\begin{multline}\nr_{U,D}(T)=R \pm\Delta l-c\left[\n\beta(R)\pm\beta'(R)\Delta l\n\right]\Delta T \\[0.2em]\n+\frac{c^2}{2}\big[\nB(R)\pm B'(R)\Delta l\n\big]\Delta T^2\n\end{multline}\nIn consequence, the distance between the upper and lower particle, $d_{\parallel}(T)\equiv r_U(T)-r_D(T),$ changes over time as\n\begin{equation}\nd_{\parallel}(T) = 2\Delta l\left[\n1-c\beta'(R)\Delta T+\frac12c^2 B'(R)\Delta T^2\n\right].\n\label{dParallel}\n\end{equation}\nNext, let us look at how the distance between the particles $L$ and $R$ changes over time.
The initial radial coordinate value for each of the particles $L$ and $R$ is\n\begin{equation}\nr(T_0) = \sqrt{R^2+\Delta l^2}=R\left[1+\frac12\left(\frac{\Delta l}{R}\right)^2\right]\approx R,\n\end{equation}\nthat is, equal to $R,$ as long as we neglect any terms that are higher than linear in $\Delta l$. In consequence, $r_{L,R}(T)$ is the same function as for our central particle, given by eq.~(\ref{RadialOrbitTime}) with $r_0=R$. The transversal (in Fig.~\ref{TestParticlesOutside}: horizontal) distance $d_{\perp}(T)$ between the particles $L$ and $R$ changes in proportion to the radius value,\n\begin{align}\nd_{\perp}(T) &= 2\Delta l\cdot\frac{r_{L}(T)}{R} \nonumber \\\n &=2\Delta l\left[1-\frac{c\beta(R)}{R}\Delta T+\frac{c^2}{2}\frac{B(R)}{R}\Delta T^2\right].\n \label{dPerp}\n\end{align}\nWith these preparations, consider the vacuum Einstein equation (\ref{EinsteinVacuum}) for the volume of a test ball. Initially, our particles $C, U, D, L, R$ define a circle, which is deformed to an ellipse. By demanding rotational symmetry around the radial direction, we can construct the associated ellipsoid, which is initially a spherical surface. That ellipsoid has one axis in the radial direction, whose length is $d_{\parallel}(T)$, and two axes that are transversal and each have the length $d_{\perp}(T)$. But that ellipsoid is not quite yet the test ball we need. After all, the particles of the test ball need to be at rest initially, at time $T_0$, in the co-moving system defined by the central particle $C$. Our defining particles are not, as the terms linear in $\Delta T$ in both (\ref{dParallel}) and (\ref{dPerp}) show, where the coefficients of $\Delta T$ correspond to the particles' initial velocities. \n\nIn order to define our test ball, we need to consider particles at the same location, undergoing the same acceleration, but which are initially at rest relative to the central particle $C$. \n\nWe could go back to the drawing board, back to Fig.~\ref{TestParticlesOutside}, make a more general Ansatz that includes initial velocities which measure the divergence of the motion of our test ball particles from that of the infalling-observer particles, and repeat our calculation while including those additional velocity terms. But there is a short-cut. The only consequence of those additional velocity terms will be to change the terms linear in $\Delta T$ in equations (\ref{dParallel}) and (\ref{dPerp}). And we already know the end result: We will choose the additional terms so as to cancel the terms linear in $\Delta T$ in the current versions of (\ref{dParallel}) and (\ref{dPerp}). But by that reasoning, we can skip the explicit steps in between, and write down the final result right away. The time evolution of the radial-direction diameter of our test ball, let us call it $L_{\parallel}(T)$, must be the same as $d_{\parallel}(T)$, but without the term linear in $\Delta T$. Likewise, the time evolution $L_{\perp}(T)$ of the two transversal diameters must be equal to $d_{\perp}(T)$, but again without the term linear in $\Delta T$.
The result is\n\\begin{align}\nL_{\\parallel}(T) &= 2\\Delta l \\left[1+\\frac12c^2B'(R)\\Delta T^2\\right] \\\\\nL_{\\perp}(T) &= 2\\Delta l \\left[1+\\frac{c^2}{2}\\frac{B(R)}{R}\\Delta T^2\\right].\n\\end{align}\nThus, our test ball volume is\n\\begin{align}\nV(T) &= \\frac{\\pi}{6}L_{\\parallel}(T) L_{\\perp}^2(T) \\\\\n &= \\left.\\frac{4\\pi}{3}\\Delta l^3\\left[1+{c^2}\\left( \\frac{B(r)}{r} + \\frac{B'(r)}{2}\\right)\\Delta T^2\\right]\\right|_{r=R}\n\\end{align}\nFor the second time derivative of $V(T)$ to vanish at the time $T=T_0$, we must have\n\\begin{equation}\n\\frac{B(r)}{r} + \\frac{B'(r)}{2}= 0\n\\label{VolumeConditionR}\n\\end{equation}\nfor all values of $r$. This is readily solved by the standard method of separation of variables: We can rewrite (\\ref{VolumeConditionR}) as\n\\begin{equation}\n\\frac{\\mathrm{d} B}{B} = -2\\frac{\\mathrm{d} r}{r},\n\\end{equation}\nwhich is readily integrated to give\n\\begin{equation}\n\\ln(B) = -\\ln(r^{2}) + const. \\;\\; \\Rightarrow \\;\\; \\ln(Br^2) = C',\n\\end{equation}\nwith a constant $C'$, which upon taking the exponential gives us\n\\begin{equation}\nBr^2= C,\n\\label{BSolution}\n\\end{equation}\nwith a constant $C$. Note that the constant $C$ can be negative --- there is no reason the constant $C'$ needs to be real; only our eventual function $B(r)$ needs to be that, and it is clear that (\\ref{BSolution}) satisfies the differential equation\n(\\ref{VolumeConditionR}) for any constant $C$, positive, zero, or negative. By (\\ref{BigBDefinition}), the solution (\\ref{BSolution}) corresponds to the differential equation\n\\begin{equation}\n\\beta(r)\\beta'(r) = \\frac{C}{r^2}\n\\end{equation}\nfor our function $\\beta$; with another separation of variables, we can re-write this as \n\\begin{equation}\n\\beta\\cdot\\mathrm{d}\\beta=C\\frac{\\mathrm{d} r}{r^2}.\n\\end{equation}\nBoth sides are readily integrated up; we can solve the result for $\\beta(r)$ and obtain\n\\begin{equation}\n\\beta(r) = \\sqrt{\n-\\frac{2C}{r} +2D\n},\n\\end{equation}\nwhere $D$ is the second integration constant, and where we have chosen the proper sign, since we know that $\\beta(r)>0$. That brings us to the last step: The requirement that, for large values of $r$, the description provided by our solution should correspond to the results from Newtonian gravity. First of all, we note that our initial condition for the infalling observers, which had those observers start out at zero speed at infinity, means that we must choose $D=0$. Then, as we would expect, $\\beta(r)$ for large values of $r$ becomes very small, corresponding to small speeds. But at slow speeds, time and length intervals as measured by the infalling observer will become arbitrarily close to time and length intervals as measured by an observer at rest in our static coordinate system at constant $r$, using the static time coordinate ${t}$. As is usual, we identify these coordinates with those of an approximately Newtonian description. In that description, the radial velocity is\n\\begin{equation}\nv(r) = \\sqrt{\\frac{2GM}{r}},\n\\end{equation}\nwhich follows directly from energy conservation for the sum of each observer's kinetic and Newtonian-gravitational potential energy. 
This fixes the remaining integration constant as\n\begin{equation}\nC = -\frac{GM}{c^2},\n\end{equation}\nand the final form of our function $\beta(r)$ becomes\n\begin{equation}\n\beta(r) = \sqrt{\frac{2GM}{rc^2}}.\n\end{equation}\nInserting this result in (\ref{preMetric}), we obtain the metric\n\begin{equation}\n\mathrm{d} s^2 = -c^2\left[\n1-\frac{2GM}{rc^2}\n\right]\mathrm{d} T^2+2\sqrt{\frac{2GM}{r}}\mathrm{d} r\:\mathrm{d} T+\mathrm{d} r^2+r^2\mathrm{d}\Omega^2.\n\label{GPMetric}\n\end{equation}\nThis is known as the Gullstrand-Painlev\'e version of the Schwarzschild metric.\cite{Martel2001,Visser2005,HamiltonLisle2008} A last transformation step brings us back to the traditional Schwarzschild form. Recall our discussion in sec.~\ref{SymmetriesCoordinates}, leading up to the explicitly static form (\ref{StaticForm}) of the metric? The main difference between our current form and the static version is the mixed term containing $\mathrm{d} r\:\mathrm{d} T$ in (\ref{GPMetric}). Everything else already has the required shape. Inserting the Ansatz\n\begin{equation}\n\mathrm{d} T = \mathrm{d} t + \xi(r) \mathrm{d} r\n\end{equation}\ninto the metric (\ref{GPMetric}), it is straightforward to see that the mixed term vanishes iff our transformation is\n\begin{equation}\n\mathrm{d} T = \mathrm{d} t +\frac{\sqrt{2GM/r}}{c^2\left(1-\frac{2GM}{rc^2}\right)}\mathrm{d} r.\n\label{TtTrafo}\n\end{equation}\nSubstitute this into (\ref{GPMetric}), and the result is the familiar form of the Schwarzschild metric in Schwarzschild's original coordinates $t,r,\vartheta,\varphi$, \n\begin{equation}\n\mathrm{d} s^2 = -c^2\left(1-\frac{2GM}{c^2 r}\n\right)\mathrm{d} t^2 + \frac{\mathrm{d} r^2}{\left(1-\frac{2GM}{c^2 r}\n\right)} + r^2\mathrm{d}\Omega^2.\n\end{equation}\n\n\section{Conclusion}\nUsing coordinates adapted to the symmetries, we were able to write down the spherically symmetric, static spacetime metric. On this basis, and using the family of infalling observers that is characteristic for the Gullstrand-Painlev\'e solution, we wrote down the metric in the form (\ref{preMetric}), with a single unknown function $\beta(r)$. From the simplified form (\ref{EinsteinVacuum}) of the vacuum Einstein equations, as applied to a test ball in free fall alongside one of our family of observers, we were able to determine $\beta(r)$, up to two integration constants. By using the Einstein equation, we escape the restrictions imposed on simplified derivations by Gruber et al.\cite{Gruber1988} \n\nFrom the initial condition for our infalling observers, as well as from the Newtonian limit at large distances from our center of symmetry, we were able to fix the values of the two integration constants. Our derivation does not require knowledge of advanced mathematical concepts beyond the ability to properly interpret a given metric line element $\mathrm{d} s^2$. Even our analysis of tidal effects proceeds via a simple second-order Taylor expansion, leading to differential equations for $\beta(r)$ that are readily solved using two applications of the method of separation of variables. \n\nWhat is new about the derivation presented here is the combination of the Baez-Bunn equations with the infalling coordinates typical for the Gullstrand-Painlev\'e form of the metric --- this combination is what, in the end, makes our derivation particularly simple.
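The algebra of these final steps is short enough to check by hand, but it can also be verified symbolically. The sketch below, which uses the Python library sympy and is an add-on check rather than part of the derivation, treats the coordinate differentials as formal symbols, inserts the substitution (\ref{barRshift}) with $\beta(r)=\sqrt{2GM/(rc^2)}$ into the local Minkowski form, and then applies (\ref{TtTrafo}); the mixed term cancels and the Schwarzschild coefficients emerge.

\begin{verbatim}
import sympy as sp

c, G, M, r = sp.symbols('c G M r', positive=True)
dT, dr, dt, dOmega2 = sp.symbols('dT dr dt dOmega2')

beta = sp.sqrt(2*G*M/(r*c**2))

# local Minkowski form with d(rbar) = dr + beta*c*dT
ds2_GP = sp.expand(-c**2*dT**2 + (dr + beta*c*dT)**2) + r**2*dOmega2
print(sp.simplify(ds2_GP.coeff(dT, 2)))               # equals -c^2*(1 - 2GM/(c^2 r))
print(sp.simplify(ds2_GP.coeff(dT, 1).coeff(dr, 1)))  # equals 2*sqrt(2GM/r)

# dT = dt + xi(r)*dr removes the mixed term
xi = sp.sqrt(2*G*M/r)/(c**2*(1 - 2*G*M/(r*c**2)))
ds2_S = sp.expand(ds2_GP.subs(dT, dt + xi*dr))
print(sp.simplify(ds2_S.coeff(dt, 1).coeff(dr, 1)))   # 0: no mixed term left
print(sp.simplify(ds2_S.coeff(dt, 2)))                # equals -c^2*(1 - 2GM/(c^2 r))
print(sp.simplify(ds2_S.coeff(dr, 2) - 1/(1 - 2*G*M/(r*c**2))))  # 0: Schwarzschild g_rr
\end{verbatim}

All three coefficients come out as in the line element above, so the few lines of algebra leading to the Schwarzschild form are indeed as simple as advertised.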
In turn, this simplicity is what should make the derivation particularly useful in the context of teaching general relativity in an undergraduate setting.\n\nThe derivation proceeds close to the physics, and gives ample opportunity to discuss interesting properties of Einstein's theory of gravity. Students who are presented with this derivation, either as a demonstration or as a (guided) exercise, will come to understand the way that symmetries determine the form of a metric, the deductions that can be made from Einstein's equivalence principle, and last but not least that we need to go beyond the equivalence principle, and consider tidal forces, to completely define our solution.\n\n\\section*{Acknowledgements}\n\nI would like to thank Thomas M\\\"uller for helpful comments on an earlier version of this text.\n\n", "answers": ["Using the vacuum Einstein equation and the Baez-Bunn form."], "length": 4982, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "032ee1448dec7751d00cd9f752fc61c5843a47e49dd7fcb6"} {"input": "What is Professor Tulis's forthcoming book?", "context": "UT College of Liberal Arts: College of Liberal Arts University of Texas at Austin Departments Graduate Resources Undergraduate Resources Courses Online Courses Dean's Office Alumni & Giving Faculty by Department Search the College of Liberal Arts\nnext profile Jeffrey Tulis Associate Professor — Ph.D.,\nE-mail: tulis@austin.utexas.edu\nOffice: MEZ 3.152\nPolitical Theory and American Politics\nProfessor Tulis's interests bridge the fields of political theory and American politics, including more specifically, American political development, constitutional theory, political philosophy and the American presidency. His publications include The Presidency in the Constitutional Order (LSU, 1981; Transaction, 2010), The Rhetorical Presidency (Princeton, 1987), The Constitutional Presidency (Johns Hopkins 2009), The Limits of Constitutional Democracy (Princeton, 2010) and recent journal articles and chapters on constitutional interpretation, the logic of political change, and the meaning of political success. Four collections of essays on The Rhetorical Presidency with responses by Tulis have been published, including a special double issue of Critical Review: An Interdisciplinary Journal of Politics and Society, (2007), where his book is described as \"one of the two or three most important and perceptive works written by a political scientist in the twentieth century.\"\nHe has served as President of the Politics and History Section of the American Political Science Association. He received the President's Associates Teaching Excellence Award at the University of Texas. He has held research fellowships from NEH, ACLS, Olin Foundation, Harvard Law School, and the Mellon Preceptorship at Princeton University, where he taught before moving to Texas. He has held visiting positions at Notre Dame and Harvard. He has served as associate chair of the Department of Government from 1989-2001 and was acting chair during 1992-93. and for part of each year between 1989 and 2001. During the academic year 2008-09, he was a Laurance S. Rockefeller Visiting Fellow at the University Center for Human Values at Princeton. 
During Spring 2016, he was a Dahrendorf Visiting Fellow at the London School of Economics and Political Science.\nHis forthcoming books include: Legacies of Losing in American Politics, with Nicole Mellow (University of Chicago Press, Fall 2017), and an expanded edition of The Rhetorical Presidency in the Princeton Classics series (Princeton, Fall 2017). For two decades he served as co-editor of the Johns Hopkins Series in Constitutional Thought, and he currently co-edits (with Sanford Levinson) Constitutional Thinking, a Series at the University Press of Kansas.\nGOV 370L • Pres In Constitutional Ord 38840 • Spring 2017 Meets MW 2:30PM-4:00PM CAL 221 show description\nGOV 370 Seminar: The Presidency in the Constitutional Order\nSpring 2017 Unique # 38840\nMW 2:30 to 4pm GDC 2.402\nJeffrey K. Tulis\nIn this Seminar we will discuss a series of constitutional problems including: the problem of executive energy in the American Constitution; presidential selection and the problem of political legitimacy; separation of powers; delegation of powers, the constitutional status of war and foreign affairs, administration and bureaucracy and the meaning of leadership in the constitutional order.\nSeminar will meet twice a week and regular attendance and thorough preparation for discussion is expected. Unexcused absence from more than three classes will result in failure of the participation component of the course. There will also be pop quizzes on the reading that will count as part of your participation grade. In addition to class participation, course requirements include four short analytic essays, and one in-class test. The course grade will be calculated as follows:\nSeminar participation: 20%\nIn-class test: 20%\nThree analytic essays 60% (20% each)\nClass participation is especially important. Preparation for seminar and for your in-class test will be enhanced by careful note taking on the readings. If students appear to be unprepared, pop quizzes will be given and the grades on them will affect the participation component of your course grade.\nTexts: (tentative)\nJoseph M. Bessette and Jeffrey K. Tulis, The Constitutional Presidency\nMichael Nelson, The Presidency in the Political System (tenth edition)\nRichard Ellis and Michael Nelson, Debating the Presidency (third edition)\nThe Federalist (any edition, or online) GOV 310L • American Government-Honors 38335 • Fall 2016 Meets TTH 3:30PM-5:00PM BEN 1.106 show description\nGOV 310 (Honors) (38335) Fall 2016\nTTH 3:30-5:00pm, BEN 1.106\nThis honors seminar offers an introduction to American politics that emphasizes the confluence of ideas, mores, institutions, and interests, in the constitutional system. This course covers more theory, and the readings are more demanding, than other versions of GOV 310. One of the main objectives of the course is to deepen your understanding of the practical aspects of contemporary public affairs by developing your ability to understand the theoretical foundations of American politics. Although we cover the nuts and bolts of politics there is much more theory in this version of GOV 310. If you have registered for this section mainly because 310 is a legislative requirement that you need to fulfill, this is not the right version for you. There is a substantial workload in this class.\nRegular attendance, thorough and timely preparation, and active participation are all necessary to do well.\nFour essays (approximately 1000 words each). Three of these will be assigned analytic essay topics. 
The last will be a book review of a title chosen by the student from a long list of provided possibilities. (15% each essay, 60% of total course grade)\nTwo in-class tests. These will count 15% each, 30% of total course grade.\nClass participation. (10% of course grade). Both informed participation and occasional leadership of the seminar will be graded.\nNo make-up exams or late papers, except for documented medical or other emergencies.\nMark Landy and Sidney M. Milkis, American Government: Enduring Principles, Critical Choices, Third Edition\nMary Nichols and David Nichols, Readings in American Government, Ninth Edition\nThomas Mann and Norman Ornstein, Its Even Worse Than It Looks: How the American Constitutional System Collided With the New Politics of Extremism\nBruce Ackerman,Before the Next Attack: Preserving Civil Liberties in an Age of Terrorism GOV 381L • Constitutional Conflict 38660 • Fall 2016 Meets W 3:30PM-6:30PM BAT 5.102 show description\nGOV 381L Fall 2016\nConstitutional Conflict\nW 3:30-6:30pm, BAT 5.102\nMany of the most important debates regarding the nature and character of contemporary American politics are essentially arguments regarding the structure of separation of powers. In this seminar we will consider such questions as whether the American system is prone to deadlock of stalemate in the construction of national policy; whether conflict is a hindrance to institutional responsibility or an essential attribute of responsibility; whether there are “political questions” especially suitable to resolution between President and Congress; how one can distinguish salutary from pathological conflict, and whether it is truly possible to harness the ambition of office holders to the duties of their office.\nMore specifically, we will review literature and arguments regarding constitutional reform; divided government; separation of powers theory; and case studies of Supreme Court appointments; the budget process; and war powers and foreign affairs. In these contexts we will also discuss current controversies surrounding war authorization, intelligence and secrecy, sequestration, government shut downs and budget resolutions, and debt ceiling politics.\nThe course is designed to accommodate two different student needs: it will provide a good overview of important literature relevant to the comprehensive examination in American politics and it will provide opportunities for research. This subject area is a treasure trove of “hot” topics, publication possibilities, subjects for MA theses and Ph.D. dissertations. I will tailor the written requirements to the objectives of individual students.\n1. All students will prepare a short analytic essay early in the semester, and an annotated bibliography at mid-semester. These assignments will count (30%) of the grade.\n2. Students interested primarily in exam preparation will complete an examination near the end of the semester based on study questions assigned in advance. OR\nStudents interested in research will write a 20-25 page paper. (60%)\n3. A basic requirement of the course is that students prepare for each seminar by carefully reading the material assigned for that week. Class discussion is an essential component of the course. 
(10%)\nTentative Texts:\nJones, Separate But Equal Branches\nSilverstein, Imbalance of Powers\nWilson & Schram, Separation of Powers and Good Government\nBurgess, Contest for Constitutional Authority\nFarrier, Passing the Buck: Congress, the Budget and Deficits\nWeissman, A Culture of Deference\nZeisberg, War Powers: The Politics of Constitutional Authority\nFisher, Congressional Abdication on War and Spending\nLowi, The End of Liberalism GOV 379S • Regime Persp Amer Poltc-Honors 38105 • Spring 2016 Meets TH 3:30PM-6:30PM GAR 1.134 (also listed as CTI 335, LAH 350) show description\nGOV 379S Regime Perspectives on American Politics\nThis is a seminar on American politics and culture. Two purposes govern the selection of texts for the course and guide our discussion of them. All of our texts attempt to look at American politics as a whole. Most books and courses on America look at only a part, such as the Presidency, or elections, or popular culture. Here we attempt to think about how the parts of America fit together. Even when these texts speak about a part, for example an institution such as the presidency or the Congress, they present the topic from a vantage point on the whole polity. To see the polity as a whole also means that we will have to revisit and rethink aspects of our political life that we take for granted – that we don’t examine because those parts have become so natural or familiar to us. Seeing the polity whole enables us to render the familiar unfamiliar, to make what we take for granted strange and new.\nTo see the polity as a whole requires that we get some distance from our subject, much as to see the planet earth as a whole requires one to look at it from outer space. Just as it is difficult to get visual perspective on a place living within it, it is difficult to understand the promise or pathologies of a regime from within. To get critical distance from our politics, we will closely study three sets of texts that look at American politics from a distance. The first part of the course will recover the perspective of the founding debate between Federalists and Anti-federalists. This fundamental debate reveals what is a stake in the basic architecture of the American regime. The second part of the course is a close study of Tocqueville’s Democracy in America. Regarded by many as the best book ever written on democracy and the best book written on America, Tocqueville sees our polity whole because he looks at it from the vantage point of Europe, in general, and France, in particular. In the third part of the seminar we think about American politics from the perspective of thoughtful commentators who feel only nominally included in the polity. Half in and half out, these extraordinary black American writers reveal fissures and fault lines in the American regime. We end the class with a discussion of America’s place in the world today – examining a speech by a writer who articulately raises challenges to our self-understanding that are inarticulately expressed today in rage and ranting from enemies of the United States.\nThree take home analytic essays, chosen from a list of topics I provide, each weighted 25% of the course grade. Late essays will not be accepted, except with a doctor’s excuse or a Dean’s excuse for family emergency.\nOR as an option: you may write the two short essays (both together weighted 25%) and do a longer 15 page paper on a topic of your choice in consultation with me (weighted 50% of your course grade). 
Government honors students who are thinking of doing an honors thesis next year may prefer this option to begin to develop research and writing skills for longer work. Students who prefer this option will need to designate their preferred third short essay and have discussed with me a topic for their long paper by March 30. Texts:\nSelected Anti-Federalist writings\nTocqueville, Democracy in America\nEssays, speeches and articles by Frederick Douglass, W.E.B. Dubois, Booker T. Washington, James Baldwin and Ralph Ellison GOV 382M • Democratic Theory 38120 • Spring 2016 Meets M 3:30PM-6:30PM BAT 1.104 show description\nGOV 382M (38120)\nDemocratic Theory Spring 2016\nThis is a graduate seminar on contemporary topics in democratic theory. Topics to be covered include: democratic epistemology; deliberative democracy; the meaning of the people; oracular democracy; agonistic democracy; and possibly new theories of republicanism, representation and partisanship.\nTexts (tentative)\nHelene Landemore, Democratic Reason\nJeffrey Edward Green, The Eyes of the People\nAmy Gutmann and Dennis Thompson, Why Deliberative Democracy?\nAlan Keenan, Democracy in Question\nJason Frank, Constituent Moments\nJason Frank, Publius and Political Imagination\nNadia Urbanati, Democracy Disfigured\nRussell Muirhead, Partisanship in a Polarized Age\nBryan Garsten, manuscript\nActive seminar participation; an annotated bibliography or review essay; a research/analytic paper. GOV 310L • American Government-Honors 37615 • Fall 2015 Meets TTH 2:00PM-3:30PM BEN 1.106 show description\nTTH 2-3:30/BEN 1.106\nBruce Ackerman,Before the Next Attack: Preserving Civil Liberties in an Age of Terrorism GOV 370L • Presidency In Constitutl Order 37845 • Fall 2015 Meets TTH 5:00PM-6:30PM PAR 310 show description\nGOV 370L (37845)\nTTH 5-6:30 PAR 310\nThe Presidency in the Constitutional Order\nA study of the place of the presidency in the American political order that stresses tension between power and accountability inherent in the office and the system. Topics include: separation of powers, presidential selection, impeachment, relations with Congress and bureaucracy, emergency powers, presidential character, and leadership.\nThis is a very demanding writing flag class. If you are enrolling in this class just in order to satisfy the writing flag, you are in the wrong class. Interest in political theory and willingness to work very hard are necessary for success in this class.\nJoseph M. Bessette, The Constitutional Presidency\nAndrew Rudalevige, The New Imperial Presidency\nBruce Ackerman, The Rise and Decline of the American Republic\nMichael Nelson, ed., The Presidency in the Political System\nMichael Nelson, ed., The Evolving Presidency\nLouis Fisher, Constitutional Conflicts Between Congress and the President\nActive and prepared class participation\nRegular quizzes on the reading\nFour analytic essays (approximately 1200 words).\nOne term paper, (approximately 5000 words). GOV 379S • Regime Persp On Amer Politics 38100 • Spring 2015 Meets T 3:30PM-6:30PM MEZ 1.104 (also listed as LAH 350) show description\nEssays, speeches and articles by Frederick Douglass, W.E.B. Dubois, Booker T. Washington, James Baldwin and Ralph Ellison GOV 382M • Tocqueville 38135 • Spring 2015 Meets M 3:30PM-6:30PM BAT 5.102 show description\nThis graduate seminar will be devoted to close readings of two principal writings of Tocqueville: Democracy in America and The Ancien Regime and the Revolution. 
We will also assess some of the best secondary studies of Tocqueville, including work by Sheldon Wolin, Harvey Mansfield, Delba Winthrop, Jon Elster, Francois Furet, and a book by Pierre Manent.\nCourse requirements will include two very short analytic essays and one seminar paper (20-25 pages). GOV 310L • American Government-Honors 38722 • Fall 2014 Meets TTH 2:00PM-3:30PM GAR 2.112 show description\nJoseph M. Bessette and John J. Pitney, American Government and Politics: Deliberation, Democracy and Citizenship\nMary Nichols and David Nichols, Readings in American Government\nBruce Ackerman,Before the Next Attack: Preserving Civil Liberties in an Age of Terrorism GOV 370L • Presidency In Constitutl Order 38977 • Fall 2014 Meets TTH 9:30AM-11:00AM CBA 4.332 show description\nA study of the place of the presidency in the American political order that stresses\ntension between power and accountability inherent in the office and the system.\nTopics include: separation of powers, presidential selection, impeachment,\nrelations with Congress and bureaucracy, emergency powers, presidential\ncharacter, and leadership.\nThis is a very demanding writing flag class. If you are enrolling in this class just in order\nto satisfy the writing flag, you are in the wrong class. Interest in political theory and willingness\nto work very hard are necessary for success in this class.\nOne term paper, (approximately 5000 words). GOV 379S • Regime Persp On Amer Politics 39395 • Spring 2014 Meets T 3:30PM-6:30PM MEZ 1.104 (also listed as CTI 335, LAH 350) show description\nEssays, speeches and articles by Frederick Douglass, W.E.B. Dubois, Booker T. Washington, James Baldwin and Ralph Ellison GOV 381L • Constitutional Conflict 39415 • Spring 2014 Meets M 3:30PM-6:30PM BAT 1.104 show description\nLowi, The End of Liberalism GOV 330K • The American President 39140 • Fall 2013 Meets MW 3:00PM-4:30PM MEZ B0.306 show description\nThis course offers an over view of the place of the presidency in the American political order. Topics covered include: constitutional design of the office; nominations and elections; legislative leadership; leadership of the bureaucracy; staffing and organizing the White House; the presidency and the judiciary; war and emergencies. We will spend extra time this fall on the presidential campaign and election of 2012.\nTwo in-class examinations (50% of the final grade)\nOne short (1000 word) take-home essay (30% of the final grade)\nClass participation and quizzes (20% of the final grade)\nRichard J. Ellis, The Development of the American Presidency (Routledge, 2012)\nRichard J. Ellis and Michael Nelson, eds, Debating the American Presidency, (2nd edition, CQ Press, 2009)\nPacket of selected primary texts (to be linked or posted on Blackboard). GOV 330K • The American President 39145 • Fall 2013 Meets MW 5:00PM-6:30PM MEZ B0.306 show description\nPacket of selected primary texts (to be linked or posted on Blackboard). GOV 381L • American Founding 39040 • Spring 2013 Meets T 6:30PM-9:30PM BAT 1.104 show description\nNOTE WELL: Course meets Tuesdays, 6:30 to 9:30pm\nBatts Hall 1.104\nThis is a seminar on American political thought and constitutional design. It is designed for students of American politics and political theory. 
The principal themes include: 1) the nature of founding and its constitutive significance; 2) the relation of structure and power in American politics; 3) the meaning and significance of the Federalist/Anti-Federalist debate; 4) the philosophic background of the American founding; and 5) the relevance of the founding to debate to prospects for, and pathologies of, American politics today.\nWe will conduct a close reading of the Madison’s Notes, of The Federalist, and selected Anti-Federalist writings. We will also study a larger and growing body of secondary literature on the constitutional convention, ratification and early American political thought.\nJames Madison, Notes of the Debates: In the Federal Convention of 1787\nThe Federalist (Rossiter, ed.)\nThe Anti-Federalist (Storing, ed.)\nDavid Brian Robertson, The Constitution and America’s Destiny (2005)\nPauline Maier, Ratification (2012)\nGordon Wood, The Idea of America (2011)\nJack Rakove, Original Meanings: Politics & Ideas in the Making of the Constitution\nHerbert Storing, What the Anti-Federalists Were For (1981)\nNumerous essays and articles (to be posted on line or gathered in packet)\nGrading: Active seminar participation, including three short papers and presentations (40%) and one article-length seminar paper (60%) T C 357 • Amer Founding/Probs Const Des 43095 • Spring 2013 Meets M 3:30PM-6:30PM CRD 007B show description\nThe American Founding and Problems of Constitutional Design\nJeffrey Tulis, Associate Professor, Department of Government\nSanford Levinson, Professor, School of Law\nThis Plan II seminar will be built around a close reading of the debates that informed the drafting and ratification of the U.S. Constitution. We aim to recover the perspective of these founding thinkers -- their way of thinking -- as much as their concrete ideas, in order to raise fundamental questions about the American political order today. Are some of the most important pathologies of American politics today rooted in design features of our original political architecture? Are the original answers to basic founding questions (such as \"how democratic is our Constitution?) still adequate for contemporary circumstances? What features of the Constitution should we preserve and what features should we amend, if possible? Would it be good for the polity as a whole to reconsider these questions in a new constitutional convention today, or would such an event be a political nightmare? Our reading will include notes from the founding conventions, writings by Federalists and Anti-Federalists, and present-day critiques of the American political order. Our aim will be to generate a dialogue between the thought of the founders and some of the best present day critics and supporters of the Constitution.\nJames Madison, Notes of the Debates in the Federal Convention\nThe Federalist, ed. Clinton Rossiter\nThe Anti-Federalist, ed. Herbert Storing\nPauline Maier, Ratification: The People Debate the Constitution, 1787-1788\nSanford Levinson, Framed: America’s 51 Constitutions and the Crisis of Governance\nBruce Ackerman, The Decline and Fall of the American Republic\nRobert Goldwin, ed. 
How Democratic is the Constitution?\na course packet of selected articles, essays, and additional primary materials.\nClass participation, including at least one presentation of a short discussion paper 25%\nOne take-home analytic essay 25%\nOne term paper 50%\nAbout the Professors:\nProfessor Tulis's interests bridge the fields of political theory and American politics, including more specifically, American political development, constitutional theory, political philosophy and the American presidency. He received the President's Associates Teaching Excellence Award at the University of Texas. He has held research fellowships from NEH, ACLS, Olin Foundation, Harvard Law School, and the Mellon Preceptorship at Princeton University, where he taught before moving to Texas. He has held visiting positions at Notre Dame and Harvard. During the academic year 2008-09, he was a Laurance S. Rockefeller Visiting Fellow at the University Center for Human Values at Princeton.\nProefessor Levinson holds the W. St. John Garwood and W. St. John Garwood, Jr. Centennial Chair in Law, he joined the University of Texas Law School in 1980. Previously a member of the Department of Politics at Princeton University, he is also a Professor in the Department of Government at the University of Texas. The author of over 350 articles and book reviews in professional and popular journals--and a regular contributor to the popular blog Balkinization. He received the Lifetime Achievement Award from the Law and Courts Section of the American Political Science Association in 2010. He has been a visiting faculty member of the Boston University, Georgetown, Harvard, New York University, and Yale law schools in the United States and has taught abroad in programs of law in London; Paris; Jerusalem; Auckland, New Zealand; and Melbourne, Australia.\nGOV 330K • The American President 38675 • Fall 2012 Meets MW 3:00PM-4:30PM MEZ B0.306 show description\nPacket of selected primary texts (to be linked or posted on Blackboard). GOV 330K • The American President 38675 • Fall 2011 Meets MW 3:30PM-5:00PM WAG 420 show description\nsee syllabus GOV 330K • The American President 38680 • Fall 2011 Meets MW 5:30PM-7:00PM UTC 1.146 show description\nsee syllabus GOV 379S • Regime Persp On Amer Polit-Hon 39110 • Spring 2011 Meets W 3:30PM-6:30PM BAT 5.102 (also listed as CTI 326, LAH 350) show description\nTo see the polity as a whole requires that we get some distance from our subject, much as to see the planet earth as a whole requires one to look at it from outer space. Just as it is difficult to get visual perspective on a place living within it, it is difficult to understand the promise or pathologies of a regime from within it. To get critical distance from our politics, we will closely study three sets of texts that look at American politics from a distance. The first part of the course will recover the perspective of the founding debate between Federalists and Anti-federalists. This fundamental debate reveals what is a stake in the basic architecture of the American regime. The second part of the course is a close study of Tocqueville’s Democracy in America. Regarded by many as the best book ever written on democracy and the best book written on America, Tocqueville sees our polity whole because he looks at it from the vantage point of Europe, in general, and France, in particular. 
In the third part of the seminar we think about American politics from the perspective of thoughtful commentators who feel only nominally included in the polity. Half in and half out, these extraordinary black American writers reveal fissures and fault lines in the American regime. We end the class with a discussion of America’s place in the world today – examining a speech by a writer who articulately raises challenges to our self-understanding that are inarticulately expressed today in rage and ranting from enemies of the United States.\nFour take home writing assignments. Analytic essays, each 1000-1500 words. (Grades weighted: 10%, 25%, 25%, and 25%) Late essays will not be accepted, except with a doctor’s excuse or a Dean’s excuse for family emergency. Regular preparation and class participation: 15%.\nOR as an option: By prior arrangement with me by the due date of the second analytic essay, students may substitute one longer research paper (15 – 20 pages) for two of the last three analytic papers This paper will be on a topic of the students choosing , if I approve, and the due date will be the same as the last assigned analytic essay. This project would count 50% of the students course grade.\nSelected writings by Frederick Douglass, W.E.B. Dubois, Ralph Ellison, James Baldwin\nSolzhenitsyn, “A World Split Apart”\nTocqueville, Democracy in America GOV 382M • Tocqueville 39150 • Spring 2011 Meets T 6:30PM-9:30PM BAT 5.102 show description\nSee syllabus GOV 370L • President, Congress, And Court 38695 • Fall 2010 Meets TTH 8:00AM-9:30AM UTC 3.112 show description\nCourse Description: A Study of the political relationship of the President, Congress and Court in the American constitutional order. Has this relationship changed over the course of American history? Is American national politics prone to stalemate or deadlock between the branches regarding major issues of public policy? Do we have a new “imperial presidency?” Should the Court arbitrate disputes between the President and Congress over custody of their respective powers? Has Congress abdicated its constitutional responsibilities? We will examine questions like these in light of practical problems such as executive privilege and secrecy, the war on terror, budget politics and controversies regarding appointments to the Supreme Court. Grading:Three in class essay tests, for which study questions will be distributed in advance. The exam questions will be chosen from the list of study questions. (25% each) One short take home essay (10% each). Class participation and attendance (15%). Tentative Texts: The FederalistFisher, Congressional Abdication on War and SpendingRudalevige, The New Imperial PresidencyBessette and Tulis, The Constitutional PresidencySkowronek, Presidency in Political TimeGoldsmith, The Terror PresidencyA course packet of articles and essays GOV 370L • President, Congress, And Court 38700 • Fall 2010 Meets TTH 5:00PM-6:30PM UTC 3.122 show description\nCourse Description: A Study of the political relationship of the President, Congress and Court in the American constitutional order. Has this relationship changed over the course of American history? Is American national politics prone to stalemate or deadlock between the branches regarding major issues of public policy? Do we have a new “imperial presidency?” Should the Court arbitrate disputes between the President and Congress over custody of their respective powers? Has Congress abdicated its constitutional responsibilities? 
We will examine questions like these in light of practical problems such as executive privilege and secrecy, the war on terror, budget politics and controversies regarding appointments to the Supreme Court. Grading:Three in class essay tests, for which study questions will be distributed in advance. The exam questions will be chosen from the list of study questions. (25% each) One short take home essay (10% each). Class participation and attendance (15%). Tentative Texts: The FederalistFisher, Congressional Abdication on War and SpendingRudalevige, The New Imperial PresidencyBessette and Tulis, The Constitutional PresidencySkowronek, Presidency in Political TimeGoldsmith, The Terror PresidencyA course packet of articles and essays GOV 312L • Iss & Policies In Amer Gov-Hon 38698 • Spring 2010 Meets MW 3:30PM-5:00PM UTC 3.104 show description\nGovernment 312L satisfies the second half of the mandated six hours of government that every UT student must take. Course covers analysis of varying topics concerned with American political institutions and policies, including the United States Constitution, and assumes basic knowledge of government from GOV 310L, which is a prerequiste. May be taken for credit only once. GOV 370L • President, Congress, And Court 38966 • Spring 2010 Meets MW 5:00PM-6:30PM MEZ B0.306 show description\nPrerequisite: Six semester hours of lower-division coursework in government.\nGOV 370L • President, Congress, And Court 39295 • Fall 2009 Meets TTH 2:00PM-3:30PM UTC 3.112 show description\nGOV 370L • President, Congress, And Court 39435 • Spring 2008 Meets MW 3:00PM-4:30PM PAR 203 show description\nGOV 312L • Iss & Policies In Am Gov-Hon-W 38615-38620 • Spring 2007 Meets MW 11:00AM-12:00PM MEZ B0.306 show description\nGovernment 312L satisfies the second half of the mandated six hours of government that every UT student must take. Course covers analysis of varying topics concerned with American political institutions and policies, including the United States Constitution, and assumes basic knowledge of government from GOV 310L, which is a prerequiste. May be taken for credit only once. GOV 312L • Iss & Policies In Am Gov-Hon-W 37600-37605 • Spring 2006 Meets MW 11:00AM-12:00PM MEZ B0.306 show description\nGovernment 312L satisfies the second half of the mandated six hours of government that every UT student must take. Course covers analysis of varying topics concerned with American political institutions and policies, including the United States Constitution, and assumes basic knowledge of government from GOV 310L, which is a prerequiste. May be taken for credit only once. GOV 312L • Iss & Policies In Am Gov-Hon-W 34900-34905 • Spring 2004 Meets MW 11:00AM-12:00PM BUR 134 show description\nGovernment 312L satisfies the second half of the mandated six hours of government that every UT student must take. Course covers analysis of varying topics concerned with American political institutions and policies, including the United States Constitution, and assumes basic knowledge of government from GOV 310L, which is a prerequiste. May be taken for credit only once. GOV 312L • Iss & Policies In Am Gov-Hon-W 34495-34500 • Spring 2003 Meets MW 11:00AM-12:00PM UTC 1.130 show description\nGovernment 312L satisfies the second half of the mandated six hours of government that every UT student must take. 
Course covers analysis of varying topics concerned with American political institutions and policies, including the United States Constitution, and assumes basic knowledge of government from GOV 310L, which is a prerequiste. May be taken for credit only once. Publications\nTulis, JK (2011), \"Plausible Futures,\" in Dunn, Charles W. (ed.) The Presidency in the Twenty-First Century, University Press of Kentucky.Tulis, J.K. and Macedo, S. (2010) The Limits of Constitutional Democracy, Princeton University Press.Tulis, J.K. and Macedo, S. (2010) \"Constitutional Boundaries,\" in The Limits of Constitutional Democracy, Princeton University Press.Tulis, JK (2010), \"The Possibility of Constitutional Statesmanship,\" in Tulis, JK and Macedo, S (eds.) The Limits of Constitutional Democracy, Princeton University Press.Tulis, J. (2009) The Constitutional Presidency. Johns Hopkins University Press.Tulis, J. (2009) Impeachment in the Constitutional Order. In J. Tulis & J.M. Bessette (Eds.), The Constitutional Presidency. Johns Hopkins University Press.Tulis, J. & Bessette, J.M. (2009) On the Constitution, Politics, and the Presidency. In J. Tulis & J.M. Bessette (Eds.), The Constitutional Presidency. Johns Hopkins University Press.Tulis, J (and Bessette, J.M) (2010) The Presidency in the Constitutional Order: Historical Perspectives, Reissued Classics Series, Transaction Publishers,Tulis, J and Bessette, J.M. (2010, \"Introduction to the Transaction Edition,\" The Presidency in the Constitutional Order: Historical Perspectives, Transaction Publishers.\nTulis, JK, (2009) \"The Two Constitutional Presidencies,\" in Nelson, Michael (ed.) The Presidency in the Political System, Congressional Quarterly Press.Tulis, J. & Mellow, N. (2007) Andrew Johnson and the Politics of Failure. In S. Skowronek & M. Glassman (Eds.), Formative Acts: Reckoning with Agency in American Politics. Philadelphia: University of Pennsylvania Press.Tulis, J. (2007, September) The Rhetorical Presidency in Retrospect. Critical Review: An Interdisciplinary Journal of Politics and Society, 19(2&3). 
Curriculum Vitae", "answers": ["Legacies of Losing in American Politics and an expanded edition of The Rhetorical Presidency in the Princeton Classics series."], "length": 5306, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "60b19bbfd875f79ca168f6db761d192ab710f9b5f395de89"} {"input": "What is the potential of SNNs in modeling the visual system?", "context": "Paper Info\n\nTitle: Deep Spiking Neural Networks with High Representation Similarity Model Visual Pathways of Macaque and Mouse\nPublish Date: 22 May 2023\nAuthor List: Zhengyu Ma (from Department of Networked Intelligence, Peng Cheng Laboratory), Yu Liutao (from Department of Networked Intelligence, Peng Cheng Laboratory), Huihui Zhou (from Department of Networked Intelligence, Peng Cheng Laboratory), Allen Brain\nAuthor Affiliation: CORNet-S ConvNeXt-Tiny ConvNeXt-Small EfficientNet, AlexNet RegNetY, ResNet34 ConvNeXt-Base CORNetSEW, ResNet8 ResNet101 SEW-ResNet18 ViT-L, GoogLeNet SEW-ResNet34 SEW-ResNet8 Wide\n\nFigure\n\nFigure 1: To conduct neural representation similarity experiments, we apply three similarity metrics to a layer-by-layer comparison between the responses of models and the neural activities of visual cortex.\nFigure 2: For three datasets and three similarity metrics, each point indicates the final representation similarity score of a model.Each pair of SEW ResNet and ResNet with the same depth are linked by a gray solid line.In almost all conditions, SEW ResNet outperforms ResNet by a large margin.\nFigure3: For three datasets and three similarity metrics, we plot the trajectories of similarity score with model layer depth.The models are divided into two groups: ResNet and SEW ResNet.The normalized layer depth ranges from 0 (the first layer) to 1 (the last layer).Because the depths of models are not the same, we first discretize the normalized depth into 50 bins, and then apply the cubic spline interpolation to the scores of each model, yielding the smooth trajectories shown in the plot.The fine, semitransparent lines are the trajectories of each model.The thick lines are the average trajectories among each group.\nFigure 5: For Macaque-Synthetic dataset, trajectories of similarity score with model layer depth are plotted.The models are divided into two groups: ViT and CNN&SNN.The normalized layer depth ranges from 0 (the first layer) to 1 (the last layer).The calculation and plotting of the trajectories are the same as Figure 3.\nFigure6: The basic block of SpikingMobileNet.\"PW CONV\" is the pointwise convolution and \"DW CONV\" is the depthwise convolution.\"SN\" is the spiking neuron.\nFigure 7: Overall model rankings of the similarity scores on Allen Brain mouse dataset.The similarity scores of CNNs, SNNs and vision transformers are shown by blue, green and orange bars, respectively.\nFigure 9: Overall model rankings of the similarity scores on Macaque-Synthetic dataset.\nFigure 10: The Spearman's rank correlation between the overall model rankings of different metrics.There is a strong correlation between SVCCA and TSVD-Reg, but RSA has weaker correlations with them.\nThe correlation between the similarity scores and the model depth.r is Spearman's rank correlation coefficient.\"-\" indicates that there is no significant correlation.\nArchitectures of SNNs.\"sn\" denotes the spiking neuron.\"g = 32\" denotes the grouped convolutions with 32 groups.The hyper-parameters of the spike-element-wise block are shown in the brackets with the number of stacked blocks 
outside.\n\nabstract\n\nDeep artificial neural networks (ANNs) play a major role in modeling the visual pathways of primate and rodent. However, they highly simplify the computational properties of neurons compared to their biological counterparts. Instead, Spiking Neural Networks (SNNs) are more biologically plausible models since spiking neurons encode information with time sequences of spikes, just like biological neurons do.\nHowever, there is a lack of studies on visual pathways with deep SNNs models. In this study, we model the visual cortex with deep SNNs for the first time, and also with a wide range of state-of-the-art deep CNNs and ViTs for comparison. Using three similarity metrics, we conduct neural representation similarity experiments on three neural datasets collected from two species under three types of stimuli.\nBased on extensive similarity analyses, we further investigate the functional hierarchy and mechanisms across species. Almost all similarity scores of SNNs are higher than their counterparts of CNNs with an average of 6.6%. Depths of the layers with the highest similarity scores exhibit little differences across mouse cortical regions, but vary significantly across macaque regions, suggesting that the visual processing structure of mice is more regionally homogeneous than that of macaques.\nBesides, the multi-branch structures observed in some top mouse brain-like neural networks provide computational evidence of parallel processing streams in mice, and the different performance in fitting macaque neural representations under different stimuli exhibits the functional specialization of information processing in macaques.\nTaken together, our study demonstrates that SNNs could serve as promising candidates to better model and explain the functional hierarchy and mechanisms of the visual system. Originally, the prototype of deep neural networks is inspired by the biological vision system . To date, deep neural networks not only occupy an unassailable position in the field of computer vision , but also become better models of the biological visual cortex compared to traditional models in the neuroscience community (Khaligh-Razavi and Kriegeskorte 2014; .\nThey have been successful at predicting the neural responses in primate visual cortex, matching the hierarchy of ventral visual stream (Güc ¸lü and van Gerven 2015; , and even controlling neural activity . Moreover, as training paradigms of mice and techniques for collecting neural activity (de Vries et al. 2020) have been greatly improved, there is a strong interest in exploring mouse visual cortex.\nDeep neural networks also play an important role in revealing the functional mechanisms and structures of mouse visual cortex . Compared to biological networks, Artificial Neural Networks discard the complexity of neurons . Spiking Neural Networks, incorporating the concept of time and spikes, are more biologically plausible models .\nTo be more specific, because of their capabilities of encoding information with spikes, capturing the dynamics of biological neurons, and extracting spatio-temporal features, deep SNNs are highly possible to yield brain-like representations ). However, deep SNNs have not been employed to model visual cortex due to the immaturity of training algorithms.\nRecently, a state-ofthe-art directly trained deep SNN , makes it possible to use deep SNNs as visual cortex models. Contributions. 
In this work, we conduct large-scale neural representation similarity experiments on SNNs and other high-performing deep neural networks to study the brain's visual processing mechanisms, with three datasets and three similarity metrics (Figure ).\nSpecifically, to the best of our knowledge, we are the first to use deep SNNs to fit complex biological neural representations and explore the biological visual cortex. We summarize our main contributions in four points as follows. • We find that SNNs outperform their counterparts of CNNs with the same depth and almost the same architectures in almost all experiments.\nIn addition, even with very different depths and architectures, SNNs can achieve top performance in most conditions. • By making a more direct comparison between macaques and mice for the first time, we reveal the differences in the visual pathways across the two species in terms of the homogeneity of visual regions and the increases of receptive field sizes across cortical visual pathways, which is consistent with previous physiological work.\n• The multi-branch structures in neural networks benefit neural representation similarity to mouse visual cortex, providing computational evidence that parallel information processing streams are widespread between cortical regions in the mouse visual system. • Comparing the results of two macaque neural datasets under different stimuli, we reveal that the macaque vision system may have functional specialization for processing human faces and other natural scenes.\nAltogether, as the first work to apply deep SNNs to fit neural representations, we shed light on visual processing mechanisms in both macaques and mice, demonstrating the potential of SNNs as a novel and powerful tool for research on the visual system. Our codes and appendix are available at https://github.com/Grasshlw/SNN-Neural-Similarity.\nThere are plenty of computational models of macaque and mouse visual systems for exploring the visual processing mechanisms recently. We summarize some of the outstanding work in the following. The network models of macaque visual system. In the early days, studies basically used simple feedforward neural networks as the models of the macaque visual system (Khaligh-Razavi and Kriegeskorte 2014; .\nRecently, some bio-inspired or more complex models achieved better performance in fitting the neural representations of macaque visual cortex . proposed a brainlike shallow CNN with recurrent connections to better match the macaque ventral visual stream. By mimicking the primary stage of the primate visual system, VOneNets ) performed more robustly in image recognition while better simulating macaque V1.\nMoreover, the representations learned by unsupervised neural networks ) also effectively matched the neural activity of macaque ventral visual stream. Although the above work developed many bio-inspired structures, the networks are still traditional ANNs in nature. Our work introduces deep SNNs for the first time to explore the visual processing mechanisms of macaque visual system.\nThe network models of mouse visual system. Largescale mouse neural dataset provided an experimental basis for model studies of mouse visual system (de Vries et al. 2020; . conducted comparisons between the representations of mouse visual cortex and the VGG16 trained on the Im-ageNet dataset. 
In , they developed a single neural network to model both the dorsal and ventral pathways with showing the functional specializations.\nWhat's more, a large survey of advanced deep networks ) revealed some hierarchy and functional properties of mice. Similar to the studies of macaque visual system, deep SNNs have never been used to model the mouse visual system. In this work, we not only use SNNs as one of the candidates to fit the representations of mouse visual cortex, but also conduct direct comparisons between macaques and mice to further investigate the functional hierarchy and mechanisms of the two species.\nOur work is conducted with three neural datasets. These datasets are recorded from two species under three types of stimuli. More specifically, there are neural responses of mouse visual cortex to natural scene stimuli, and responses of macaque visual cortex to face image and synthetic image stimuli. Allen Brain mouse dataset.\nIt is part of the Allen Brain Observatory Visual Coding dataset ) col-lected using Neuropixel probes from 6 regions simultaneously in mouse visual cortex. Compared to two-photon calcium imaging, Neuropixel probes simultaneously record the spikes across many cortical regions with high temporal resolution.\nIn these experiments, mice are presented with 118 250-ms natural scene stimuli in random orders for 50 times. Hundreds to thousands of neurons are recorded for each brain region. To get the stable neurons, we first concatenate the neural responses (average number of spikes in 10-ms bins across time) under 118 images for each neuron, and then preserve the neurons whose split-half reliability across 50 trials reaches at least 0.8.\nMacaque-Face dataset. This dataset ) is composed of neural responses of 159 neurons in the macaque anterior medial (AM) face patch under 2,100 real face stimuli, recorded with Tungsten electrodes. For this dataset, we compute the average number of spikes in a time window of 50-350ms after stimulus onset and exclude eleven neurons with noisy responses by assessing the neurons' noise ceiling.\nThe details of the preprocessing procedure are the same as . Macaque-Synthetic dataset. This dataset is also about macaque neural responses which are recorded by electrodes under 3,200 synthetic image stimuli, and used for neural prediction in the initial version of Brain-Score . The image stimuli are generated by adding a 2D projection of a 3D object model to a natural background.\nThe objects consist of eight categories, each with eight subclasses. The position, pose, and size of each object are randomly selected. 88 neurons of V4 and 168 neurons of IT are recorded. The neural responses are preprocessed to the form of average firing rate and can be downloaded from Brain-Score. Since the core visual function of macaque and mouse visual cortex is to recognize objects, the basic premise of model selection is that the model has good performance on object recognition tasks (e.g.\nclassification on ImageNet). Based on this premise, we employ 12 SNNs, 43 CNNs, and 26 vision transformers, all of which are pretrained on the Ima-geNet dataset and perform well in the classification task. As for SNNs, we use SEW ResNet as the base model, which is the deepest and SOTA directly trained SNN .\nFurthermore, by combining the residual block used in SEW ResNet and the hierarchy of the visual cortex, we build several new SNNs and train them on the ImageNet using SpikingJelly ) (see Appendix A for model structures and the details of model training). 
As for CNNs and vision transformers, we use 44 models from the Torchvision model zoo , 22 models from the Timm model zoo ) and 3 models from the brain-like CNNs, CORnet family ).\nIn the feature extraction procedures of all models, we feed the same set of images used in biological experiments to the pretrained models and obtain features from all chosen layers. Different from CNNs and vision transformers, the features of SNNs are spikes in multiple time steps. To obtain the representation similarity between biological visual cortex and computational models, we apply three similarity metrics to computing similarity scores: representational similarity analysis (RSA) , regression-based encoding method and singular vector canonical correlation analysis (SVCCA) .\nRSA has already been widely used to analyze neural representations of a model and a brain to different stimuli at the population level, while the regression-based encoding method directly fits the model features to neural activity data. SVCCA is originally proposed to compare features of deep neural networks, and then Buice 2019) used it to compare representation matrices from mouse visual cortex and DNNs, which demonstrated its effectiveness.\nWith the same model and same cortical region, we use these metrics for a layer-by-layer comparison to compute the similarity scores. The maximum similarity score across layers for a given cortical region is considered to be the level of representation similarity between the model and the cortical region.\nFinally, in a given dataset, we take the average score of all cortical regions as the final similarity score for each model, which gives the overall model rankings. The implementation of each similarity metric is as follows. RSA. For two response matrices R ∈ R n×m from each layer of models and each cortical region, where n is the number of units/neurons and m is the number of stimuli, we calculate the representational similarity between the responses to each pair of image stimuli using the Pearson correlation coefficient r, yielding two representational dissimilarity matrices (RDM ∈ R m×m , where each element is the correlation distance 1 − r).\nThen, the Spearman rank correlation coefficient between the flattened upper triangles of these two matrices is the metric score. Regression-Based Encoding Method. Firstly, we run truncated singular value decomposition (TSVD) to reduce the feature dimension of model layers to 40. Secondly, the features after dimensionality reduction are fitted to the representations of each neuron by ridge regression.\nFinally, we compute the Pearson correlation coefficient between the predicted and ground-truth representations of each neuron and take the mean of all correlation coefficients as the metric score. More specifically, we apply leave-one-out crossvalidation to obtain predicted representations of each neuron.\nFor simplicity, we name this method 'TSVD-Reg'. SVCCA. For both the responses of model layers and cortical regions, we use TSVD to reduce the dimension of unit/neuron to 40, yielding two reduced representation matrices. Then we apply canonical correlation analysis (CCA) to these two matrices to obtain a vector of correlation coefficients (the length of the vector is 40).\nThe metric score is the mean of the vector. Because of the invariance of CCA to affine transformations , in this procedure, we only need to ensure that the stimulus dimension is consistent and aligned, even if the unit/neuron dimension is different. 
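As a concrete illustration of the three metrics, a minimal sketch in Python is given below. This is not the authors' released code: it assumes a model feature matrix model_feats (units x stimuli) and a neural response matrix neural_resp (neurons x stimuli) that have already been flattened over time steps in the SNN case, and it uses SciPy and scikit-learn. The ridge regularization strength, the CCA solver settings, and the variable names are illustrative assumptions; only the 40-component reduction and the scoring rules follow the description above.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr, spearmanr
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneOut

def rsa_score(model_feats, neural_resp):
    # RDMs: correlation distance between responses to every pair of stimuli;
    # pdist returns the flattened upper triangle directly.
    rdm_model = pdist(model_feats.T, metric="correlation")
    rdm_brain = pdist(neural_resp.T, metric="correlation")
    return spearmanr(rdm_model, rdm_brain)[0]

def tsvd_reg_score(model_feats, neural_resp, n_components=40, alpha=1.0):
    # Reduce model features to 40 dimensions, ridge-regress each neuron with
    # leave-one-out cross-validation, then average the Pearson correlations.
    x = TruncatedSVD(n_components=n_components).fit_transform(model_feats.T)
    scores = []
    for y in neural_resp:
        pred = np.zeros_like(y, dtype=float)
        for train, test in LeaveOneOut().split(x):
            pred[test] = Ridge(alpha=alpha).fit(x[train], y[train]).predict(x[test])
        scores.append(pearsonr(pred, y)[0])
    return float(np.mean(scores))

def svcca_score(model_feats, neural_resp, n_components=40):
    # Reduce both sides to 40 dimensions, run CCA, and average the canonical correlations.
    x = TruncatedSVD(n_components=n_components).fit_transform(model_feats.T)
    y = TruncatedSVD(n_components=n_components).fit_transform(neural_resp.T)
    u, v = CCA(n_components=n_components, max_iter=2000).fit(x, y).transform(x, y)
    corrs = [pearsonr(u[:, i], v[:, i])[0] for i in range(n_components)]
    return float(np.mean(corrs))

In the full pipeline, such scores would be computed for every layer of a model against every cortical region, keeping the per-region maximum over layers and averaging over regions to obtain the final similarity score described above.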
Dimensionality reduction plays an important role in this method to make the number of model features comparable to the number of neurons in cortical regions, since the former usually far exceeds the latter.\nIn addition, dimensionality reduction helps to determine which features are important to the original data, while CCA suffers in important feature detection. Using just CCA performs badly, which has been proven by . To check how similar the models are to the visual cortex's mechanisms in visual processing, we rank the final similarity scores of all models and conduct comparisons among three types of models (CNNs, SNNs, and vision transformers).\nSpecially, we focus on comparing SNN (SEW ResNet) and CNN (ResNet) with the same depth and almost the same architectures (Figure ). The final similarity score of a model is the average similarity score across all cortical regions. (The overall rankings can be found in Appendix B and the comparisons among three types of models are shown in Appendix C.)\nAllen brain mouse dataset. No single model achieves the highest final similarity scores with all three metrics. For a fair comparison, we apply the paired t-test to SEW ResNet and ResNet with the same depth. For all three metrics, SEW ResNet performs better than ResNet by a large margin (t = 5.857, p = 0.004; t = 7.666, p = 0.002; t = 7.592, p = 0.002) 1 . 1 The results of the three similarity metrics are separated by semicolons, in the order of SVCCA, TSVD-Reg, and RSA.\nOther Macaque-Face dataset. For both SVCCA and TSVD-Reg, Wide-SEW-ResNet14 and Wide-SEW-ResNet8 achieve the first and second highest final similarity scores respectively. But for RSA, TNT-S and Inception-ResNet-V2 take their place and outperform other models by a large margin. As for SEW ResNet and ResNet, the former performs significantly better than the latter for both SVCCA and TSVD-Reg (t = 8.195, p = 0.001; t = 7.528, p = 0.002).\nHowever, the difference is not significant for RSA (t = 1.117, p = 0.327). Specifically, the similarity score of SEW ResNet152 is only slightly higher than that of ResNet152, and at the depth of 50 and 101, SEW ResNet's scores are lower than ResNet's. Macaque-Synthetic dataset. Similar to the results of Allen Brain dataset, no model performs best for all three metrics.\nSEW ResNet performs moderately better than ResNet (t = 3.354, p = 0.028; t = 3.824, p = 0.019; t = 2.343, p = 0.079). The only contrary is that SEW ResNet18 performs worse than ResNet18 for RSA. Further, to check the details of comparison between the SNNs and their CNN counterparts, we analyze the trajectories of similarity score across model layers (Figure ).\nAs for ResNet and SEW ResNet with the same depth, the trends of their similarities across model layers are almost the same, but the former's trajectory is generally below the latter's. In other words, the similarity scores of SEW ResNet are higher than those of ResNet at almost all layers. Taken together, the results suggest that when the overall results that appear below also correspond to the three metrics in this order, unless the correspondence is stated in the text.\narchitectures and depth are the same, SNNs with spiking neurons perform consistently better than their counterparts of CNNs with an average increase of 6.6%. Besides, SEW ResNet14 also outperforms the brain-like recurrent CNN, CORnet-S, with the same number of layers (see more details in Appendix B). 
Two properties of SNNs might contribute to the higher similarity scores.\nOn the one hand, IF neurons are the basic neurons of spiking neural networks. The IF neuron uses several differential equations to roughly approximate the membrane potential dynamics of biological neurons, which provides a more biologically plausible spike mechanism for the network. On the other hand, the spiking neural network is able to capture the temporal features by incorporating both time and binary signals, just like the biological visual system during information processing.\nTo figure out the distinctions in the functional hierarchy between macaques and mice, for each cortical region, we obtain the normalized depth of the layer that achieves the highest similarity score in each model. Then, we divide models (excluding vision transformers) into two groups based on their depths and conduct investigations on these two groups separately.\nA nonparametric ANOVA is applied to each group for testing whether layer depths change significantly across cortical regions. For mouse visual cortex (Figure (a)), taking the deep model group as an example, ANOVA shows overall significant changes in depth across cortical regions for TSVD-Reg and RSA (Friedman's χ 2 = 49.169,\np = 2.0 × 10 −9 ; χ 2 = 19.455, p = 0.002). But there is no significant change for SVCCA (χ 2 = 8.689, p = 0.122). According to these results, the differences in depth across regions are indeterminacy and irregular. Meanwhile, the trends of layer depth between some regions contradict the hierarchy observed in physiological experiments of mice (those between VISp and VISrl for TSVD-Reg and between VISal and VISpm for RSA).\nHowever, for macaque visual cortex (Figure (b)), there are significant differences (t = −5.451, p = 6.5 × 10 −6 ; t = −8.312, p = 2.8 × 10 −9 ; t = −3.782, p = 6.9 × 10 −4 , also taking the deep model group as an example) between V4 and IT, and the trend is consistent with the information processing hierarchy in primate visual cortex.\nThe comparative analyses of the best layer depths of the shallow and deep model groups also exhibit the differences between macaques and mice. For mouse visual cortex, the best layer depths of shallow models are significantly higher than those of deep models. Compared to deep models, most shallow models achieve the top similarity scores in intermediate and even later layers.\nDifferently, for macaque visual cortex, the depth of models has little effect on the depth of the most similar layer. What's more, we find that the most similar layer of mouse visual cortex always occurs after the 28 × 28 feature map is downsampled to 14 × 14, which leads to the layer depths' difference between shallow and deep models.\nNevertheless, the best layer of macaque IT appears in the last part of networks, where the feature map has been downsampled more times. In summary, our results might reveal two distinctions in the functional hierarchy between macaques and mice. First, there is a distinct functional hierarchical structure of macaque ventral visual pathway, while there might be no clear sequential functional hierarchy in mouse visual cortex.\nOne explanation is that the mouse visual cortex is organized into a parallel structure and the function of mouse cortical regions are more generalized and homogeneous than those of macaques. 
Another possibility would be that even though the sequential relations exist among mouse cortical regions as proposed in anatomical and physiological work, they are too weak for the current deep neural networks to capture.\nAdditionally, mice perform more complex visual tasks than expected with a limited brain capacity . Consequently, the neural responses of mouse visual cortex may contain more information not related to object recognition that neural networks focus on. Secondly, it is well known that the units in the neural networks get larger receptive fields after downsampling, and through the analyses of differences between two groups of models based on depth, we find the feature map of the best layer for mouse is downsampled fewer times than that for macaque.\nBased on these results, we provide computational evidence that the increased ratio of the receptive field size in cortical regions across the mouse visual pathway is smaller than those across the macaque visual pathways, which echoes some physio- Macaque-Face dataset --- Table : The correlation between the similarity scores and the number of parameters.\nr is Spearman's rank correlation coefficient. \"-\" indicates that there is no significant correlation. To explore the processing mechanisms in the visual cortex of macaques and mice, we investigate the model properties from the whole to the details. As shown in Table and 2, we first measure the correlation between the similarity scores and the sizes (i.e. the number of trainable parameters and the depth) of network models.\nFor Allen Brain mouse dataset, there are significant negative correlations between the similarity scores and the number of parameters for three metrics while there is no correlation with the depth. Conversely, for the two macaque neural datasets, the similarity scores are highly correlated with the depth of networks, but not with the number of parameters.\nSpecifically, there is a positive correlation for Macaque-Face dataset while a negative correlation for Macaque-Synthetic dataset. (We also apply the linear regression to analyze the correlation between the similarity scores and the model size. The results are consistent with Spearman's rank correlation and are shown in Appendix E).\nBased on these results, we further investigate more detailed properties of neural networks to explain the processing mechanisms in the visual cortex. For the mouse dataset, on the one hand, the best layer depths show non-significant changes across the mouse cortical regions as mentioned in the previous section.\nOn the other hand, the similarity scores of the mouse dataset are only correlated with the number of model parameters but not with the depth of models. It calls into the question whether any detailed structures in the neural networks help to reduce the number of parameters and improve its similarity to mouse visual cortex.\nTherefore, we explore the commonalities between models that have the top 20% representation similarities (see Appendix D) for Allen Brain dataset. As expected, the top models contain similar structures, such as fire module, inception module, and depthwise separable convolution. All these structures essentially process information through multiple branches/channels and then integrate the features from each branch.\nThe models with this type of structure outperform other models (t = 2.411, p = 0.024; t = 3.030, p = 0.007; t = 1.174, p = 0.247). Moreover, we apply the depthwise separable convolution to SNNs, which yields a positive effect. 
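To make the multi-branch idea concrete, a rough sketch of a spiking depthwise-separable block is given below; the Spiking-MobileNet result reported in the next sentence relies on blocks of this kind. The sketch is a guess based only on the Figure 6 caption (pointwise convolution, depthwise convolution, spiking neuron): the expansion ratio, batch normalization, residual connection, single-time-step handling, and the simplified IF neuron without a surrogate gradient are all assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class IFNeuron(nn.Module):
    # Minimal stateful hard-reset IF neuron, one time step per call (inference-style,
    # no surrogate gradient); kept deliberately simple.
    def __init__(self, v_threshold=1.0, v_reset=0.0):
        super().__init__()
        self.v_threshold, self.v_reset = v_threshold, v_reset
        self.v = None
    def forward(self, x):
        if self.v is None:
            self.v = torch.zeros_like(x)
        self.v = self.v + x                                   # charge
        spike = (self.v >= self.v_threshold).to(x.dtype)      # fire
        self.v = torch.where(spike > 0, torch.full_like(self.v, self.v_reset), self.v)  # hard reset
        return spike
    def reset(self):
        self.v = None

class SpikingDWSeparableBlock(nn.Module):
    # Pointwise conv -> spiking neuron -> depthwise conv -> spiking neuron -> pointwise conv,
    # in the spirit of a MobileNetV2 inverted residual.
    def __init__(self, in_ch, out_ch, stride=1, expand=6):
        super().__init__()
        mid = in_ch * expand
        self.pw1 = nn.Sequential(nn.Conv2d(in_ch, mid, 1, bias=False), nn.BatchNorm2d(mid), IFNeuron())
        self.dw = nn.Sequential(nn.Conv2d(mid, mid, 3, stride, 1, groups=mid, bias=False),
                                nn.BatchNorm2d(mid), IFNeuron())
        self.pw2 = nn.Sequential(nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch))
        self.use_res = stride == 1 and in_ch == out_ch
    def forward(self, x):
        out = self.pw2(self.dw(self.pw1(x)))
        return x + out if self.use_res else out

A full model would stack such blocks, run them for T time steps, and reset the neuron states between inputs.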
The representation similarity of Spiking-MobileNet is higher than SEW-ResNet50 with a similar depth (+0.8%; +3.9%; +12.1%).\nIn fact, some studies using multiple pathways simulate the functions of mouse visual cortex to some extent . Our results further suggest that not only the mouse visual cortex might be an organization of parallel structures, but also there are extensive parallel information processing streams between each pair of cortical regions .\nFor the two macaque datasets with different stimuli, not only are the model rankings significantly different, but also the correlations between the similarity scores and the model depth are totally opposite. These results corroborate the following two processing mechanisms in macaques: the ventral visual stream of primate visual cortex possesses canonical coding principles at different stages; the brain exhibits a high degree of functional specialization, such as the visual recognition of faces and other objects, which is reflected in the different neural responses of the corresponding region (although the face patch AM is a sub-network of IT, they differ in the neural representations).\nBesides, as shown in Figure , The calculation and plotting of the trajectories are the same as Figure . the similarity scores of vision transformers reach the maximum in the early layers and then decrease. Differently, the scores of CNNs and SNNs keep trending upwards, reaching the maximum in almost the last layer.\nOn the other hand, Appendix C shows that vision transformers perform well in Macaque-Face dataset but poorly in Macaque-Synthetic dataset. Considering the features extraction mechanism of vision transformers, it divides the image into several patches and encodes each patch as well as their internal relation by self-attention.\nThis mechanism is effective for face images that are full of useful information. However, the synthetic image consists of a central target object and a naturalistic background. When vision transformers are fed with this type of stimuli, premature integration of global information can lead to model representations containing noise from the unrelated background.\nWhat's more, when we take all models with the top 20% representation similarities as a whole for analyses, as described in the above paragraph, the properties that enable networks to achieve higher neural similarity are not yet clear. Taken together, the computational mechanism of the better models may reveal core processing divergence to different types of stimuli in the visual cortex.\nIn this work, we take large-scale neural representation similarity experiments as a basis, aided by analyses of the similarities across models and the visual cortical regions. Compared to other work, we introduce SNNs in the similarity analyses with biological neural responses for the first time, showing that SNNs achieve higher similarity scores than CNNs that have the same depth and almost the same architectures.\nAs analyzed in Section 3.1, two properties of SNNs might serve as the explanations for their high similarity scores. The subsequent analyses of the models' simulation performance and structures indicate significant differences in functional hierarchies between macaque and mouse visual cortex. 
As for macaques, we observed a clear sequential hierarchy.\nHowever, as for mouse visual cortex, some work ) exhibits that the trend of the model feature complexity roughly matches the processing hierarchy, but other work suggests that the cortex ) is organized into a parallel structure. Our results are more supportive of the latter. Furthermore, we provide computational evidence not only that the increased ratio of the receptive field size in cortical regions across the mouse visual pathway is smaller than those across the macaque visual pathway, but also that there may be multiple pathways with parallel processing streams between mouse cortical regions.\nOur results also clearly reveal that the processing mechanisms of macaque visual cortex differ across various stimuli. These findings provide us with new insights into the visual processing mechanisms of macaque and mouse, which are the two species that dominate the research of biological vision systems and differ considerably from each other.\nCompared to CNNs, the study of task-driven deep SNNs is just in its initial state. Although we demonstrate that SNNs outperform their counterparts of CNNs, SNNs exhibit similar properties to CNNs in the further analyses. In this work, we only build several new SNNs by taking hints from the biological visual hierarchy, while many well-established structures and learning algorithms in CNNs have not been applied to SNNs yet.\nIn addition, the neural datasets used in our experiments are all collected under static image stimuli, lacking rich dynamic information to some extent, which may not fully exploit the properties of SNNs. Given that SNNs perform well in the current experiments, we hope to explore more potential of SNNs in future work.\nIn conclusion, as more biologically plausible neural networks, SNNs may serve as a shortcut to explore the biological visual cortex. With studies on various aspects of SNNs, such as model architectures, learning algorithms, processing mechanisms, and neural coding methods, it's highly promising to better explain the sophisticated, complex, and diverse vision systems in the future.\n\nImplementation Details of SNNs Spiking Neuron Model\n\nFor all SNNs, we use the Integrate-and-Fire (IF) model as the spiking neuron model, which acts as the activation layer in neural networks. As mentioned in , V_t, X_t and S_t denote the state (membrane voltage), input (current) and output (spike) of the spiking neuron model respectively at time-step t, and the dynamics of the IF model can be described as follows:\nH_t = V_{t-1} + X_t (1), S_t = Θ(H_t - V_{thresh}) (2), V_t = H_t (1 - S_t) + V_{reset} S_t (3). While V_t is the membrane voltage after the trigger of a spike, H_t is also the membrane voltage, but after charging and before a spike firing. Θ(x) is the unit step function, so S_t equals 1 when H_t is greater than or equal to the threshold voltage V_{thresh} and 0 otherwise. Meanwhile, when a spike fires, V_t is reset to V_{reset}.\nHere, we set V_{thresh} = 1 and V_{reset} = 0. In addition, because Θ(x) is non-differentiable at 0, the surrogate gradient method is applied to approximate the derivative function during back-propagation. Here, we use the inverse tangent function as the surrogate gradient function and the derivative function is σ'(x) = 1 / (1 + (πx)^2) (5).\nIn our experiments on SNNs, we not only use SEW ResNet proposed by ), but also build several new SNNs. On the one hand, we improve the spike-element-wise block in SEW ResNet with new architectures referring to studies on ResNet , as shown in Table .
On the other hand, as the multi-branch structures in CNNs increase neural representation similarity to mouse visual cortex, we use depthwise separable convolutions and follow the overall architecture of MobileNetV2 to build the SpikingMobileNet, the basic block of which is shown in Figure .\nOur implementation is based on SpikingJelly , an open-source framework of deep SNN. We use the ImageNet dataset to pre-train the new SNNs. Following the settings for training SEW ResNet , we train the models for 320 epochs on 8 GPUs (NVIDIA V100), using SGD with a mini-batch size of 32. The momentum is 0.9 and the weight decay is 0. The initial learning rate is 0.1 and we decay it with a cosine annealing, where the maximum number of iterations is the same as the number of epochs.\nFor all SNNs, we set the simulation duration T = 4.\n\nOverall model rankings\n\nThe results of model rankings are shown in Figure , 8 and 9. We also apply the Spearman's rank correlation to the overall model rankings of different metrics, which is shown in Figure .\n\nScore Comparisons among Model Groups\n\nWe conduct comparisons of similarity scores among CNNs, SNNs, and vision transformers. The results are shown in Figure .\n\nOverall CNN rankings\n\nThe results of CNN rankings are shown in Figure , 13 and 14.\n\nCorrelations between the Model Sizes and the Similarity Scores\n\nThe results of linear regression to model sizes and the similarity scores are shown in Figure , 16 and 17.\n\nThe ImageNet Accuracy and the Similarity Scores\n\nThe results are shown in Figure .", "answers": ["SNNs have the potential to better model and explain the functional hierarchy and mechanisms of the visual system."], "length": 5588, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "6b35731428ea6d9b480338b90572d21690c2fbb89ebba249"} {"input": "What is the water depth in the Greater Ekofisk Area?", "context": "Filip Fremo Minge – Ekofisk\nAuthor: Filip Fremo Minge\nPosted on 1. October 2019 12. October 2019\n— Sunset over Ekofisk. Photo: Husmo Foto/Norwegian Petroleum Museum\nThe three are operated by ConocoPhillips on behalf of the Ekofisk licensees. The area also embraces former producers Albuskjell, Cod, Edda, Tor, West Ekofisk and Tommeliten G.\nThese fields all lie within production licence 018 apart from Tommeliten G, which was operated by Statoil from 1976 to 2003.\nIn all, 31 installations have been positioned in the Greater Ekofisk Area.\nFirst Norwegian offshore field\nEkofisk began production on 15 June 1971, following its discovery in the autumn of 1969. Development of the field has occurred in several phases.\nIts central facilities were installed during the early 1970s, with oil initially being buoy-loaded into tankers. From 1975, it has been piped to Teesside in the UK. The gas has been landed by pipeline at Emden in Germany from 1977.\nekofisk i et nøtteskall, engelsk\nJacked up six metres\nThe water depth in the Greater Ekofisk Area is 70-75 metres. However, declining pressure in the Ekofisk reservoir over the years has caused the seabed to subside.\nEfforts began as early as 1985 to safeguard the installations against the effects of this development, and the steel platforms in the Ekofisk Complex were jacked up by six metres in 1987.\nIn addition, a protective breakwater was installed around the Ekofisk tank in 1989. 
The rate of seabed subsidence has declined sharply in recent years.\nWaterflooding improves recovery\nThe Ekofisk 2/4 K water injection platform became operational in December 1987 as part of efforts to improve Ekofisk’s recovery factor – the share of petroleum in place actually produced.\nWaterflooding capacity on the field to help maintain reservoir pressure was later expanded several times, and had reached just over 500 000 barrels per day by 2019.\nMeasured in barrels of oil equivalent, the recovery factor on Ekofisk has risen from an original estimate of 17 per cent to over 50 per cent.\nEkofisk I and II plus licence extension\nThe first phase of development and production on Ekofisk began with initial oil output from the converted Gulftide jack-up rig in 1971 and ended with the start-up of Ekofisk II in 1998.\nLarge parts of the Greater Ekofisk Area were restructured in the latter year, leading to plans for removing 15 installations – 14 steel platforms and the process facilities on the Ekofisk tank.\nEmbla 2/7 D. Photo: ConocoPhillips/Norwegian Petroleum Museum\nDesignated Ekofisk I, these redundant structures include Ekofisk 2/4 A, 2/4 B, 2/4 FTP, 2/4 Q, 2/4 H, 2/4 R, 2/4 P and 2/4 T.\nIn addition come the Edda 2/7 C, Albuskjell 1/6 A, Albuskjell 2/4 F, Cod 7/11 A, West Ekofisk 2/4 D, Norpipe 36/22 A and Norpipe 37/4 A installations.\nThe concrete part of the tank – Ekofisk 2/4 T – will remain. Gulftide was removed as far back as 1974. Two platforms owned by other companies – Ekofisk 2/4 G and 2/4 S – have also gone.\nA new plan for development and operation (PDO) of the field (Ekofisk II) was approved in 1994, at the same time as the Ekofisk licence was extended to 2028.\nThis created a new Ekofisk Complex with two structures – the Ekofisk 2/4 X wellhead unit installed in the autumn of 1996 and the Ekofisk 2/4 J processing and transport platform in 1997.\nEkofisk II became operational in August 1998 and is intended to produce until 2028. Ekofisk, Eldfisk and Embla are tied back to the new complex, as was Tor until it shut down in December 2015.\nEkofisk West\nEkofisk Growth. Illustration: Ståle Ådland\nIn December 2002, soon after the Conoco-Phillips merger had been announced, the Ekofisk West project was presented to improve oil and gas recovery. Process capacity and reliability on Ekofisk were also to be enhanced.\nThis development primarily involved the construction and installation of a new platform, Ekofisk 2/4 M, with processing facilities and 24 new wells drilled over five years.\nThe latter could contribute to improved recovery both because there were more wells and because they would tap new locations in the reservoir. On stream in 2005, 2/4 M was linked to the Ekofisk Complex with a bridge.\nProcess capacity for produced water was also to be increased through upgrading on Ekofisk 2/4 J and Eldfisk 2/7 E. A third measure concerned laying a power cable from the Ekofisk Complex to 2/4 K in order to make electricity supplies more efficient.\nNew developments: Eldfisk II and Ekofisk South\nThe deck of Eldfisk 2/7 S being mated with the steel jacket.
Foto: Øyvind Sætre/ConocoPhillips\nThe plan for development and operation (PDO) of Eldfisk II, approved by the Storting (parliament) on 9 June 2011, includes a new wellhead, process and accommodation platform – Eldfisk 2/7 S.\nIn addition come 42 new wells as well as upgrades to existing platforms which extend their commercial life.\nThe PDO for Ekofisk South involves the construction of a new wellhead platform – Ekofisk 2/4 Z – as well as a new subsea water injection facility and 44 additional wells.\nConocoPhillips Norge, 2004.\nMinistry of Petroleum and Energy, press release, “Vekstprosjekt på Ekofisk godkjent”, 6 June 2003.\nhttps://www.stortinget.no/no/Saker-og-publikasjoner/Saker/Sak/?p=50343\nhttps://www.stortinget.no/globalassets/pdf/innstillinger/stortinget/2010-2011/inns-201011-398.pdf\nhttps://www.regjeringen.no/no/aktuelt/klart-for-40-nye-ar-pa-ekofisk-feltet/id642376/)\nPublished 1. October 2019 • Updated 12. October 2019\n— Gassterminalen i Emden. Foto: Husmo Foto/Norsk Oljemuseum\nOil terminal in Teesside\nOlje- og gassterminalene, engelsk,\nTeesside terminal. Brian Henderson Thynne takes samples of refrigerated propane. Photo: Husmo Foto/Norwegian Petroleum Museum\nThe terminal at Teesside in north-east England receives oil and natural gas liquids (NGL) by pipeline from the Ekofisk field. It comprises stabilisation, NGL fractionation, storage tanks for crude oil and an export port.\nAfter arriving through the Norpipe Oil line, crude and NGL are separated and the oil goes through a stabilisation process before reaching the 10 storage tanks, which each hold 750 000 barrels.\nThe NGLs go to the fractionation facility, with a daily capacity of 64 000 barrels, for separation into methane, ethane, propane, and normal and iso butane.\nWhile the methane (natural gas) is used to fuel the plant, the other products (now known as liquefied petroleum gases – LPG) are made liquid by cooling and stored for export by sea.\nOne reason for the choice of Teesside as the landfall for the Ekofisk pipeline was the opportunity it offered to install deepwater quays.\nThe terminal has four of these, with those for crude oil able to handle tankers up to 150 000 deadweight tonnes. The LPG quays can accept carriers loading as much as 60 000 cubic metres.\nTwo of the crude oil quays lie on the main channel of the River Tees, while the others have been installed in dredged docks.\nGas terminal in Emden\nGas arriving at the Emden terminal from the Ekofisk Complex enters nine parallel treatment trains for cleaning, metering and onward distribution to the buyers.\nThe North Sea gas is very clean, and needs only limited treatment to remove small amounts of sulphur compounds using an absorption process. Impure molecules from the gas accumulate on the surface of small particles, which act as filter spheres.\nEach of the nine trains comprises four process columns and a process oven. The gas enters the top of a column and leaves through the base after passing through the filter spheres.\nThat leaves the gas ready for sale, and it is piped to the fiscal metering station before entering the buyer receiving pipelines and distribution network.\nThree separate commercial pipeline systems connect to the terminal, operated by Ruhrgas, BEB and Gastransport Services (previously Gasunie) respectively. 
They pipe the gas away on behalf of the gas buyers.\nThe Norsea Gas Terminal in Emden was officially opened in September 1977 by Norwegian industry minister Bjartmar Gjerde and Phillips executive Gordon Goerin.\nRanking as the first gas sales deal for the Norwegian continental shelf, the Ekofisk agreement paved the way for later contracts covering other fields off Norway.\nRegularity at the Emden terminal has been very high, with its own equipment never causing shutdowns. Maintenance takes place when other parts of the system are off line.\nThe terminal has a daily capacity of about 2.1 million cubic feet of gas per day.\nGas transport restructured\nNorpipe AS owned the gas pipeline from Ekofisk to Emden until the transport system for the Norwegian offshore sector was restructured at 1 January 2003.\nNorsea Gas A/S furthermore served as the formal owner of the Emden facility, with Phillips Petroleum and then ConocoPhillips as operator for both pipeline and terminal.\nolje- og gassterminalene,\nTeesside gas terminal. Photo: Husmo Foto/Norwegian Petroleum Museum\nSince 2007, Norway’s state-owned Gassco company has been responsible for technical operation of the facilities on behalf of their owners.\nThat included operator responsibility for the H7 and B11 booster platforms along the gas pipeline, which were shut down in 2007 and 2013 respectively and have since been removed.\nThe Gassled partnership is a project collaboration embracing 10 companies which collective own large parts of the gas infrastructure on the Norwegian continental shelf (NCS).\nA substantial proportion of Norway’s gas deliveries to Germany continues to arrive at the Emden terminal, including the volumes piped from Ekofisk.\nPreliminary planning for a new terminal in the German port began in 2011, with Gassled taking the investment decision for this development in the autumn of 2012.\nConstruction work began in the following year, with the new facility being built on an unused part of the existing terminal site.\nThe new terminal has not expanded export capacity. But its functionality is well adapted to future processing needs for fields in the Greater Ekofisk Area and other parts of the NCS sending gas through the Norpipe system.\nIt was officially opened on 24 May 2016 by Elisabeth Aspaker, the Norwegian government minister for the EU and the European Economic Area. That closed a chapter in Ekofisk’s history.\nSource: ConocoPhillips Norge\n— Gas pipes at Ekofisk. Photo: Husmo Foto/Norwegian Petroleum Museum\nIn addition to ConocoPhillips’ own production from Ekofisk, these pipelines carry gas and oil from the company’s fields in the UK sector and from other fields on the Norwegian and British continental shelves.\nThe three fields in the Greater Ekofisk Area are also tied together by pipelines.\nOil pipeline to Teesside\nrørledningene, engelsk,\nPipes and oil tanks at the Teesside plant. Photo: ConocoPhillips/Norwegian Petroleum Museum\nThe pipeline linking Ekofisk with the terminal for oil and natural gas liquids (NGL) at Teesside on the north-east English coast became operational in October 1975.\nPumps raise the pressure of the oil and NGL before they start their journey to land. Two pumping stations – 37/4 A and 36/22 A ­– originally stood along the pipeline to maintain this pressure, but have now been disconnected and removed.\nThe pipeline was installed with the ability to carry a million barrels per day. 
However, that much capacity has never been required.\nIn the UK sector, a 24-inch pipeline has been tied in with a Y connection to receive input from several British fields – including the J block developments operated by ConocoPhillips.\nOutput from the Greater Ekofisk Area is supplemented by crude from Valhall, Hod, Ula and Gyda heading for Teesside, optimising pipeline utilisation and thereby boosting value creation.\nThe pipeline is owned by Norpipe Oil AS and operated by ConocoPhillips.\nGas pipeline to Emden\nSandbags and gravel were used to cover Norpipe to Emden. Photo: Unknown/Norwegian Petroleum Museum\nThis pipeline became operational in September 1977. The starting pressure of around 132 bar is provided by compressors on the Ekofisk Complex.\nThe 443-kilometre distance to Emden was split into three equal sections, with platforms B11 and H7 located at the intermediate points to provide boosting if required.\nHowever, additional compression was seldom needed on the final stage to Emden. H7 was shut down in 2007 and B11 in 2013, and both have since been removed.\nThese two booster platforms were located in the German sector of the North Sea, while the pipeline also crosses the Danish sector.\nThe pipeline has been trenched or covered with sand. Its final section passes the island of Juist before making landfall on the coast of East Friesland to the north of Emden.\nIts daily capacity is roughly 59.4 million standard cubic metres (2.1 billion cubic feet). In addition to gas from the Greater Ekofisk Area, it carries output from Valhall, Hod, Ula, Gyda and the Statpipe system (primarily Statfjord and Gullfaks).\nPosted on 24. June 2017 25. October 2019\nEmbla 2/7 D\nThis unmanned wellhead facility is remotely controlled from Eldfisk 2/7 S located 5.2 kilometres to the north, where oil and gas output from the platform is also processed.\nUnmanned and remotely operated wellhead platform\nOn stream 12 May 1993\n— Embla 2/7 D. Photo: ConocoPhillips\nsokkelkart, illustrasjon, blokker, lisens, forsidebilde, engelsk,\nHand-colored map of the licenses of the first licensing round on the Norwegian continental shelf. Norwegian Continental Shelf Map, 1965.\nThe Phillips group was awarded block 2/7 as early as 1965, and the Embla reservoir lies in the southern part of this acreage. Drilling began there in 1974 to depths of 4 500-5 000 metres, but pressure and temperature in the wells were too high for testing with the available equipment.\nThe first production well was not drilled and tested until 1988, followed by a second in 1990. Both yielded very promising results, and the field came on stream in May 1993.\nEmbla comprises a sandstone reservoir at least 250 million years old. The other fields in the Greater Ekofisk Area comprise fine-grained carbonate rocks deposited about 70 million years ago.\nThe Embla reservoir has a temperature of 160°C compared with the 125°C normally found in the chalk formations 1 000 metres higher up, and its pressure is almost twice as high.\nFabricated by Heerema in the Netherlands, the Embla 2/7 D jacket (support structure) was installed by the M 7000 crane vessel. It stands 84 metres high and weighs 2 300 tonnes.\nA 5.2-kilometre subsea umbilical from Eldfisk comprises three power cables for electricity supply and eight fibreoptic lines handling data transmission and telecommunication.\nEldfisk 2/7 S, embla,\nEldfisk 2/7 S. Photo: ConocoPhillips\nThe platform has six production wells and an average daily output of roughly 7 000 barrels of oil. 
All processing and metering took place on Eldfisk 2/7 FTP until 2015, and has now been switched to Eldfisk 2/7 S.\nA 14-inch flowline linked 2/7 D with 2/7 FTP and runs today to 2/7 S. Produced at Wick in Scotland, this line was floated out to the field in one piece.\nTopside equipment includes the wellhead area, helideck (built by Vindholmen Services in Arendal), crane, control room, workshop, test separator and glycol pump.\nNormally unmanned, the platform is maintained as and when required and therefore incorporates a simplified accommodation module with lounge, mess, coffee room, galley, changing room, WC and 12 emergency beds.\nMore about platforms\nEkofisk 2/4 Z\nThis installation is a wellhead platform in the Ekofisk Complex.\nGulftide\nThis four-leg jack-up drilling rig was built in Glasgow during 1967 for Ocean Drilling & Exploration Co.\nPosted on 1. September 2019 8. October 2019\n— Gulftide with Ekofisk 2/4 A in the background. Photo: Aker Mek. Verksted/Norwegian Petroleum Museum\nGulftide was converted to cope with conditions on Ekofisk in the Åmøy Fjord near Stavanger. This jack-up drilling rig was equipped with process equipment and its derrick, helideck, hangar and legs were reinforced.\nTo win time, it was decided that the discovery well and three appraisals drilled on Ekofisk by Ocean Viking would be completed for production.\nPrinciples for producing from Gulftide were relatively simple. Output flowed from the subsea wellheads to the platform, where it went through two-stage separation to remove gas and water.\nWith pressure also reduced, the gas was flared off and the oil sent on by flowlines to two loading buoys where shuttle tankers moored to take on cargo.\nutbyggingen,\nTankskipet Donovania laster olje fra lastebøyen på Ekofisk. I bakgrunnen skimtes så vidt Gulftide. Foto: ConocoPhillips/Norsk Oljemuseum\nProduction could only continue while ships were loading. As soon as one tanker had been filled, the oil stream was diverted to the vessel waiting at the other loading buoy.\nThe problem with this approach was manifested when weather conditions ­– strong winds and/or high waves – forced the tankers to leave the buoys.\nIf that happened, production from the wellheads had to be suspended immediately. Given the prevailing weather on Ekofisk, that happened regularly. Output was halted for 20 per cent of the time during the first year.\nhttps://ekofisk.industriminne.no/wp-content/uploads/sites/2/2019/09/Building-Ekofisk.mp4\nGulftide was replaced as the temporary production installation in 1974 by the permanent Ekofisk 2/4 A (Alpha) and 2/4 B (Bravo) platforms for production, drilling and quarters.\nIn addition came the Ekofisk 2/4 C (Charlie) production, drilling and compression facility, the Ekofisk 2/4 FTP (field terminal platform) for production and risers, and Ekofisk 2/4 Q for accommodation.\nOil and gas were produced by 2/4 A, B and C through their own wells for processing in their separation plants and piping on the 2/4 FTP for a three-stage separation process.\nAt the same time, the tanker loading buoys were moved further from the platforms and the Ekofisk 2/4 T oil storage tank became operational.\nThis facility was extremely advantageous, because it allowed production to continue virtually regardless of whether bad weather prevented tankers from connecting to the buoys.\nEkofisktanken ble satt i drift i 1974. 
Foto: ConocoPhillips/Norsk Oljemuseum\nThe 2/4 FTP platform, where oil and gas from the three producing facilities was processed, had been planned to handle the level of output estimated for the main field.\nClear restrictions had been imposed by the Norwegian government on the amount of gas Phillips was allowed to flare. That also set a ceiling for oil production, since gas accompanies it up from the reservoir.\nThe solution was to install two powerful compression packages on 2/4 C in order to inject the gas under pressure back into the producing formation.\nAccommodation facilities had to be provided on the two first platforms, 2/4 A and B. Where 2/4 C and FTP were concerned, however, they were tied together with bridges and to 2/4 Q.\nPublished 1. September 2019 • Updated 8. October 2019\nPosted on 9. April 2019 25. October 2019\nJack-up drilling rig\nBuilt 1967 in Glasgow for Ocean Drilling & Exploration Co.\nBegan test production on Ekofisk 15 June 1971\nProduced on Ekofisk until 1974\n— Gulftide at theEkofisk field. Photo: Terje Tveit/Norwegian Petroleum Museum\ngulftide,\nGulftide. Photo: Unknown/Norwegian Petroleum Museum\nA mere 17 months after the Ekofisk discovery was announced in December 1969, Gulftide was ready to come on stream as a temporary production platform.\nIts official inauguration took place on 9 June, with initial test output commencing on 15 June. Full production began on 8 July.\nThe rig was chosen because it was available on the market. Established equipment for processing oil and gas was tailored to the limited space on board. Separate flowlines carried wellstreams from four subsea wells. Oil, gas and water were separated on board, with the gas flared and the oil piped to two buoys for loading into shuttle tankers.\nWork on the process equipment was relatively simple. The problem was to tailor it to the rig. The subsea wellheads had to be reinforced to meet the demands posed by the North Sea, and a buoy loading system needed to be developed for waters where this technology had never been used before.\nTo gain time, it was decided that the three appraisal wells drilled by Ocean Viking to map the extent of the field – in addition to the discovery well – would be completed for production.\nFørste testflamme tent på Ekofisk. På Gulftide\n1973, Teddy Broadhurst, gulftide,\narbeidsliv, hjelpearbeider\nGulftide, separator – på bildet kan man se at det er fire brønner.\narbeidsliv, gulftide, pionerkultur, arbeid, dekk, Norges første havbunnsbrønner, historie, 1971,\nThe producers would be topped with hydraulically controlled wellheads. Such equipment had been tried out on the seabed earlier, but on a limited scale and not in the deep and rough waters found on Ekofisk. This challenge was overcome by having the wellheads manufactured and then reinforced at the Phillips base in Dusavik outside Stavanger. Flowlines and control cables would also be laid from each well to Gulftide, with production comingled in a single riser to the topsides.\nWeather conditions also represented a major problem when designing the loading buoys. Phillips itself had experience with such facilities, but the concept had only been used before in harbour-like conditions and waters no deeper than 27 metres. They were now to stand in 70 metres in the middle of the North Sea.\nGulftide was converted in the Åmøy Fjord outside Stavanger to cope with conditions on Ekofisk. 
The processing facilities were installed and reinforcements made to the derrick, helideck, hangar and leg structures.\nGulftide, Ekofisk 2/4 A, boretårn, flare, 1971, utbygging,\nGulftide with Ekofisk 2/4 A in the background. Photo: Aker Mek. Verksted/Norwegian Petroleum Museum\nPlanning began in late 1970, when Phillips received approval to begin laying the flowlines between wellheads and rig. Brown & Root won this contract, with the first oil pipelines on the Norwegian continental shelf laid by the Hugh W Gordon laybarge.\nThe production principle on Gulftide was relatively simple. Output flowed from the subsea wellheads to the rig, where it passed through two separation levels to be split into oil and gas while the huge pressure was reduced.\nGas was flared off and the oil was piped to one of the loading buoys where a shuttle tanker was moored. Production could only take place when a ship was present.\nOffisiell åpning av norsk oljeproduksjon,\nThe Greek tanker, Theogennitor, unloads crude oil from loading buoys on the Ekofisk field. Gulftide in the background. Photo: ConocoPhillips/Norwegian Petroleum Museum\nAs soon as one tanker had become fully laden, the oil flow was switched to the other buoy where another ship was waiting to take on cargo.\nThe problem with this approach arose when weather conditions meant the tankers had to cast off from the buoys because of strong winds or high waves. The rig then had to shut down production from the wellheads immediately.\nGiven the weather conditions found on Ekofisk, output regularly had to cease. Production was suspended for 20 per cent of the first year for this reason.\nOutput began cautiously on 8 July 1971 from a single well. The second producer came on stream that September, the third was ready the following month and all four were producing by February 1972. They each flowed 10 000 barrels of oil per day.\nSource: Kvendseth, Stig, Giant discovery, 1988.\nPublished 9. April 2019 • Updated 25. October 2019\nNorpipe H-7\nThis platform served as a pumping/compressor station to maintain pressure in the 443-kilometre Norpipe gas pipeline from Ekofisk to Emden in Germany, which became operational in September 1977.\nKjappe fakta::\nCompressor platform on Ekofisk-Emden gas pipeline\nInstalled 1976\nOperational 1977\nShut down 29 October 2007\nRemoved 2013\n— Norpipe GNSC-H7. Photo: Husmo Foto/Norwegian Petroleum Museum\nGas received initial compression to 132 bar at the Ekofisk Complex. The pipeline was divided into three equal lengths, with Norpipe GNSC B11 positioned at the end of the first third to maintain pressure as and when required.\nFrom there, the gas then travelled the next third of the distance to the second and virtually identical compressor platform, H7.\nThis was also responsible for maintaining pressure, but additional compression was seldom required on this final leg of the journey to Emden.\nBoth platforms stood on the German continental shelf, but 48 kilometres of the pipeline also ran across the Danish North Sea sector.\nThe pipeline is trenched or covered with sand. On its final approach to the coast of East Friesland, it passes beneath the island of Juist before making landfall north of Emden.\nCapacity in Norpipe is about 60 million standard cubic metres (scm) or 2.1 billion cubic feet per day. In addition to output from the Ekofisk-area fields, it carries gas from Valhall, Ula and the Statpipe system – primarily Statfjord and Gullfaks. 
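As a quick cross-check of the capacity figures quoted in metric and imperial units here and elsewhere in the text, the short sketch below converts standard cubic metres per day into standard cubic feet per day. It assumes the usual factor of about 35.31 cubic feet per cubic metre; the script and its variable names are illustrative and not part of the original article.

```python
# Rough cross-check of the Norpipe capacity figures quoted in the text.
# Assumes 1 standard cubic metre (scm) is about 35.31 standard cubic feet (scf).
SCF_PER_SCM = 35.31

def million_scm_to_billion_scf(million_scm_per_day: float) -> float:
    """Convert a daily gas rate from million scm to billion scf."""
    return million_scm_per_day * 1e6 * SCF_PER_SCM / 1e9

for rate in (59.4, 60.0):
    print(f"{rate} million scm/day is about "
          f"{million_scm_to_billion_scf(rate):.2f} billion scf/day")
# Both come out at roughly 2.1, matching the "2.1 billion cubic feet per day" figure.
```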
Gas was also transported for a time from Hod and Gyda, but that has ceased.\nfritid, Norpipe GNSC-H7,\nMagnus Refsland and Werner Hein have pulled the crab trap (full of starfish) on the Norpipe H-7 platform. Photo: Husmo Foto/Norwegian Petroleum Museum\nBuilt in 1976, the B11 platform had six decks. Its permanent staffing totalled 14 people, but various service personnel were also often on board. The regular crew included three in catering.\nThe 11 Phillips employees comprised the offshore installation manager, the nurse/radio operator, eight operators and a roustabout.\nIn addition to their direct function, the operators covered various other trades which meant the crew was self-sufficient in most circumstances.\nBoth platforms obtained a satellite antenna in 1986 which allowed them to received Norwegian TV, while the 24-bed accommodation were redecorated in 1981 and upgraded in the summer of 1990.\nWork on the upgrading largely comprised converting all cabins to doubles with shower and WC. The galley and changing rooms were renewed and changing facilities for women provided.\nA new module with a lounge for non-smokers, a smoking room, gym and pool room was also installed. During this work, the West Gamma accommodation rig was positioned alongside.\nUpgrading equipment on the platform was also initiated in 1990. While the pipeline’s original daily capacity had been estimated at 2 100 million standard cubic feet, this was found to have declined after a number of years to 1 975 million.\nTo return to the original capacity, the compressors needed to be upgraded and power supply from the turbines increased. This was done both on the Ekofisk tank and on the H7 and B11 platforms. Gas coolers on the tank were replaced as well.\nNorpipe GNSC-H7, yrker, radiooperatør,\nRadio operator Torleif Førland on the platform Norpipe H-7, with his amateur radio. Photo: Husmo Foto/Norwegian Petroleum Museum\nThe control systems were also upgraded in parallel. Control panels on turbines and compressors were replaced and metering instruments installed to conduct measurements in this equipment.\nWhile the nearest neighbour to B11 was a Danish oil field, H7 stood in the middle of the shipping channel. M/S Hero broke down 15 nautical miles west of the latter platform at around 13.00 on 12 November 1977.\nBy 21.00, the ship was still adrift and heading directly for H7, and all 14 crew on the platform made ready to evacuate by helicopter – the waves were too high for the lifeboats. The wreck passed at 21.40 with a clearance of 400 metres.\nGerman cargo carrier Reint collided with H7 on 30 September 1995, despite efforts by the standby ship to avert the threat. Production was halted as a safety measure, but the platform luckily suffered only minor damage. The collision was caused by inadequate watchkeeping on the ship’s bridge.\nOperator responsibility for B11 and H7 was transferred at the beginning of 2003 to Norway’s state-owned Gassco company, which runs the Norwegian gas transport network.\nThis change had little significance for operation of the platforms, since the actual work was still carried out by ConocoPhillips as a technical service provider to Gassco.\nH7 was shut down in 2007, and removal had been completed in 2013. In connection with preparations to remove the structure, operator responsibility was transferred to Statoil as the company in charge of the project on Gassco’s behalf.\nPublished 24. August 2016 • Updated 22. 
October 2019\nPhillips inundates Sola with oil revenues\nBy Kristin Øye Gjerde\nStavanger and neighbouring Sola were the first Norwegian local authorities to experience fantastic oil-related growth after the award of the first exploration licences in 1965.\n— Phillips in the process of establishing itself at the Norsco base, bottom right. Circa 1972. Photo: Norsk fly og flyfoto/Norwegian Petroleum Museum\nThe Shell refinery at Risavika in Sola was completed two years later, while the Norsco base in Tananger became operational as early as 1966.\nBut things really took off once the Ekofisk field had been discovered in the autumn of 1969 and started trial production on 14 July 1971.\nOperator Phillips Petroleum Company moved its offices from the Dusavik base outside Stavanger to Tananger in Sola, and Shell could finally start refining Norwegian rather than imported crude.\nSola's population now rose steadily from 8 400 in 1965 to 15 000 two decades later, and jobs grew even faster – from about 2 000 in 1970 to almost 8 000 in 1985. That averages 10 per cent annually.\nPhillips and Shell became cornerstone companies. A large part of their workforce, particularly in Phillips, worked offshore. In addition came newly established oil supply firms.\nMore jobs were also created in retail, public administration, education, health and social care, personal services and so forth.\nAlthough traditional agriculture remained important for the local authority, the number of farmers gradually declined as a result of mechanisation.[REMOVE]Fotnote: This article is based on the chapter “Elverket i Oljealderen” in I det regionale spenningsfelt. Sola Energi 1913-1999, Kristin Øye Gjerde.\nThe drill ship Drillship lies at the quay at the Norsco base in Tananger (1968). Photo: Norsk Fly og Flyfoto/Norwegian Petroleum Museum\nThe “agio tax”\nThe sharp rise in Sola's revenues was attributable entirely to the oil industry, and it found itself in an enviable position during this period. Tax revenues rose even faster than population and jobs.\nTo give an indication, the local authority's overall income from wealth and income taxes rose from NOK 9.3 million in 1966 to NOK 198 million in 1990. The biggest growth came in 1978-82, when it averaged 39 per cent a year.[REMOVE]Fotnote: Sola local authority, plans.\nThe secret behind this sharp increase was the tax paid by the oil companies – primarily Phillips – on agio, or the percentage fee charged when exchanging one currency for another.\nUnder Norwegian law at the time, the companies paid tax on their interest income to the local authority where they had their head office. In making this rule, however, the government had failed to take account of the considerable sums involved.\nAs operator of the Greater Ekofisk Area, Phillips had placed capital to be used for new investment in banks around the world – particularly the UK.\nThese deposits yielded substantial interest payments, and tax was payable on converting this income into Norwegian kroner.[REMOVE]Fotnote: Toralv Torstenbø, former chief executive officer in Sola local authority, interviewed by Kristin Øye Gjerde, 22 February 2001.\nSola council is said to have almost gone into shock the first time Phillips paid this agio tax. It suddenly had more money than it could spend.\nDuring the 1970s and early 1980s, Sola's municipal income always exceeded the budgeted amount.
Large sums could be transferred every year to a capital fund.\nSince the local authority was in a growth phase, additional funding was needed for the big developments it faced. While the rest of Norway experienced a slump in the late 1970s, Sola continued in top gear without a sign of unemployment.\nNet income tax revenues came to NOK 55.5 million in 1978, while net spending was NOK 31.9 million. And these fantastic results went on improving.\nBy 1982, wealth and income taxes yielded NOK 203.4 million – compared with a budget of NOK 146 million, which was upgraded to NOK 190 million during the year.\nAccording to Toralv Torstensbø, the financial controller, agio tax accounted for almost half this amount – in other words, as much as the tax paid by all other enterprises, private individuals and industry in Sola.\nIts chief executive officer became a little overweening. In his comments on the 1982 budget, he declared that it would be “natural for Sola local authority to feel a strong regional responsibility and not to be too strict about the traditional division of costs between state, county and local authority.”\nIn line with this open-handed policy, Sola paid for both road projects and an upper secondary modern school which the county council was supposed to fund.[REMOVE]Fotnote: Chief executive officer's budget proposal for Sola local authority covering 1974-85.\nTightening up petroleum tax\nThis unexpected prosperity undoubtedly created some jealousy in the neighbouring local authorities, and the media began to show an interest in the issue.\nLocal daily Stavanger Aftenblad interviewed Sola's chief executive and controller in 1981, when its photographer took a shot which illustrated the boundless wealth – Torstensbø stood showering hundred-krone notes over his colleague.\nThis story was not only read by the paper's regular subscribers. The following day, 150 copies were distributed to members of the Storting (parliament).\nThat in turn prompted Centre Party representative Lars Velsand to make a passionate speech in which he described the position as a misuse of tax revenues.\nHe called on the government to intervene so that individual local authorities were unable to benefit in this way. Nor was he alone in finding it unreasonable that a small community like Sola should get so much money.\nThe result was an amendment to the Petroleum Tax Act on 11 June 1982, which specified that the proceeds from the agio tax should be transferred in future to central government.\nThe crane vessel Uglen in action at the Norsco base in July 1980. Photo: Norsk Fly og Flyfoto/Norwegian Petroleum Museum\nUnfortunately, however, Sola had got used to consuming these revenues. It is easy to learn expensive habits, but not so straightforward to shrug them off again.\nMatters had become a little unusual when the council's executive board adopted the style of the oil company chiefs and took a helicopter outing during an ordinary budget meeting.[REMOVE]Fotnote: Oskar Goa, former chief technical officer in Sola local authority, interviewed by Kristin Øye Gjerde, 23 October 2000.\nHowever, most of the tax money benefitted the general public. Paying for Sola upper secondary school and new national and county highways is an example of this.\nThe council also invested in local authority school buildings and community facilities such as the big sports complex at Åsen, with an outdoor athletics ground and two modern indoor arenas.
Dysjaland and Tananger also acquired new sports arenas.\nA new cultural centre built in central Sola has a distinctive architecture in brick and glass, with a grassed roof to blend with the surrounding Jæren landscape. With two stages and a public library, this became the community’s main venue for events and so forth.\nThe local authority thereby built up a very good infrastructure. Power cables were laid in the same trenches as water and sewage pipes, a network of cycle lanes was built and street lighting installed.\nOn the downside, virtually all these investments boosted operating expenses. The council’s running costs rose by an annual average of 30 per cent in 1978-84, with the biggest growth in the last three years of the period.\nSo the calls by Storting representatives to transfer agio tax receipts from councils to central government represented a real threat to local politicians.\nSola joined forces with other local authorities in the same position, including Stavanger, Oslo and Bærum as well as Rogaland county council.\nA delegation met the Storting’s standing committee on finance to present their case, and secured a commitment to accept a phased reduction in revenues over four years.\nThe local authorities would receive 80 per cent of agio tax receipts during the first year, then 60 per cent, 40 per cent and finally 20 per cent.[REMOVE]Fotnote: Amendment to the Petroleum Tax Act adopted on 14 May 1982.\nIn reality, however, the run-down percentages were adjusted to extend over five years in annual steps of 80, 60, 20, 20 and 20 per cent. The total amount going to the local authorities was the same.\nThe arrangement was controversial to the last, and also uncertain because it had to be approved in each annual government budget.\nLiving within its means\nAfter the tax change, Sola’s chief executive officer saw the writing on the wall. It seemed “to be unquestionable that [Sola] has seen its best days in purely financial terms and must return to setting tougher priorities for various assignments,” he asserted in connection with the budget process for 1983.[REMOVE]Fotnote: Chief executive officer’s budget proposal for Sola local authority, 1983.\nIt took the politicians a little longer to accept this reality, but they were forced to reduce investment and operating expenditures in the years which followed.\nCutting back on the new sports arenas and cultural centre was not very desirable. Nor was it pleasant to have to slow down. But savings had to be made, and long-terms spending plans were removed from the budget for possible reintroduction later.\nA raft of measures were stripped from the budget in 1985, such as extensions to and modernisation of schools, sports arenas and swimming pools, a new somatic nursing home, housing for the intellectually disabled and sheltered housing. Grants for national and county roads were reduced.[REMOVE]Fotnote: Chief executive officer’s budget proposal for Sola local authority, 1985.\nOnce the government’s compensation scheme had ended, Torstensbø – now chief executive officer – told Stavanger Aftenblad that he did not want to paint too gloomy a picture.\n“But it’s clear that we must set much more moderate financial priorities than we’ve been used to. To sum up the position, we were previously flush with cash and poor in facilities. 
We're now flush with facilities and poor in cash.”[REMOVE]Fotnote: Stavanger Aftenblad, “Alt blir dyrere i det rike Sola”, 19 May 1987.\nSola cultural centre photographed in the winter of 2004\nRogaland county council also raised the question of whether it would be possible to establish a permanent arrangement which allowed local authorities and counties to benefit from some of the tax revenues paid by local oil companies.\nThe council pointed out that it was otherwise normal practice for Norwegian companies to pay taxes to the local communities they were based in.\nThis request was turned down by Labour finance minister Gunnar Berge because the councils concerned still benefitted from bigger tax payments by oil company employees and on property.[REMOVE]Fotnote: Stavanger Aftenblad, “Rogaland reiser skattekrav på ny”, 16 January 1988.\nAccording to Torstensbø, this was only partly true. The big oil companies were not so significant for Sola's income once the agio tax was excluded.\nAbout NOK 2 million was received annually from Phillips, primarily in property tax. The most important taxpayers in the local authority were the roughly 90 companies at Aker Base. These were service providers such as Halliburton, Schlumberger and Baker Hughes.\nAt the same time, Sola acquired a steadily growing number of affluent residents and a growing share of its revenue came from income tax. Despite the cut-backs, it remained prosperous.\nPublished 29. July 2019 • Updated 29. July 2019\nMore about economy\nParticipants in Ekofisk\nThe question of who “owns” Ekofisk is not straightforward. In simple terms, however, the field and the rest of Norway's continental shelf (NCS) belong to the Norwegian state. This was determined on 14 June 1963, when the Storting (parliament) passed the Act Relating to Exploration for and Exploitation of Submarine Natural Resources. This permits licences to be awarded on certain terms.\nRiding out the oil crisis\nThe greatest-ever oil bonanza, with oil prices hitting USD 130 per barrel, came to an abrupt end in 2014, when the cost of a barrel of crude slumped to less than USD 50 from June to December. And the bottom had still not been reached – this was only the start of a new oil crisis which lasted several years. What effect did this have on ConocoPhillips' financial position off Norway?", "answers": ["The water depth in the Greater Ekofisk Area is 70-75 meters."], "length": 6625, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "18ef34b54d2ddc134e1be7cae3d6101432465011d016c77a"} {"input": "What is the recommended daily intake of vitamin K for adult women and men?", "context": "Vitamin K - Wikipedia\nThis article is about the family of vitamers. For vitamin K1, the form usually used as a supplement, see Phytomenadione.\nVitamin K structures. MK-4 and MK-7 are both subtypes of K2.\nVitamin K is a group of structurally similar, fat-soluble vitamins the human body requires for complete synthesis of certain proteins that are prerequisites for blood coagulation and which the body also needs for controlling binding of calcium in bones and other tissues.
The vitamin K-related modification of the proteins allows them to bind calcium ions, which they cannot do otherwise. Without vitamin K, blood coagulation is seriously impaired, and uncontrolled bleeding occurs. Low levels of vitamin K also weaken bones and promote calcification of arteries and other soft tissues.\nChemically, the vitamin K family comprises 2-methyl-1,4-naphthoquinone (3-) derivatives. Vitamin K includes two natural vitamers: vitamin K1 and vitamin K2.[1] Vitamin K2, in turn, consists of a number of related chemical subtypes, with differing lengths of carbon side chains made of isoprenoid groups of atoms.\nVitamin K1, also known as phylloquinone, is made by plants, and is found in highest amounts in green leafy vegetables because it is directly involved in photosynthesis. It may be thought of as the plant form of vitamin K. It is active as a vitamin in animals and performs the classic functions of vitamin K, including its activity in the production of blood-clotting proteins. Animals may also convert it to vitamin K2.\nBacteria in the gut flora can also convert K1 into vitamin K2. In addition, bacteria typically lengthen the isoprenoid side chain of vitamin K2 to produce a range of vitamin K2 forms, most notably the MK-7 to MK-11 homologues of vitamin K2. All forms of K2 other than MK-4 can only be produced by bacteria, which use these forms in anaerobic respiration. The MK-7 and other bacterially derived forms of vitamin K2 exhibit vitamin K activity in animals, but MK-7's extra utility over MK-4, if any, is unclear and is a matter of investigation.\nThree synthetic types of vitamin K are known: vitamins K3, K4, and K5. Although the natural K1 and all K2 homologues and synthetic K4 and K5 have proven nontoxic, the synthetic form K3 (menadione) has shown toxicity.[2]\nA review of 2014 concluded that there is positive evidence that monotherapy using MK-4, one of the forms of Vitamin K2, reduces fracture incidence in post-menopausal women with osteoporosis, and suggested further research on the combined use of MK-4 with bisphosphonates.
In contrast, an earlier review article of 2013 concluded that there is no good evidence that vitamin K supplementation helps prevent osteoporosis or fractures in postmenopausal women.[3]\nA Cochrane systematic review of 2006 suggested that supplementation with Vitamin K1 and with MK4 reduces bone loss; in particular, a strong effect of MK-4 on incident fractures among Japanese patients was emphasized.[4]\nA review article of 2016 suggested to consider, as one of several measures for bone health, increasing the intake of foods rich in vitamins K1 and K2.[5]\nCardiovascular health[edit]\nAdequate intake of vitamin K is associated with the inhibition of arterial calcification and stiffening,[6] but there have been few interventional studies and no good evidence that vitamin K supplementation is of any benefit in the primary prevention of cardiovascular disease.[7]\nOne 10-year population study, the Rotterdam Study, did show a clear and significant inverse relationship between the highest intake levels of menaquinone (mainly MK-4 from eggs and meat, and MK-8 and MK-9 from cheese) and cardiovascular disease and all-cause mortality in older men and women.[8]\nVitamin K has been promoted in supplement form with claims it can slow tumor growth; there is however no good medical evidence that supports such claims.[9]\nCoumarin poisoning[edit]\nVitamin K is part of the suggested treatment regime for poisoning by rodenticide (coumarin poisoning).[10]\nAlthough allergic reaction from supplementation is possible, no known toxicity is associated with high doses of the phylloquinone (vitamin K1) or menaquinone (vitamin K2) forms of vitamin K, so no tolerable upper intake level (UL) has been set.[11]\nBlood clotting (coagulation) studies in humans using 45 mg per day of vitamin K2 (as MK-4)[12] and even up to 135 mg per day (45 mg three times daily) of K2 (as MK-4),[13] showed no increase in blood clot risk. Even doses in rats as high as 250 mg/kg, body weight did not alter the tendency for blood-clot formation to occur.[14]\nUnlike the safe natural forms of vitamin K1 and vitamin K2 and their various isomers, a synthetic form of vitamin K, vitamin K3 (menadione), is demonstrably toxic at high levels. The U.S. FDA has banned this form from over-the-counter sale in the United States because large doses have been shown to cause allergic reactions, hemolytic anemia, and cytotoxicity in liver cells.[2]\nPhylloquinone (K1)[15][16] or menaquinone (K2) are capable of reversing the anticoagulant activity of the anticoagulant warfarin (tradename Coumadin). 
Warfarin works by blocking recycling of vitamin K, so that the body and tissues have lower levels of active vitamin K, and thus a deficiency of vitamin K.\nSupplemental vitamin K (for which oral dosing is often more active than injectable dosing in human adults) reverses the vitamin K deficiency caused by warfarin, and therefore reduces the intended anticoagulant action of warfarin and related drugs.[17] Sometimes small amounts of vitamin K are given orally to patients taking warfarin so that the action of the drug is more predictable.[17] The proper anticoagulant action of the drug is a function of vitamin K intake and drug dose, and due to differing absorption must be individualized for each patient.[citation needed] The action of warfarin and vitamin K both require two to five days after dosing to have maximum effect, and neither warfarin or vitamin K shows much effect in the first 24 hours after they are given.[18]\nThe newer anticoagulants dabigatran and rivaroxaban have different mechanisms of action that do not interact with vitamin K, and may be taken with supplemental vitamin K.[19][20]\nVitamin K2 (menaquinone). In menaquinone, the side chain is composed of a varying number of isoprenoid residues. The most common number of these residues is four, since animal enzymes normally produce menaquinone-4 from plant phylloquinone.\nA sample of phytomenadione for injection, also called phylloquinone\nThe three synthetic forms of vitamin K are vitamins K3 (menadione), K4, and K5, which are used in many areas, including the pet food industry (vitamin K3) and to inhibit fungal growth (vitamin K5).[21]\nConversion of vitamin K1 to vitamin K2[edit]\nVitamin K1 (phylloquinone) – both forms of the vitamin contain a functional naphthoquinone ring and an aliphatic side chain. Phylloquinone has a phytyl side chain.\nThe MK-4 form of vitamin K2 is produced by conversion of vitamin K1 in the testes, pancreas, and arterial walls.[22] While major questions still surround the biochemical pathway for this transformation, the conversion is not dependent on gut bacteria, as it occurs in germ-free rats[23][24] and in parenterally-administered K1 in rats.[25][26] In fact, tissues that accumulate high amounts of MK-4 have a remarkable capacity to convert up to 90% of the available K1 into MK-4.[27][28] There is evidence that the conversion proceeds by removal of the phytyl tail of K1 to produce menadione as an intermediate, which is then condensed with an activated geranylgeranyl moiety (see also prenylation) to produce vitamin K2 in the MK-4 (menatetrione) form.[29]\nVitamin K2[edit]\nMain article: Vitamin K2\nVitamin K2 (menaquinone) includes several subtypes. The two subtypes most studied are menaquinone-4 (menatetrenone, MK-4) and menaquinone-7 (MK-7).\nVitamin K1, the precursor of most vitamin K in nature, is a stereoisomer of phylloquinone, an important chemical in green plants, where it functions as an electron acceptor in photosystem I during photosynthesis. For this reason, vitamin K1 is found in large quantities in the photosynthetic tissues of plants (green leaves, and dark green leafy vegetables such as romaine lettuce, kale and spinach), but it occurs in far smaller quantities in other plant tissues (roots, fruits, etc.). Iceberg lettuce contains relatively little. 
The function of phylloquinone in plants appears to have no resemblance to its later metabolic and biochemical function (as \"vitamin K\") in animals, where it performs a completely different biochemical reaction.\nVitamin K (in animals) is involved in the carboxylation of certain glutamate residues in proteins to form gamma-carboxyglutamate (Gla) residues. The modified residues are often (but not always) situated within specific protein domains called Gla domains. Gla residues are usually involved in binding calcium, and are essential for the biological activity of all known Gla proteins.[30]\nAt this time[update], 17 human proteins with Gla domains have been discovered, and they play key roles in the regulation of three physiological processes:\nBlood coagulation: prothrombin (factor II), factors VII, IX, and X, and proteins C, S, and Z[31]\nBone metabolism: osteocalcin, also called bone Gla protein (BGP), matrix Gla protein (MGP),[32] periostin,[33] and the recently discovered Gla-rich protein (GRP).[34][35]\nVascular biology: growth arrest-specific protein 6 (Gas6)[36]\nUnknown function: proline-rich γ-carboxyglutamyl proteins (PRGPs) 1 and 2, and transmembrane γ-carboxy glutamyl proteins (TMGs) 3 and 4.[37]\nLike other lipid-soluble vitamins (A, D and E), vitamin K is stored in the fatty tissue of the human body.\nAbsorption and dietary need[edit]\nPrevious theory held that dietary deficiency is extremely rare unless the small intestine was heavily damaged, resulting in malabsorption of the molecule. Another at-risk group for deficiency were those subject to decreased production of K2 by normal intestinal microbiota, as seen in broad spectrum antibiotic use.[38] Taking broad-spectrum antibiotics can reduce vitamin K production in the gut by nearly 74% in people compared with those not taking these antibiotics.[39] Diets low in vitamin K also decrease the body's vitamin K concentration.[40] Those with chronic kidney disease are at risk for vitamin K deficiency, as well as vitamin D deficiency, and particularly those with the apoE4 genotype.[41] Additionally, in the elderly there is a reduction in vitamin K2 production.[42]\nThe National Academy of Medicine (NAM) updated an estimate of what constitutes an adequate intake (AI) for vitamin K in 2001. The NAM does not distinguish between K1 and K2 – both are counted as vitamin K. At that time there was not sufficient evidence to set the more rigorous estimated average requirement (EAR) or recommended dietary allowance (RDA) given for most of the essential vitamins and minerals. The current daily AIs for vitamin K for adult women and men are 90 μg and 120 μg respectively. The AI for pregnancy and lactation is 90 μg. For infants up to 12 months the AI is 2–2.5 μg, and for children aged 1 to 18 years the AI increases with age from 30 to 75 μg. As for safety, the FNB also sets tolerable upper intake levels (known as ULs) for vitamins and minerals when evidence is sufficient. In the case of vitamin K no UL is set, as evidence for adverse effects is not sufficient. Collectively EARs, RDAs, AIs and ULs are referred to as dietary reference intakes.[43] The European Food Safety Authority reviewed the same safety question and did not set an UL.[44]\nFor U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percentage of daily value (%DV). For vitamin K labeling purposes the daily value was 80 μg, but as of May 2016 it has been revised upwards to 120 μg. 
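To make the labelling arithmetic concrete, here is a minimal sketch of how a percent daily value is computed. The 80 μg and 120 μg daily values are the figures given above, while the 60 μg serving content is a made-up example for illustration only.

```python
# Hypothetical %DV calculation for vitamin K on a U.S. nutrition label.
OLD_DV_UG = 80.0   # daily value before May 2016 (from the text)
NEW_DV_UG = 120.0  # daily value from May 2016 (from the text)

def percent_dv(amount_ug: float, daily_value_ug: float) -> float:
    """Percent of the daily value supplied by one serving."""
    return 100.0 * amount_ug / daily_value_ug

serving_ug = 60.0  # invented vitamin K content of one serving, for illustration
print(f"Old label: {percent_dv(serving_ug, OLD_DV_UG):.0f}% DV")  # 75% DV
print(f"New label: {percent_dv(serving_ug, NEW_DV_UG):.0f}% DV")  # 50% DV
```

The same serving therefore shows a lower %DV under the revised labelling rules, even though its actual vitamin K content is unchanged.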
A table of the pre-change adult daily values is provided at reference daily intake. Food and supplement companies have until 28 July 2018 to comply with the change.\nSee also: Vitamin K2 § Dietary sources\n[Table: vitamin K1 content, K1 (μg), of selected foods: kale (cooked), collards (cooked and raw), Swiss chard (cooked and raw), turnip greens (raw), romaine lettuce (raw); the numerical values are not preserved in this extract.[45]]\nTable from \"Important information to know when you are taking: Warfarin (Coumadin) and Vitamin K\", Clinical Center, National Institutes of Health Drug Nutrient Interaction Task Force.[46]\nVitamin K1 is found chiefly in leafy green vegetables such as dandelion greens (which contain 778.4 μg per 100 g, or 741% of the recommended daily amount), spinach, swiss chard, lettuce and Brassica vegetables (such as cabbage, kale, cauliflower, broccoli, and brussels sprouts) and often the absorption is greater when accompanied by fats such as butter or oils; some fruits, such as avocados, kiwifruit and grapes, are also high in vitamin K. By way of reference, two tablespoons of parsley contains 153% of the recommended daily amount of vitamin K.[47] Some vegetable oils, notably soybean oil, contain vitamin K, but at levels that would require relatively large calorie consumption to meet the USDA-recommended levels.[48] Colonic bacteria synthesize a significant portion of humans' vitamin K needs; newborns often receive a vitamin K shot at birth to tide them over until their colons become colonized at five to seven days of age from the consumption of breast milk.\nThe tight binding of vitamin K1 to thylakoid membranes in chloroplasts makes it less bioavailable. For example, cooked spinach has a 5% bioavailability of phylloquinone; however, fat added to it increases bioavailability to 13% due to the increased solubility of vitamin K in fat.[49]\nMain article: Vitamin K deficiency\nAverage diets are usually not lacking in vitamin K, and primary deficiency is rare in healthy adults. Newborn infants are at an increased risk of deficiency. Other populations with an increased prevalence of vitamin K deficiency include those who suffer from liver damage or disease (e.g. alcoholics), cystic fibrosis, or inflammatory bowel diseases, or have recently had abdominal surgeries. Secondary vitamin K deficiency can occur in people with bulimia, those on stringent diets, and those taking anticoagulants. Other drugs associated with vitamin K deficiency include salicylates, barbiturates, and cefamandole, although the mechanisms are still unknown. Vitamin K1 deficiency can result in coagulopathy, a bleeding disorder.[50] Symptoms of K1 deficiency include anemia, bruising, nosebleeds and bleeding of the gums in both sexes, and heavy menstrual bleeding in women.\nOsteoporosis[51][52] and coronary heart disease[53][54] are strongly associated with lower levels of K2 (menaquinone). Vitamin K2 (as menaquinones MK-4 through MK-10) intake level is inversely related to severe aortic calcification and all-cause mortality.[8]\nFunction in animals[edit]\nMechanism of action of vitamin K1.\nThe function of vitamin K2 in the animal cell is to add a carboxylic acid functional group to a glutamate (Glu) amino acid residue in a protein, to form a gamma-carboxyglutamate (Gla) residue. This is a somewhat uncommon posttranslational modification of the protein, which is then known as a \"Gla protein\". The presence of two −COOH (carboxylic acid) groups on the same carbon in the gamma-carboxyglutamate residue allows it to chelate calcium ions.
The binding of calcium ions in this way very often triggers the function or binding of Gla-protein enzymes, such as the so-called vitamin K-dependent clotting factors discussed below.\nWithin the cell, vitamin K undergoes electron reduction to a reduced form called vitamin K hydroquinone, catalyzed by the enzyme vitamin K epoxide reductase (VKOR).[55] Another enzyme then oxidizes vitamin K hydroquinone to allow carboxylation of Glu to Gla; this enzyme is called gamma-glutamyl carboxylase[56][57] or the vitamin K-dependent carboxylase. The carboxylation reaction only proceeds if the carboxylase enzyme is able to oxidize vitamin K hydroquinone to vitamin K epoxide at the same time. The carboxylation and epoxidation reactions are said to be coupled. Vitamin K epoxide is then reconverted to vitamin K by VKOR. The reduction and subsequent reoxidation of vitamin K coupled with carboxylation of Glu is called the vitamin K cycle.[58] Humans are rarely deficient in vitamin K1 because, in part, vitamin K1 is continuously recycled in cells.[59]\nWarfarin and other 4-hydroxycoumarins block the action of VKOR.[60] This results in decreased concentrations of vitamin K and vitamin K hydroquinone in tissues, such that the carboxylation reaction catalyzed by the glutamyl carboxylase is inefficient. This results in the production of clotting factors with inadequate Gla. Without Gla on the amino termini of these factors, they no longer bind stably to the blood vessel endothelium and cannot activate clotting to allow formation of a clot during tissue injury. As it is impossible to predict what dose of warfarin will give the desired degree of clotting suppression, warfarin treatment must be carefully monitored to avoid overdose.\nGamma-carboxyglutamate proteins[edit]\nMain article: Gla domain\nThe following human Gla-containing proteins (\"Gla proteins\") have been characterized to the level of primary structure: blood coagulation factors II (prothrombin), VII, IX, and X, anticoagulant proteins C and S, and the factor X-targeting protein Z. The bone Gla protein osteocalcin, the calcification-inhibiting matrix Gla protein (MGP), the cell growth regulating growth arrest specific gene 6 protein (Gas6), and the four transmembrane Gla proteins (TMGPs), the function of which is at present unknown. Gas6 can function as a growth factor to activate the Axl receptor tyrosine kinase and stimulate cell proliferation or prevent apoptosis in some cells. In all cases in which their function was known, the presence of the Gla residues in these proteins turned out to be essential for functional activity.\nGla proteins are known to occur in a wide variety of vertebrates: mammals, birds, reptiles, and fish. The venom of a number of Australian snakes acts by activating the human blood-clotting system. In some cases, activation is accomplished by snake Gla-containing enzymes that bind to the endothelium of human blood vessels and catalyze the conversion of procoagulant clotting factors into activated ones, leading to unwanted and potentially deadly clotting.\nAnother interesting class of invertebrate Gla-containing proteins is synthesized by the fish-hunting snail Conus geographus.[61] These snails produce a venom containing hundreds of neuroactive peptides, or conotoxins, which is sufficiently toxic to kill an adult human. Several of the conotoxins contain two to five Gla residues.[62]\nMethods of assessment[edit]\nVitamin K status can be assessed by:\nThe prothrombin time (PT) test measures the time required for blood to clot. 
A blood sample is mixed with citric acid and put in a fibrometer; delayed clot formation indicates a deficiency. This test is insensitive to mild deficiency, as the values do not change until the concentration of prothrombin in the blood has declined by at least 50%.[63]\nUndercarboxylated prothrombin (PIVKA-II); in a study of 53 newborns, found \"PT (prothrombin time) is a less sensitive marker than PIVKA II\",[64] and as indicated above, PT is unable to detect subclinical deficiencies that can be detected with PIVKA-II testing.\nPlasma phylloquinone was found to be positively correlated with phylloquinone intake in elderly British women, but not men,[65] but an article by Schurgers et al. reported no correlation between FFQ[further explanation needed] and plasma phylloquinone.[66]\nUrinary γ-carboxyglutamic acid responds to changes in dietary vitamin K intake. Several days are required before any change can be observed. In a study by Booth et al., increases of phylloquinone intakes from 100 μg to between 377 and 417 μg for five days did not induce a significant change. Response may be age-specific.[67]\nUndercarboxylated osteocalcin (UcOc) levels have been inversely correlated with stores of vitamin K[68] and bone strength in developing rat tibiae. Another study following 78 post-menopausal Korean women found a supplement regimen of vitamins K and D, and calcium, but not a regimen of vitamin D and calcium, was inversely correlated with reduced UcOc levels.[69]\nFunction in bacteria[edit]\nMany bacteria, such as Escherichia coli found in the large intestine, can synthesize vitamin K2 (menaquinone-7 or MK-7, up to MK-11),[70] but not vitamin K1 (phylloquinone). In these bacteria, menaquinone transfers two electrons between two different small molecules, during oxygen-independent metabolic energy production processes (anaerobic respiration).[71] For example, a small molecule with an excess of electrons (also called an electron donor) such as lactate, formate, or NADH, with the help of an enzyme, passes two electrons to menaquinone. The menaquinone, with the help of another enzyme, then transfers these two electrons to a suitable oxidant, such fumarate or nitrate (also called an electron acceptor). Adding two electrons to fumarate or nitrate converts the molecule to succinate or nitrite plus water, respectively.\nSome of these reactions generate a cellular energy source, ATP, in a manner similar to eukaryotic cell aerobic respiration, except the final electron acceptor is not molecular oxygen, but fumarate or nitrate. In aerobic respiration, the final oxidant is molecular oxygen (O2), which accepts four electrons from an electron donor such as NADH to be converted to water. E. coli, as facultative anaerobes, can carry out both aerobic respiration and menaquinone-mediated anaerobic respiration.\nInjection in newborns[edit]\nThe blood clotting factors of newborn babies are roughly 30–60% that of adult values; this may be due to the reduced synthesis of precursor proteins and the sterility of their guts. Human milk contains 1–4 μg/L of vitamin K1, while formula-derived milk can contain up to 100 μg/L in supplemented formulas. Vitamin K2 concentrations in human milk appear to be much lower than those of vitamin K1. 
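Written out as a simplified two-step scheme (a textbook-style summary with NADH as the electron donor and fumarate as the terminal acceptor, not a scheme taken from this article), the menaquinone shuttle described above is:

```latex
% Menaquinone (MK) is reduced to menaquinol (MKH2) by the electron donor,
% then reoxidised while passing the two electrons to the terminal acceptor.
\begin{align*}
\text{NADH} + \text{H}^{+} + \text{MK} &\longrightarrow \text{NAD}^{+} + \text{MKH}_{2} \\
\text{MKH}_{2} + \text{fumarate} &\longrightarrow \text{MK} + \text{succinate}
\end{align*}
```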
Occurrence of vitamin K deficiency bleeding in the first week of the infant's life is estimated at 0.25–1.7%, with a prevalence of 2–10 cases per 100,000 births.[72] Premature babies have even lower levels of the vitamin, so they are at a higher risk from this deficiency.\nBleeding in infants due to vitamin K deficiency can be severe, leading to hospitalization, blood transfusions, brain damage, and death. Supplementation can prevent most cases of vitamin K deficiency bleeding in the newborn. Intramuscular administration is more effective in preventing late vitamin K deficiency bleeding than oral administration.[73][74]\nAs a result of the occurrences of vitamin K deficiency bleeding, the Committee on Nutrition of the American Academy of Pediatrics has recommended 0.5–1 mg of vitamin K1 be administered to all newborns shortly after birth.[74]\nIn the UK vitamin K supplementation is recommended for all newborns within the first 24 hours.[75] This is usually given as a single intramuscular injection of 1 mg shortly after birth but as a second-line option can be given by three oral doses over the first month.[76]\nControversy arose in the early 1990s regarding this practice, when two studies suggested a relationship between parenteral administration of vitamin K and childhood cancer,[77] however, poor methods and small sample sizes led to the discrediting of these studies, and a review of the evidence published in 2000 by Ross and Davies found no link between the two.[78] Doctors reported emerging concerns in 2013,[79] after treating children for serious bleeding problems. They cited lack-of newborn vitamin K administration, as the reason that the problems occurred, and recommended that breastfed babies could have an increased risk unless they receive a preventative dose.\nIn the early 1930s, Danish scientist Henrik Dam investigated the role of cholesterol by feeding chickens a cholesterol-depleted diet.[80] He initially replicated experiments reported by scientists at the Ontario Agricultural College (OAC).[81] McFarlane, Graham and Richardson, working on the chick feed program at OAC, had used chloroform to remove all fat from chick chow. They noticed that chicks fed only fat-depleted chow developed hemorrhages and started bleeding from tag sites.[82] Dam found that these defects could not be restored by adding purified cholesterol to the diet. It appeared that – together with the cholesterol – a second compound had been extracted from the food, and this compound was called the coagulation vitamin. The new vitamin received the letter K because the initial discoveries were reported in a German journal, in which it was designated as Koagulationsvitamin. Edward Adelbert Doisy of Saint Louis University did much of the research that led to the discovery of the structure and chemical nature of vitamin K.[83] Dam and Doisy shared the 1943 Nobel Prize for medicine for their work on vitamin K (K1 and K2) published in 1939. Several laboratories synthesized the compound(s) in 1939.[84]\nFor several decades, the vitamin K-deficient chick model was the only method of quantifying vitamin K in various foods: the chicks were made vitamin K-deficient and subsequently fed with known amounts of vitamin K-containing food. The extent to which blood coagulation was restored by the diet was taken as a measure for its vitamin K content. 
Three groups of physicians independently found this: Biochemical Institute, University of Copenhagen (Dam and Johannes Glavind), University of Iowa Department of Pathology (Emory Warner, Kenneth Brinkhous, and Harry Pratt Smith), and the Mayo Clinic (Hugh Butt, Albert Snell, and Arnold Osterberg).[85]\nThe first published report of successful treatment with vitamin K of life-threatening hemorrhage in a jaundiced patient with prothrombin deficiency was made in 1938 by Smith, Warner, and Brinkhous.[86]\nThe precise function of vitamin K was not discovered until 1974, when three laboratories (Stenflo et al.,[87] Nelsestuen et al.,[88] and Magnusson et al.[89]) isolated the vitamin K-dependent coagulation factor prothrombin (factor II) from cows that received a high dose of a vitamin K antagonist, warfarin. It was shown that, while warfarin-treated cows had a form of prothrombin that contained 10 glutamate (Glu) amino acid residues near the amino terminus of this protein, the normal (untreated) cows contained 10 unusual residues that were chemically identified as γ-carboxyglutamate (Gla). The extra carboxyl group in Gla made clear that vitamin K plays a role in a carboxylation reaction during which Glu is converted into Gla.\nThe biochemistry of how vitamin K is used to convert Glu to Gla has been elucidated over the past thirty years in academic laboratories throughout the world.\n^ \"Vitamin K Overview\". University of Maryland Medical Center. ^ a b Higdon, Jane (Feb 2008). \"Vitamin K\". Linus Pauling Institute, Oregon State University. Retrieved 12 Apr 2008. ^ Hamidi, M. S.; Gajic-Veljanoski, O.; Cheung, A. M. (2013). \"Vitamin K and bone health\". Journal of Clinical Densitometry (Review). 16 (4): 409–413. doi:10.1016/j.jocd.2013.08.017. PMID 24090644. ^ Cockayne, S.; Adamson, J.; Lanham-New, S.; Shearer, M. J.; Gilbody, S; Torgerson, D. J. (Jun 2006). \"Vitamin K and the prevention of fractures: systematic review and meta-analysis of randomized controlled trials\". Archives of Internal Medicine (Review). 166 (12): 1256–1261. doi:10.1001/archinte.166.12.1256. PMID 16801507. ^ O'Keefe, J. H.; Bergman, N.; Carrera Bastos, P.; Fontes Villalba, M.; Di Nicolantonio, J. J.; Cordain, L. (2016). \"Nutritional strategies for skeletal and cardiovascular health: hard bones, soft arteries, rather than vice versa\". Open Heart (Review). 3 (1): e000325. doi:10.1136/openhrt-2015-000325. PMC 4809188. PMID 27042317. ^ Maresz, K. (Feb 2015). \"Proper Calcium Use: Vitamin K2 as a Promoter of Bone and Cardiovascular Health\". Integrative Medicine (Review). 14 (1): 34–39. PMC 4566462. PMID 26770129. ^ Hartley, L.; Clar, C.; Ghannam, O.; Flowers, N.; Stranges, S.; Rees, K. (Sep 2015). \"Vitamin K for the primary prevention of cardiovascular disease\". The Cochrane Database of Systematic Reviews (Systematic review). 9 (9): CD011148. doi:10.1002/14651858.CD011148.pub2. PMID 26389791. ^ a b Geleijnse, J. M.; Vermeer, C.; Grobbee, D. E.; Schurgers, L. J.; Knapen, M. H.; van der Meer, I. M.; Hofman, A.; Witteman, J. C. (Nov 2004). \"Dietary intake of menaquinone is associated with a reduced risk of coronary heart disease: the Rotterdam Study\". Journal of Nutrition. 134 (11): 3100–3105. PMID 15514282. ^ Ades, T. B., ed. (2009). \"Vitamin K\". American Cancer Society Complete Guide to Complementary and Alternative Cancer Therapies (2nd ed.). American Cancer Society. pp. 558–563. ISBN 978-0-944235-71-3. ^ Lung, D. (Dec 2015). Tarabar, A., ed. \"Rodenticide Toxicity Treatment & Management\". Medscape. WebMD. 
^ Rasmussen, S. E.; Andersen, N. L.; Dragsted, L. O.; Larsen, J. C. (Mar 2006). \"A safe strategy for addition of vitamins and minerals to foods\". European Journal of Nutrition. 45 (3): 123–135. doi:10.1007/s00394-005-0580-9. PMID 16200467. ^ Ushiroyama, T.; Ikeda, A.; Ueki, M (Mar 2002). \"Effect of continuous combined therapy with vitamin K2 and vitamin D3 on bone mineral density and coagulofibrinolysis function in postmenopausal women\". Maturitas. 41 (3): 211–221. doi:10.1016/S0378-5122(01)00275-4. PMID 11886767. ^ Asakura, H.; Myou, S.; Ontachi, Y.; Mizutani, T.; Kato, M.; Saito, M.; Morishita, E.; Yamazaki, M.; Nakao, S. (Dec 2001). \"Vitamin K administration to elderly patients with osteoporosis induces no hemostatic activation, even in those with suspected vitamin K deficiency\". Osteoporosis International. 12 (12): 996–1000. doi:10.1007/s001980170007. PMID 11846334. ^ Ronden, J. E.; Groenen-van Dooren, M. M.; Hornstra, G.; Vermeer, C. (Jul 1997). \"Modulation of arterial thrombosis tendency in rats by vitamin K and its side chains\". Atherosclerosis. 132 (1): 61–67. doi:10.1016/S0021-9150(97)00087-7. PMID 9247360. ^ Ansell, J.; Hirsh, J.; Poller, L.; Bussey, H.; Jacobson, A.; Hylek, E (Sep 2004). \"The pharmacology and management of the vitamin K antagonists: the Seventh ACCP Conference on Antithrombotic and Thrombolytic Therapy\". Chest. 126 (3 Suppl.): 204S–233S. doi:10.1378/chest.126.3_suppl.204S. PMID 15383473. ^ Crowther, M. A.; Douketis, J. D.; Schnurr, T.; Steidl, L.; Mera, V.; Ultori, C.; Venco, A.; Ageno, W. (Aug 2002). \"Oral vitamin K lowers the international normalized ratio more rapidly than subcutaneous vitamin K in the treatment of warfarin-associated coagulopathy. A randomized, controlled trial\". Annals of Internal Medicine. 137 (4): 251–254. doi:10.7326/0003-4819-137-4-200208200-00009. PMID 12186515. ^ a b \"Important Information to Know When You Are Taking: Warfarin (Coumadin) and Vitamin K\" (PDF). National Institute of Health Clinical Center Drug-Nutrient Interaction Task Force. Retrieved 17 Apr 2015. ^ \"Guidelines For Warfarin Reversal With Vitamin K\" (PDF). American Society of Health-System Pharmacists. Retrieved 17 Apr 2015. ^ \"Pradaxa Drug Interactions\". Pradaxapro.com. 19 Mar 2012. Retrieved 21 Apr 2013. ^ Bauersachs, R.; Berkowitz, S. D.; Brenner, B.; Buller, H. R.; Decousus, H.; Gallus, A. S.; Lensing, A. W.; Misselwitz, F.; Prins, M. H.; Raskob, G. E.; Segers, A.; Verhamme, P.; Wells, P.; Agnelli, G.; Bounameaux, H.; Cohen, A.; Davidson, B. L.; Piovella, F.; Schellong, S. (Dec 2010). \"Oral rivaroxaban for symptomatic venous thromboembolism\". New England Journal of Medicine. 363 (26): 2499–2510. doi:10.1056/NEJMoa1007903. PMID 21128814. ^ McGee, W. (1 Feb 2007). \"Vitamin K\". MedlinePlus. Retrieved 2 Apr 2009. ^ Shearer, M. J.; Newman, P. (Oct 2008). \"Metabolism and cell biology of vitamin K\". Thrombosis and Haemostasis. 100 (4): 530–547. doi:10.1160/TH08-03-0147. PMID 18841274. ^ Davidson, R. T.; Foley, A. L.; Engelke, J. A.; Suttie, J. W. (Feb 1998). \"Conversion of dietary phylloquinone to tissue menaquinone-4 in rats is not dependent on gut bacteria\". Journal of Nutrition. 128 (2): 220–223. PMID 9446847. ^ Ronden, J. E.; Drittij-Reijnders, M. J.; Vermeer, C.; Thijssen, H. H. (Jan 1998). \"Intestinal flora is not an intermediate in the phylloquinone–menaquinone-4 conversion in the rat\". Biochimica et Biophysica Acta. 1379 (1): 69–75. doi:10.1016/S0304-4165(97)00089-5. PMID 9468334. ^ Thijssen, H. .H.; Drittij-Reijnders, M. J. (Sep 1994). 
\"Vitamin K distribution in rat tissues: dietary phylloquinone is a source of tissue menaquinone-4\". The British Journal of Nutrition. 72 (3): 415–425. doi:10.1079/BJN19940043. PMID 7947656. ^ Will, B. H.; Usui, Y.; Suttie, J. W. (Dec 1992). \"Comparative metabolism and requirement of vitamin K in chicks and rats\". Journal of Nutrition. 122 (12): 2354–2360. PMID 1453219. ^ Davidson, R. T.; Foley, A. L.; Engelke, J. A.; Suttie, J. W. (Feb 1998). \"Conversion of dietary phylloquinone to tissue menaquinone-4 in rats is not dependent on gut bacteria\". Journal of Nutrition. 128 (2): 220–223. PMID 9446847. ^ Ronden, J. E.; Drittij-Reijnders, M. J.; Vermeer, C.; Thijssen, H. H. (Jan 1998). \"Intestinal flora is not an intermediate in the phylloquinone-menaquinone-4 conversion in the rat\". Biochimica et Biophysica Acta. 1379 (1): 69–75. doi:10.1016/S0304-4165(97)00089-5. PMID 9468334. ^ Al Rajabi, Ala (2011). The Enzymatic Conversion of Phylloquinone to Menaquinone-4 (PhD thesis). Tufts University, Friedman School of Nutrition Science and Policy. ^ Furie, B.; Bouchard, B. A.; Furie, B. C. (Mar 1999). \"Vitamin K-dependent biosynthesis of gamma-carboxyglutamic acid\". Blood. 93 (6): 1798–1808. PMID 10068650. ^ Mann, K. G. (Aug 1999). \"Biochemistry and physiology of blood coagulation\". Thrombosis and Haemostasis. 82 (2): 165–174. PMID 10605701. ^ Price, P. A. (1988). \"Role of vitamin-K-dependent proteins in bone metabolism\". Annual Review of Nutrition. 8: 565–583. doi:10.1146/annurev.nu.08.070188.003025. PMID 3060178. ^ Coutu, D. L.; Wu, J. H.; Monette, A.; Rivard, G. E.; Blostein, M. D.; Galipeau, J (Jun 2008). \"Periostin, a member of a novel family of vitamin K-dependent proteins, is expressed by mesenchymal stromal cells\". Journal of Biological Chemistry. 283 (26): 17991–18001. doi:10.1074/jbc.M708029200. PMID 18450759. ^ Viegas, C. S.; Simes, D. C.; Laizé, V.; Williamson, M. K.; Price, P. A.; Cancela, M. L. (Dec 2008). \"Gla-rich protein (GRP), a new vitamin K-dependent protein identified from sturgeon cartilage and highly conserved in vertebrates\". Journal of Biological Chemistry. 283 (52): 36655–36664. doi:10.1074/jbc.M802761200. PMC 2605998. PMID 18836183. ^ Viegas, C. S.; Cavaco, S.; Neves, P. L.; Ferreira, A.; João, A.; Williamson, M. K.; Price, P. A.; Cancela, M. L.; Simes, D. C. (Dec 2009). \"Gla-rich protein is a novel vitamin K-dependent protein present in serum that accumulates at sites of pathological calcifications\". American Journal of Pathology. 175 (6): 2288–2298. doi:10.2353/ajpath.2009.090474. PMC 2789615. PMID 19893032. ^ Hafizi, S.; Dahlbäck, B. (Dec 2006). \"Gas6 and protein S. Vitamin K-dependent ligands for the Axl receptor tyrosine kinase subfamily\". The FEBS Journal. 273 (23): 5231–5244. doi:10.1111/j.1742-4658.2006.05529.x. PMID 17064312. ^ Kulman, J. D.; Harris, J. E.; Xie, L.; Davie, E. W. (May 2007). \"Proline-rich Gla protein 2 is a cell-surface vitamin K-dependent protein that binds to the transcriptional coactivator Yes-associated protein\". Proceedings of the National Academy of Sciences of the United States of America. 104 (21): 8767–8772. doi:10.1073/pnas.0703195104. PMC 1885577. PMID 17502622. ^ \"Vitamin K\". MedlinePlus. US National Library of Medicine, National Institutes of Health. Sep 2016. Retrieved 26 May 2009. ^ Conly, J; Stein, K. (Dec 1994). \"Reduction of vitamin K2 concentrations in human liver associated with the use of broad spectrum antimicrobials\". Clinical and Investigative Medicine. 17 (6): 531–539. PMID 7895417. 
^ Ferland, G.; Sadowski, J. A.; O'Brien, M. E. (Apr 1993). \"Dietary induced subclinical vitamin K deficiency in normal human subjects\". Journal of Clinical Investigation. 91 (4): 1761–1768. doi:10.1172/JCI116386. PMC 288156. PMID 8473516. ^ Holden, R. M.; Morton, A. R.; Garland, J. S.; Pavlov, A.; Day, A. G.; Booth, S. L. (Apr 2010). \"Vitamins K and D status in stages 3-5 chronic kidney disease\". Clinical Journal of the American Society of Nephrology. 5 (4): 590–597. doi:10.2215/CJN.06420909. PMC 2849681. PMID 20167683. ^ Hodges, S. J.; Pilkington, M. J.; Shearer, M. J.; Bitensky, L.; Chayen, J (Jan 1990). \"Age-related changes in the circulating levels of congeners of vitamin K2, menaquinone-7 and menaquinone-8\". Clinical Science. 78 (1): 63–66. PMID 2153497. ^ \"Vitamin K\". Dietary Reference Intakes for Vitamin A, Vitamin K, Arsenic, Boron, Chromium, Copper, Iodine, Iron, Manganese, Molybdenum, Nickel, Silicon, Vanadium, and Zinc (PDF). National Academy Press. 2001. p. 162–196. ^ Tolerable Upper Intake Levels For Vitamins And Minerals (PDF), European Food Safety Authority, 2006 ^ a b Rhéaume-Bleue, p. 42\n^ \"Important information to know when you are taking: Warfarin (Coumadin) and Vitamin K\" (PDF). National Institutes of Health Clinical Center. ^ \"Nutrition Facts and Information for Parsley, raw\". Nutritiondata.com. Retrieved 21 Apr 2013. ^ \"Nutrition facts, calories in food, labels, nutritional information and analysis\". Nutritiondata.com. 13 Feb 2008. Retrieved 21 Apr 2013. ^ \"Vitamin K\". Vivo.colostate.edu. 2 Jul 1999. Retrieved 21 Apr 2013. ^ \"Vitamin K\". Micronutrient Data Centre. ^ Ikeda, Y.; Iki, M.; Morita, A.; Kajita, E.; Kagamimori, S.; Kagawa, Y.; Yoneshima, H. (May 2006). \"Intake of fermented soybeans, natto, is associated with reduced bone loss in postmenopausal women: Japanese Population-Based Osteoporosis (JPOS) Study\". Journal of Nutrition. 136 (5): 1323–1328. PMID 16614424. ^ Katsuyama, H.; Ideguchi, S.; Fukunaga, M.; Saijoh, K.; Sunami, S. (Jun 2002). \"Usual dietary intake of fermented soybeans (Natto) is associated with bone mineral density in premenopausal women\". Journal of Nutritional Science and Vitaminology. 48 (3): 207–215. doi:10.3177/jnsv.48.207. PMID 12350079. ^ Sano, M.; Fujita, H.; Morita, I.; Uematsu, H.; Murota, S. (Dec 1999). \"Vitamin K2 (menatetrenone) induces iNOS in bovine vascular smooth muscle cells: no relationship between nitric oxide production and gamma-carboxylation\". Journal of Nutritional Science and Vitaminology. 45 (6): 711–723. doi:10.3177/jnsv.45.711. PMID 10737225. ^ Gast, G. C ; de Roos, N. M.; Sluijs, I.; Bots, M. L.; Beulens, J. W.; Geleijnse, J. M.; Witteman, J. C.; Grobbee, D. E.; Peeters, P. H.; van der Schouw, Y. T. (Sep 2009). \"A high menaquinone intake reduces the incidence of coronary heart disease\". Nutrition, Metabolism, and Cardiovascular Diseases. 19 (7): 504–510. doi:10.1016/j.numecd.2008.10.004. PMID 19179058. ^ Oldenburg, J.; Bevans, C. G.; Müller, C. R.; Watzka, M. (2006). \"Vitamin K epoxide reductase complex subunit 1 (VKORC1): the key protein of the vitamin K cycle\". Antioxidants & Redox Signaling. 8 (3–4): 347–353. doi:10.1089/ars.2006.8.347. PMID 16677080. ^ Suttie, J. W. (1985). \"Vitamin K-dependent carboxylase\". Annual Review of Biochemistry. 54: 459–477. doi:10.1146/annurev.bi.54.070185.002331. PMID 3896125. ^ Presnell, S. R.; Stafford, D. W. (Jun 2002). \"The vitamin K-dependent carboxylase\". Thrombosis and Haemostasis. 87 (6): 937–946. PMID 12083499. ^ Stafford, D. W. (Aug 2005). 
\"The vitamin K cycle\". Journal of Thrombosis and Haemostasis. 3 (8): 1873–1878. doi:10.1111/j.1538-7836.2005.01419.x. PMID 16102054. ^ Rhéaume-Bleue, p. 79.\n^ Whitlon, D. S.; Sadowski, J. A.; Suttie, J. W. (Apr 1978). \"Mechanism of coumarin action: significance of vitamin K epoxide reductase inhibition\". Biochemistry. 17 (8): 1371–1377. doi:10.1021/bi00601a003. PMID 646989. ^ Terlau, H.; Olivera, B. M. (Jan 2004). \"Conus venoms: a rich source of novel ion channel-targeted peptides\". Physiological Reviews. 84 (1): 41–68. doi:10.1152/physrev.00020.2003. PMID 14715910. ^ Buczek, O.; Bulaj, G.; Olivera, BM (Dec 2005). \"Conotoxins and the posttranslational modification of secreted gene products\". Cellular and Molecular Life Sciences. 62 (24): 3067–3079. doi:10.1007/s00018-005-5283-0. PMID 16314929. ^ \"Prothrombin Time\". WebMD. ^ Dituri, F.; Buonocore, G.; Pietravalle, A.; Naddeo, F.; Cortesi, M; Pasqualetti, P; Tataranno M. L.; R., Agostino (Sep 2012). \"PIVKA-II plasma levels as markers of subclinical vitamin K deficiency in term infants\". Journal of Maternal, Fetal & Neonatal Medicine. 25 (9): 1660–1663. doi:10.3109/14767058.2012.657273. PMID 22280352. ^ Thane, C. W.; Bates, C. J.; Shearer, M. J.; Unadkat, N; Harrington, D. J.; Paul, A. A.; Prentice, A.; Bolton-Smith, C. (Jun 2002). \"Plasma phylloquinone (vitamin K1) concentration and its relationship to intake in a national sample of British elderly people\". British Journal of Nutrition. 87 (6): 615–622. doi:10.1079/BJNBJN2002582. PMID 12067432. ^ McKeown, N. M.; Jacques, P. F.; Gundberg, C. M.; Peterson, J. W.; Tucker, K. L.; Kiel, D. P.; Wilson, P. W.; Booth, SL (Jun 2002). \"Dietary and nondietary determinants of vitamin K biochemical measures in men and women\" (PDF). Journal of Nutrition. 132 (6): 1329–1334. PMID 12042454. ^ Yamano, M.; Yamanaka, Y.; Yasunaga, K.; Uchida, K. (Sep 1989). \"Effect of vitamin K deficiency on urinary gamma-carboxyglutamic acid excretion in rats\". Nihon Ketsueki Gakkai Zasshi. 52 (6): 1078–1086. PMID 2588957. ^ Matsumoto, T.; Miyakawa, T.; Yamamoto, D. (Mar 2012). \"Effects of vitamin K on the morphometric and material properties of bone in the tibiae of growing rats\". Metabolism. 61 (3): 407–414. doi:10.1016/j.metabol.2011.07.018. PMID 21944271. ^ Je, S.-H.; Joo, N.-S.; Choi, B.-H.; Kim, K.-M.; Kim, B.-T.; Park, S.-B.; Cho, D.-Y.; Kim, K.-N.; Lee, D.-J. (Aug 2011). \"Vitamin K supplement along with vitamin D and calcium reduced serum concentration of undercarboxylated osteocalcin while increasing bone mineral density in Korean postmenopausal women over sixty-years-old\". Journal of Korean Medical Science. 26 (8): 1093–1098. doi:10.3346/jkms.2011.26.8.1093. PMC 3154347. PMID 21860562. ^ Bentley, R.; Meganathan, R. (Sep 1982). \"Biosynthesis of vitamin K (menaquinone) in bacteria\" (PDF). Microbiological Reviews. 46 (3): 241–280. PMC 281544. PMID 6127606. ^ Haddock, B. A.; Jones, C. W. (Mar 1977). \"Bacterial respiration\" (PDF). Bacteriological Reviews. 41 (1): 47–99. PMC 413996. PMID 140652. ^ Shearer, M. J. (Jan 1995). \"Vitamin K\". Lancet. 345 (8944): 229–234. doi:10.1016/S0140-6736(95)90227-9. PMID 7823718. ^ Greer, J. P.; Foerster, J.; Lukens, J. N.; Rodgers, G. M.; Paraskevas, F.; Glader, B. (eds.). Wintrobe's Clinical Hematology (11th ed.). Philadelphia, Pennsylvania: Lippincott, Williams and Wilkens. ^ a b American Academy of Pediatrics Committee on Fetus Newborn. (Jul 2003). \"Controversies concerning vitamin K and the newborn. 
American Academy of Pediatrics Committee on Fetus and Newborn\" (PDF). Pediatrics. 112 (1.1): 191–192. doi:10.1542/peds.112.1.191. PMID 12837888. ^ Logan, S.; Gilbert, R. (1998). \"Vitamin K For Newborn Babies\" (PDF). Department of Health. Retrieved 12 Oct 2014. ^ \"Postnatal care: Routine postnatal care of women and their babies [CG37]\". www.nice.org.uk. NICE. Jul 2006. Retrieved 12 Oct 2014. ^ Parker, L.; Cole, M.; Craft, A. W.; Hey, E. N. (1998). \"Neonatal vitamin K administration and childhood cancer in the north of England: retrospective case-control study\". BMJ (Clinical Research Edition). 316 (7126): 189–193. doi:10.1136/bmj.316.7126.189. PMC 2665412. PMID 9468683. ^ McMillan, D. D. (1997). \"Routine administration of vitamin K to newborns\". Paediatric Child Health. 2 (6): 429–431. ^ \"Newborns get rare disorder after parents refused shots\". Having four cases since February just at Vanderbilt was a little bit concerning to me ^ Dam, C. P. H. (1935). \"The Antihaemorrhagic Vitamin of the Chick: Occurrence And Chemical Nature\". Nature. 135 (3417): 652–653. doi:10.1038/135652b0. ^ Dam, C. P. H. (1941). \"The discovery of vitamin K, its biological functions and therapeutical application\" (PDF). Nobel Prize Laureate Lecture. ^ McAlister, V. C. (2006). \"Control of coagulation: a gift of Canadian agriculture\" (PDF). Clinical and Investigative Medicine. 29 (6): 373–377. ^ MacCorquodale, D. W.; Binkley, S. B.; Thayer, S. A.; Doisy, E. A. (1939). \"On the constitution of Vitamin K1\". Journal of the American Chemical Society. 61 (7): 1928–1929. doi:10.1021/ja01876a510. ^ Fieser, L. F. (1939). \"Synthesis of Vitamin K1\". Journal of the American Chemical Society. 61 (12): 3467–3475. doi:10.1021/ja01267a072. ^ Dam, C. P. H. (12 Dec 1946). \"The discovery of vitamin K, its biological functions and therapeutical application\" (PDF). Nobel Prize lecture. ^ Warner, E. D.; Brinkhous, K. M.; Smith, H. P. (1938). \"Bleeding Tendency of Obstructive Jaundice\". Proceedings of the Society of Experimental Biology and Medicine. 37 (4): 628–630. doi:10.3181/00379727-37-9668P. ^ Stenflo, J; Fernlund, P.; Egan, W.; Roepstorff, P. (Jul 1974). \"Vitamin K dependent modifications of glutamic acid residues in prothrombin\". Proceedings of the National Academy of Sciences of the United States of America. 71 (7): 2730–2733. doi:10.1073/pnas.71.7.2730. PMC 388542. PMID 4528109. ^ Nelsestuen, G. L.; Zytkovicz, T. H.; Howard, J. B. (Oct 1974). \"The mode of action of vitamin K. Identification of gamma-carboxyglutamic acid as a component of prothrombin\" (PDF). Journal of Biological Chemistry. 249 (19): 6347–6350. PMID 4214105. ^ Magnusson, S.; Sottrup-Jensen, L.; Petersen, T. E.; Morris, H. R.; Dell, A. (Aug 1974). \"Primary structure of the vitamin K-dependent part of prothrombin\". FEBS Letters. 44 (2): 189–193. doi:10.1016/0014-5793(74)80723-4. PMID 4472513. Bibliography[edit]\nRhéaume-Bleue, Kate (2012). Vitamin K2 and the Calcium Paradox. John Wiley & Sons, Canada. ISBN 1-118-06572-7. External links[edit]\n\"Vitamin K: Another Reason to Eat Your Greens\". 
v\nTPP / ThDP (B1)\nFMN, FAD (B2)\nNAD+, NADH, NADP+, NADPH (B3)\nCoenzyme A (B5)\nPLP / P5P (B6)\nTHFA / H4FA, DHFA / H2FA, MTHF (B9)\nAdoCbl, MeCbl (B12)\nPhylloquinone (K1), Menaquinone (K2)\nnon-vitamins\nCoenzyme B\nHeme / Haem (A, B, C, O)\nMolybdopterin/Molybdenum cofactor\nTHMPT / H4MPT\nFe2+, Fe3+\nvitamins: see vitamins\nAntihemorrhagics (B02)\n(coagulation)\nPhytomenadione (K1)\nMenadione (K3)\nintrinsic: IX/Nonacog alfa\nVIII/Moroctocog alfa/Turoctocog alfa\nextrinsic: VII/Eptacog alfa\ncommon: X\nII/Thrombin\nI/Fibrinogen\nXIII/Catridecacog\ncombinations: Prothrombin complex concentrate (II, VII, IX, X, protein C and S)\nCarbazochrome\nthrombopoietin receptor agonist (Romiplostim\nEltrombopag)\nTetragalacturonic acid hydroxymethylester\nEpinephrine/Adrenalone\namino acids (Aminocaproic acid\nAminomethylbenzoic acid)\nserpins (Aprotinin\nAlfa1 antitrypsin\nCamostat).", "answers": ["90 μg for women and 120 μg for men."], "length": 7142, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "b3f3be2f0b46c0df08868f749519635186e6e22cf054ca79"} {"input": "What are the benefits of using binary variables in the SLAS formulation?", "context": "Paper Info\n\nTitle: SLAS: Speed and Lane Advisory System for Highway Navigation\nPublish Date: Unkown\nAuthor List: Faizan Tariq, David Isele, John Baras, Sangjae Bae\n\nFigure\n\nFig. 1.Motivational Example.With a slow moving vehicle ahead, the ego vehicle (in blue) may decide to either change lane to the fast moving lane (left) to minimize travel time or adjust its speed without changing lanes to preserve safety but it would be unwise for it to switch to the slow moving lane (right) as that would not benefit travel time or safety.\nFig. 3. Simulation Setup.Scenario Runner sets up the scenario for the CARLA Simulator, which then communicates with the SLAS and the Planning and Control ROS (Robot Operating System) nodes through the ROS bridge node.\nFig. 4. Testing scenario with three lanes: lane 0 (left), lane 1 (center) and lane 2 (right).The expected motion of the ego vehicle, over the course of the simulation, is shown with numbered frames.The right most lane (lane 3) is reserved for merging traffic so it is not utilized in our simulation.\nFig. 5. Left: Travel time comparison.Center: Lane choice (lateral position) comparison.The center lines of lanes 0 (left), 1 (center) and 2 (right) have fixed lateral displacements of 0m, 3.5m and 7m respectively.Right: Headway comparison.With no leading vehicle, the headway is restricted by the visibility range of 50m.\n\nabstract\n\nThis paper proposes a hierarchical autonomous vehicle navigation architecture, composed of a high-level speed and lane advisory system (SLAS) coupled with low-level trajectory generation and trajectory following modules. Specifically, we target a multi-lane highway driving scenario where an autonomous ego vehicle navigates in traffic.\nWe propose a novel receding horizon mixed-integer optimization based method for SLAS with the objective to minimize travel time while accounting for passenger comfort. 
We further incorporate various modifications in the proposed approach to improve the overall computational efficiency and achieve real-time performance.\nWe demonstrate the efficacy of the proposed approach in contrast to the existing methods, when applied in conjunction with state-of-the-art trajectory generation and trajectory following frameworks, in a CARLA simulation environment.\n\nINTRODUCTION\n\nLane changing is considered to be one of the most risky driving behaviors since it is highly contingent upon multimodal trajectory predictions of neighboring vehicles and requires timely decision making . It is further influenced by a number of uncertainty factors such as road conditions, measurement accuracy, and a long tail of behavioral uncertainty of on-road agents.\nHowever, if executed efficiently, lane changing coupled with speed adjustment can yield significant improvement in minimizing overall travel time while ensuring passenger comfort . To elaborate further, consider the scenario presented in Fig. . Based on the predicted motion (shown in a lighter shade) of the neighboring vehicles (shown in orange), the ego vehicle (shown in blue) may decide to either change lane left in an attempt to minimize its travel time or slow down in the current lane to maintain safety.\nHowever, it would be imprudent for the ego vehicle to risk changing lane right and consequently get stuck behind a slow moving vehicle even though there is presently a greater headway. This simple scenario highlights the importance of foresight and long planning-horizon in strategic decision making for autonomous vehicles.\nExisting methods like MOBIL give us the ability to change lanes but behave greedily (prioritizing immediate rewards) oftentimes, which can lead to sub-optimal performance. It was shown in that the lane changing performance can be improved with an A inspired approach, but the formulation was limited to constant speed.\nSuch an approach is unable to assess the benefits of speed adjustment 1 University of Maryland, College Park, MD, USA. Email: {mftariq,baras}@umd.edu. 2 Honda Research Institute, San Jose, CA, USA. Email: {disele,sbae}@honda-ri.com. Research supported by Honda Research Institute, USA. in minimizing overall travel time.\nAs will become apparent in Section IV, it may be necessary at times to sacrifice on shortterm benefits to gain long-term performance improvements. In such a scenario, an approach with speed adjustment coupled with long planning horizon has the foresight to deliver significantly better results. Moreover, the inclusion of speed adjustment in the decision making process inhibits the risk of incurring trajectory infeasibility as the environment conditions may prevent the ego vehicle from traveling at a constant reference speed and the low-level planner may be unable to handle such a discrepancy.\nTherefore, in this work, we propose a low complexity receding horizon optimization based approach that outputs the lane change maneuvers coupled with speed adjustments for long planning horizons (> 15s) while guaranteeing safety. The long horizon strategic decision making gives ego vehicle the ability to proactively anticipate and handle challenging driving situations.\nLiterature review: In the literature, speed and lane changing decisions are generally considered from a motion planner's perspective , which allows for a simultaneous determination of target lanes and waypoints to perform the maneuver. 
The motion planning methods present in the literature can broadly be categorized into sampling-based, learning-based and optimization-based approaches.\nIn regards to the sampling-based approaches, single-query methods, in particular the different variants of RRT, are preferred over multi-query methods, like roadmap-based methods, due to the faster execution time and their ability to incorporate non-holonomic constraints . Even though these methods are able to incorporate safety guarantees by sampling feasible trajectories from a reachable safe set , the overall driving experience is often rather uncomfortable due to the concatenation of individual trajectories.\nMoreover, the asymptotic optimality guarantees availed by arXiv:2303.00861v1 [cs.RO] 1 Mar 2023 these methods do not help with real-world implementation in complex driving scenarios since they tend to have high sample complexity . In terms of the learning-based methods, the preferred approach seems to be the variations of Reinforcement Learning techniques applied in a simulated environment , , , , .\nThese approaches, although seeming to work well in simulation, have concerns regarding real-world implementation due to the large amount of training data that they require, the exploration of unsafe behaviors during training, and a general inability to handle edge cases. They mainly utilize neural networks as function approximators which yields low computational complexity but also results in a lack of explainability and safety guarantees.\nLastly, the optimization-based approaches, especially the derivatives of optimal control methods, are abundant in the literature. In contrast to the potential-field based approaches that yield decent collision avoidance performance but are unable to accommodate vehicle dynamics, the optimal control methods , especially the derivatives of Model Predictive Control (MPC) approach , , , yield excellent collision avoidance performance while accommodating vehicle dynamics.\nHowever, this performance comes at a cost of high computational complexity, arising mainly from the non-convex collision avoidance and the non-linear dynamics constraints. This, in turn, restricts the planning horizon to merely a few seconds. The key requirements for the algorithmic design of an autonomous vehicle include real-time operation, safety guarantees, optimality with respect to some metric(s), and accounting for the behavior variability of on-road agents.\nConsidering these requirements, we propose an optimization-based behavioral planning framework that enables autonomous vehicle maneuvering on multilane highways. While having the benefits of optimizationbased approaches, our method achieves a low computational complexity by employing a binary representation of the decoupled lane indicator dynamics in lieu of lateral dynamics, and utilizing algorithmic modifications to aid numerical computations.\nSpecifically, our method provides: • optimality with respect to travel time and comfort; • safety and feasibility guarantees; • real-time applicability for a long planning horizon; and • modularity in design, which enables the integration of external trajectory prediction modules. 
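As a rough illustration of the binary lane representation credited above with the low computational complexity, the sketch below encodes the target lane as a one-hot set of 0/1 variables per planning step, with a linear restriction that consecutive steps may differ by at most one lane. It is written against Gurobi's Python API (the solver the experiments in Section IV report using); the lane set, horizon length, and variable names are illustrative assumptions, not the paper's exact formulation, which is developed in Section III.

```python
import gurobipy as gp
from gurobipy import GRB

# Illustrative values only -- not the paper's parameters.
H = 40                 # number of planning steps
lanes = [0, 1, 2]      # available lane indices (leftmost lane taken as 0)

m = gp.Model("one_hot_lane_sketch")

# One binary variable per (lane, step): L[i, j] = 1 iff lane i is the
# target lane at planning step j.
L = m.addVars(lanes, range(H), vtype=GRB.BINARY, name="L")

# Exactly one target lane per step (one-hot encoding).
m.addConstrs((L.sum("*", j) == 1 for j in range(H)), name="one_hot")

# Lane changes between consecutive steps are limited to adjacent lanes:
# the encoded lane index can move by at most one lane per step.
for j in range(H - 1):
    cur = gp.quicksum(i * L[i, j] for i in lanes)
    nxt = gp.quicksum(i * L[i, j + 1] for i in lanes)
    m.addConstr(nxt - cur <= 1, name=f"adj_up_{j}")
    m.addConstr(cur - nxt <= 1, name=f"adj_down_{j}")
```

Keeping every lane decision as a 0/1 variable with purely linear coupling is what the reformulation described later in Section III-B.2 relies on to cut solve times relative to a general integer lane indicator.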
The proposed method fills in the research gap by meeting all the key algorithmic requirements while simultaneously gaining the foresight to make strategic decisions that yield long-term performance benefits, as verified in Section IV.\nIn this section, we present the algorithmic pipeline and formalize the road, observation and vehicle dynamics models that will be utilized in the subsequent sections.\n\nAlgorithmic Pipeline\n\nFig. illustrates the algorithmic pipeline of the proposed navigation architecture, in reference to the various existing . Algorithmic pipeline of the proposed navigation architecture. The raw sensory input data is processed by the Perception, and Simultaneous Localization and Mapping (SLAM) modules to place the autonomous vehicle relative to the various environmental entities in a unified frame of reference.\nThis information is then passed on to the navigation stack, composed of the behavioral planning, motion planning, and vehicle control modules. The output of the navigation module is passed down further in terms of actuation commands (brake, throttle and steering) to the actuators. algorithmic modules deployed on an autonomous vehicle.\nThe taxonomy of the various components of the navigation stack (highlighted by the dotted rectangle) is borrowed from . This pipeline essentially improves the pipeline introduced in by adding a speed advisory system. Our main focus is the development of the behavior planning module, highlighted as SLAS in Fig. . SLAS outputs the target lane and reference speed which are utilized by the motion planning module to generate a reference trajectory for the ego vehicle.\nThe vehicle controllers compute the throttle and steering commands to track the trajectory accordingly. For the motion planning module, we adopt the Neural Networks integrated Model Predictive Control (NNMPC) due to its ability to accommodate the behaviors of neighboring vehicles in the trajectory generation process.\nIn our approach, we assume that the perception (of other vehicles) and the localization (of ego vehicle) are known without any uncertainty, for simplicity, but the modular architecture avails us the ability to integrate any perception or SLAM module in the overall framework. Throughout the manuscript, Z will denote the set of integers and R the set of real numbers.\nFor some a, c ∈ Z and a < c, we will write For some e, g ∈ R and e < g, we will write\n\nRoad Model\n\nThe physical road structure is modeled as a continuous multi-lane highway with negligible curvature and unidirectional traffic flow. The lanes on the highway are clearly demarcated and at any given time k, the number of available lanes for the vehicles to travel on is denoted by N l (k) while the road speed limit is denoted by V l .\nTherefore, the set of lanes available for traveling at a given time instant k is denoted by . We work with the Frenet coordinate system where the distance along the road is denoted by the longitudinal displacement (s) and the distance perpendicular to the road is defined by the lateral displacement (d).\nEach lane is assigned a lane indicator variable l. The leftmost lane, with respect to the direction of traffic flow, is assigned a value of l = 0 while each subsequent lane is assigned an increasing integer value for l, as depicted in Fig. .\n\nVehicle Model\n\nSince we aim to have real-time computations for a long planning horizon (> 15s), we model the vehicle dynamics with a linearized decoupled dynamical system. 
For the highway driving scenario, where the road curvature is typically small, it is reasonable to assume a decoupling between the lateral and the longitudinal dynamics , especially for the behavior planning layer.\nTherefore, we utilize a linear constant acceleration model for the longitudinal dynamics and abstract out the lateral dynamics with a lane indicator variable. For the lane change dynamics, we use a moving average filter coupled with a rounding function to model the time required by the ego vehicle to change lanes.\nThis is compactly represented as: where s 0 (k), v 0 (k), l 0 (k) and L(k) denote the ego vehicle's longitudinal displacement, speed, lane indicator and target lane, respectively, at time instant k; the subscript i indexes the vehicles on the road with 0 being reserved for the ego vehicle; T s denotes the discretization time step; and N corresponds to the number of time steps required to change lane.\nThe state (x 0 (k)) and control input (u 0 (k)) to the system at time instant k are defined as: where V m denotes the maximum speed of the ego vehicle.\n\nObservation Model\n\nFor practical considerations, we restrict the ego vehicle's visibility range to the sensory perception limit, denoted by R v . Then, the set of vehicles in ego vehicle's visibility range at time instant k, represented by O(k), is defined as: where s i (k) corresponds to the longitudinal displacement of the observed vehicle.\nRemark 1: For the multi-lane highway driving scenario, occlusion does not play a prominent role so we do not account for it in the existing formulation. However, the proposed framework can easily accommodate occlusion and measurement uncertainties since the receding horizon approach bases its decision on the most up-to-date information available at any given time, as demonstrated in .\nIn this section, we describe the prediction model to generate the predicted future trajectories of observed vehicles and present a discussion on the proposed receding horizon optimization-based behavioral planning module.\n\nTrajectory Prediction\n\nReliable behavior and trajectory prediction of other traffic participants is crucial for safe maneuvering of autonomous vehicles. The algorithm proposed in Section III-B is able to incorporate any generic prediction module available in the literature as long as it can provide a deterministic predicted future trajectory for a given vehicle.\nIn this work, we formulate a low-complexity prediction model that highlights the flexibility and efficiency of our proposed approach. For an observed vehicle i ∈ O(k), the future speed profile is predicted using a piece-wise linear function while the lane profile is assumed to stay constant for the duration of the prediction horizon.\nAt a given time step k, the estimated acceleration (ā k i ) and the estimated speed (v k i ) parameters are obtained through linear regression with mean-squared error on the past o k i > 1 speed observations. Based on the estimated parameters, we predict the future speed and longitudinal displacement as follows:\nHere, H a corresponds to the acceleration horizon while vk i (j) and ŝk i (j) respectively represent the predicted speed and longitudinal displacement for vehicle i, j time steps into the future starting from the current time instant k. Remark 2: Due to the modular nature of the proposed framework, the behavior planning module detailed in Section III-B can work with advanced maneuver-based (e.g.\nMarkov Chain ) and interaction-based (e.g. 
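A minimal sketch of this prediction step, assuming a plain least-squares line fit over the recent speed samples followed by constant-acceleration extrapolation, is given below. The function name, the zero-speed clamp, and the exact handling of the acceleration horizon are illustrative assumptions rather than the paper's equations; the lane profile would simply be held at the observed vehicle's current lane for the whole horizon, per the constant-lane assumption stated in this subsection.

```python
import numpy as np

def predict_observed_vehicle(speed_history, s_now, dt, horizon, accel_horizon):
    """Piece-wise linear speed/position prediction for one observed vehicle.

    speed_history : recent speed samples, oldest first (length > 1)
    s_now         : current longitudinal displacement of the vehicle
    dt            : discretization time step
    horizon       : number of future steps to predict
    accel_horizon : steps over which the fitted acceleration is applied;
                    beyond it the speed is simply held constant
    """
    t = np.arange(len(speed_history)) * dt
    # Least-squares line fit: slope ~ estimated acceleration,
    # value at the newest sample ~ estimated current speed.
    a_hat, b_hat = np.polyfit(t, speed_history, 1)
    v = a_hat * t[-1] + b_hat
    s = s_now

    speeds, positions = [], []
    for j in range(1, horizon + 1):
        if j <= accel_horizon:
            # Clamp at zero so the extrapolation never predicts reversing
            # (a safeguard added here, not stated in the text).
            v = max(0.0, v + a_hat * dt)
        s = s + v * dt
        speeds.append(v)
        positions.append(s)
    return speeds, positions
```

A call such as predict_observed_vehicle(recent_speeds, s_now, dt, horizon, accel_horizon) would then supply the deterministic future trajectory that the behavior planner consumes for each vehicle in the visibility range.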
Social Generative Adversarial Networks ) trajectory prediction modules, allowing for interactive maneuvering behaviors.\n\nSpeed and Lane Advisory System\n\nThe goal of our behavior planning module, Speed and Lane Advisory System or in short, SLAS, is to determine a sequence of speed and lane change commands that would enable the ego vehicle to maximize its speed, thus minimizing the travel time, while accounting for driver comfort and abiding by its dynamical, actuator, and safety limits.\nThe output of this module is a relatively smooth speed and lane change profile which is then passed on to a motion planner. It is necessary to incorporate the dynamical and actuator limits in the behavioral planning module so as not to provide the motion planner with goals that are not reachable, and jeopardize the safety of the overall system as a result.\nIn the subsequent discussion, we provide a formulation of the optimization problem for SLAS; highlight the modifications necessary to improve the computational complexity; and, present safety and feasibility analysis. 1) Optimization Problem with Integer Constraints: SLAS is posed as an optimization problem, with the objective to maximize speed while minimizing frequent lane changes and abrupt changes in speed.\nThe output of SLAS, at time instant k, is the control input u 0 (k + 1), as defined in . The optimization problem is formulated as follows: Objective Function: In the formulation above, the optimization variables are the ego vehicle's speed (v k (j)) and target lane (L k (j)), j step into the future, starting from time instant k.\nHere, H corresponds to the planning horizon. The scalarization parameters γ 1 , γ 2 and γ 3 in the objective function account for a relative tradeoff between maximizing speed, minimizing lane changes and minimizing abrupt changes in speed respectively. Increasing γ 1 yields a more aggressive behavior with the priority placed on maximizing speed while γ 2 and γ 3 combine to place an emphasis on maximizing passenger comfort by reducing lane and speed changes respectively.\nDynamical Constraints: These constraints are put in place to ensure the dynamical feasibility of the solution. The constraints ( ), and serve to initialize the longitudinal displacement, speed and target lane respectively for the optimizer, based on the values observed at time instant k. The constraints and ( ) bound the ego vehicle's speed by the speed limit and the acceleration limits of the vehicle respectively.\nThe ego vehicle's speed is then used to calculate the projected longitudinal displacement in . The target lane values at any planning step (j) are restricted to the set of reachable values by ( ), ( ) and . Here, restricts the target lane to the set of available lanes (L(k)), ensures that the lane change, if needed, is made to the adjacent lane only and (17) models the time steps (N ) required for a lane change.\nThe flooring function can easily be transformed into a couple of linear constraints by the introduction of an auxiliary integer variable, as shown in the Appendix. Finally, l k (j) is merely the internal representation of the lane the ego vehicle is projected to travel on at planning step j. 
Safety Constraint: The safety constraint ensures that the ego vehicle maintains a minimum safe distance (L s i (j)) to the nearest vehicle i, in its projected lane of travel (l k (j)), at planning instant j.\nWe borrow the definition of this safe distance from , where the authors provide a formalization, based on the clause from Vienna Convention on Road Traffic that states that \"A vehicle [...] shall keep at a sufficient distance [...] to avoid collision if the vehicle in front should suddenly slow down or stop.\"\nFurthermore, the absolute value constraint can be decomposed into linear constraints by the application of big-M method and the introduction of an auxiliary variable, as shown in the Appendix. Remark 3: The proposed formulation can accommodate arbitrary number of lanes at any given time instant k. This means that if at any given time, the number of available lanes for traveling either increases or decreases, the proposed formulation will still continue to hold.\nThis is an important consideration since many a times on highways, some lanes are blocked due to various unanticipated situations such as road accidents, roadwork, narrowing of road etc. 2) Computational Complexity Reduction: This section details the optimization problem reformulation with binary variables, optimization warm start technique and lazy constraint implementation, all of which combine to improve the computational complexity of our SLAS module.\nBinary Variables: The proposed formulation in Section III-B has relatively high computation complexity (computation time of ∼ 2s in the worst case scenario -slow moving traffic blocking all the lanes) due to the integer decision variables yielding a mixed-integer optimization problem . To circumvent the computational overload, we reformulate the problem with binary variables that replace the integer variables, as follows:\nwhere the Lk (i, j) represents the modified target lane variable, indexed by the lane (i) as well as the planning step (j) and Lk (a, b) = 1 represents the choice of lane a ∈ L as the target lane at planning step b ∈ Z . Then, some of the constraints from the SLAS formulation in Section III-B are modified as follows:\nHere, initializes the target lane, (21) restricts the target lane at any planning step to the set of available lanes, restricts the lane change between consecutive planning steps to the adjacent lanes, and ( ) represents the augmented safety constraint. The implication ( =⇒ ) in ( ) can easily be transformed into a linear constraint (see Appendix).\nThe augmented minimum safety distance ( Ls i (j)) incorporates the time required to execute the lane change maneuver (N ) from ( ) into the following unified safety constraint: where L l is the width of the lanes (see Fig. ), δ(k) is the signed lateral deviation of the ego vehicle from the previous target lane's boundary at time step k, and γ d (δ k (j)) is the dynamic cost of deviation from the previous target lane (L(k − 1)).\nMoreover, in the cost function , we take L k (0) = L(k − 1). These costs are introduced to prevent the swerving (canceling of lane switch before completion) behavior, unless absolutely necessary (for safety purposes). 
Remark 4: Since the ego vehicle is considered to have changed lane once it crosses a lane boundary, the deviation δ k (j) is considered from the lane boundary instead of the center of the target lane to maintain the continuity of γ d (δ k (j)) with respect to the lateral displacement of the ego vehicle.\nSpecifically, δ k (j) > 0 if the ego vehicle has crossed the previous target lane boundary and 0 otherwise. This is an important consideration since a discontinuity in γ d (δ k (j)), upon completion of lane change, may lead to infeasibility. Remark 5: The swerving behavior is suppressed but not completely eliminated with a hard constraint since such a behavior is necessary at times to react to the environment's unpredictability.\nThis reactive strategy, which is a distinctive feature of our approach, avails the algorithm the ability to proactively 'change its mind' in case something unanticipated happens in the environment that can jeopardize safety. Optimization Warm Start: To aid the optimizer in finding an initially feasible solution, we provide the solution from the previous time step as a reference.\nFormally, This doesn't imply that the solution from time step k − 1 will hold exactly at time step k, owing to the unmodeled disturbances, but providing this reference aids the optimizer in finding an initially feasible solution in the vicinity of the reference solution. This observation is rooted in the premise that the solution for the long planning horizon is not expected to change significantly between time steps, given the sampling time is not too large, and the predicted behavior of on-road agents does not alter significantly.\nIt is also worth pointing out that the priority here is quickly finding a feasible solution that obeys the safety constraints and actuator limits, and recursively improving it rather than excessively iterating to reach at a global optimum. In our experiments, it was observed that a suboptimal solution was qualitatively not significantly different from the optimal one.\nTherefore, we utilize the cutting planes method for optimization , which first looks for a feasible solution, using our provided reference, and then recursively updates it until either the globally minimal solution is found or the time limit is reached. Lazy Constraints: To further enhance the computational efficiency, we introduce a lazy implementation of the lane changing constraints .\nIt was observed in our experiments that a feasible solution without the lane changing constraints ( ) can be found several order of magnitude (∼ 10×) quicker than if we include these constraints so we decided to have a lazy implementation for them. With a lazy implementation , the solver finds a set of feasible solutions without the inclusion of these constraints and then determines the feasibility of those solutions from the reduced problem with respect to the lazy constraints.\n3) Feasibility: By an argument similar to the one presented in , it is a relatively straightforward proof for recursive feasibility of the problem, i.e. the optimization problem will continue to stay feasible, if initially feasible, with the trivial solution being matching the speed of the leading vehicle and not changing lanes.\nIn this section, we detail our experimental setup, demonstrate the performance of SLAS, and report a qualitative as well as a quantitative comparative analysis. 
The baselines in our comparative analysis are set to: Extended-Astar (EA ) , MOBIL , and no lane-change model (No-Change).\n\nExperimental Setup\n\nThe implementation setup, depicted in Fig. , is composed of the CARLA simulator (Version 0.9.11) , SLAS module (Section III-B), and the planner and controller module . To solve the optimization problem for SLAS, we use Gurobi Optimizer (Version 9.1.1) . The simulations are performed on a computer equipped with an Intel Xeon(R) CPU E5-2643 v4 @ 3.40GHz × 12 and NVIDIA Titan XP, running Ubuntu 20.04 LTS.\nOn average, the time required for each optimization step is ∼ 0.096s, while the maximum time limit for the optimizer is set to 0.2s, indicating the strong potential for real-time applicability.\n\nCase Study\n\nFigure illustrates the test case scenario for our comparative analysis. The scenario is composed of a highway segment with four lanes and the rightmost lane reserved for merging vehicles. The ego vehicle is initialized to follow a slow moving vehicle in lane 1 and has even slower moving traffic to its right in lane 2. Thus, the only option for it, in order to minimize travel time, is to switch left to lane 0 with faster moving traffic and greater headway.\nOnce it moves to lane 0, and overtakes the slow moving vehicle in lane 1, it has two options: either to keep traveling in lane 0 without making any lane change decisions until getting close to the lead vehicle or proactively exploiting the gap in lane 2 to switch to lane 3 in anticipation of traffic buildup in lanes 1 and 2. A strategic decision maker with foresight will choose to take the later option and make the decision proactively for a greater overall benefit.\nThe evaluation metrics for the comparative analysis include: travel time, lateral displacement, headway and distance to the closest vehicle. As for the simulation parameters, the simulation step size is set to 0.05s (simulation frequency of 20Hz); the velocities of vehicles in lanes 0, 1 and 2 are set to 8, 5 and 2 m/s respectively while the speed limit V l is set to 15m/s; the length of the highway patch is set to 350m while the width between the lanes is set to 3.5m; and the sensor visibility range is set to R v = 50m.\nThe parameters for SLAS are set as follows: T s = 0.4s, H = 40, N = 3, A min = −5m/s 2 , A max = 3.5m/s 2 , γ 1 = 1, γ 2 = 0.1 and γ 3 = 0.01. The values of these parameters can be tuned to yield an aggressive or defensive behavior of the algorithm. 1) Travel Time: The left plot in Fig. depicts the travel time as a function of longitudinal displacement for the four algorithms.\nAs seen in the plot, our method (SLAS) maintains a lower overall travel time as compared to the other methods. Quantitatively speaking, SLAS outperforms EA , MOBIL and No-change methods by 12.72%, 23.52% and 54.34% respectively in terms of the time required to complete the simulation scenario. This shows that our method's foresight compensates for its apparent conservativeness arising from the need to preserve passenger comfort.\n2) Lateral Displacement: To identify the differences in lane changing behaviors between the four approaches, the relationship between lateral and longitudinal displacements over the course of the simulation is highlighted in the center plot of Fig. . 
In the plot, the lateral displacement of 0 corresponds to the center of lane 0 while the center of each following lane is 3.5m away.\nComparing the performance of the four algorithms, we see SLAS and EA showing relatively similar performances, resulting from proactive decision making. In contrast, since MOBIL only assesses the advantage of switching to the adjacent lanes, it is unable to see the benefit of proactively switching to lane 2. This explains why EA and SLAS start outperforming MOBIL in terms of travel time (left plot) at around the 130 [m] mark for longitudinal displacement.\nAs for a direct comparison between SLAS and EA , the benefits of having speed advisory system become apparent in this center plot. Due to speed control, SLAS is able to constantly maintain a greater headway (right plot) without having to brake significantly upon getting too close to the lead vehicle. This results in a smooth lateral displacement profile which allows the vehicle to change lanes with minimal jerk (quantitative analysis to follow in Section IV-C) and deliver better overall timing performance (left plot).\n3) Headway: The right plot in Fig. shows the headway maintained by the ego vehicle over the course of the simulation. In accordance with our prior discussion, MOBIL cruises behind the front vehicle, maintaining a relatively low headway until a sufficient space in the adjacent lane is found to perform the lane-change maneuver.\nOn the other hand, EA and SLAS show a comparable headway trajectory, however, SLAS maintains a greater headway throughout and achieves the maximum headway prior to EA . Quantitatively, SLAS maintains on average 9.43%, 36.57% and 113.17% more headway than the EA , MOBIL and No-change approaches respectively.\nThis strong performance by SLAS can be attributed to its incorporation of safety guarantees coupled with its consideration for passenger comfort. 4) Distance to closest vehicle: Finally, we compare the distance that ego vehicle maintains from the closest vehicle throughout the simulation. On average, SLAS maintains 9.28%, 32.01%, and 22.84% more distance in comparison to EA , MOBIL and No-change approaches respectively.\nThese numbers are a testament to the strength of our approach resulting from consideration of long planning horizon coupled with speed control.\n\nMonte Carlo Simulations\n\nTo demonstrate the long-term performance of the three approaches (SLAS, EA and MOBIL), we run a series of Monte Carlo simulations on scenarios with randomized initial positions (within a range of 8m) and velocities (within ranges of 8, 5 and 2 m/s assigned to each of the three lanes randomly) of traffic participants.\nThe result from 50 simulations is presented in Table . In this table, the columns represent the different evaluation metrics, the rows identify the three algorithms, and the values highlighted in green represent the best result with respect to each evaluation metric. The evaluation metrics, going from left to right in the table, are completion time (s), brake (R [−1,0] ), brake jerk (R [−1,0] ), throttle (R [0,1] ), throttle jerk (R [0,1] ), angular acceleration ( • /s 2 ) and angular jerk ( • /s 3 ).\nApart from completion time, the remaining metrics, based on the commands passed to the vehicular actuators (Fig. ), are used to model passenger comfort. 
In terms of average performance, SLAS greatly outperforms the other methods when it comes to passenger comfort, since it explicitly accounts for comfort in the formulation.\nHowever, it does so at the cost of slightly reduced performance with regard to travel time when compared to EA, since SLAS tries to strike a balance between minimizing travel time and maximizing passenger comfort. SLAS also secures the lowest standard deviation for each of the evaluation metrics when compared to the other methods, which points to the consistency of its long-term performance.\n\nCONCLUSION\n\nWe propose a novel behavior planning module for the multi-lane highway maneuvering scenario that outputs strategic target lane and reference speed commands, and we incorporate it into a state-of-the-art motion planning and control framework. We formulate the approach as a receding-horizon mixed-integer optimization with the goal of minimizing travel time while accounting for passenger comfort over a long planning horizon.\nIn order to reduce the computational overhead, we reformulate the problem by replacing integer variables with binary ones and further incorporate various modifications to aid numerical computations. We also carry out a detailed comparative analysis on the CARLA simulator to demonstrate the performance of our approach.\nOur future work includes incorporating various delays and uncertainty measures in the perception, localization and prediction modules to evaluate the robustness properties of our approach.\n\nFlooring Constraint\n\nFor y ∈ Z and x ∈ R, the constraint y = ⌊x⌋ can be represented by the following linear constraints: y ≤ x and y + 1 ≥ x + ε, where ε > 0 accounts for the feasibility tolerance and for numerical errors (chosen to be 0.1 in our implementations), and M ≫ 0 denotes the big-M constant used in the representation below.\n\nAbsolute Value Constraint\n\nFor ∆s, L_s ∈ R, the constraint |∆s| − L_s ≥ 0 can be represented as the disjunction ∆s ≥ L_s ∨ ∆s ≤ −L_s. This can further be generalized, as done in our implementation, to have different forward and rear safety margins: ∆s ≥ L_s^f ∨ ∆s ≤ −L_s^r, where L_s^f and L_s^r are the forward and rear safety margins respectively. This can be represented with the following linear constraints:\n∆s − L_s^f + Mc ≥ 0 and −∆s − L_s^r + M(1 − c) ≥ 0, where M ≫ 0 (big-M) and c ∈ {0, 1} is responsible for making a choice between the two constraints.", "answers": ["Reduced computational complexity."], "length": 5466, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "0e9a32e0989483442381f0195e620fea8c5538e42f130bdb"} {"input": "How can you level up in the early levels?", "context": "What is this game all about? (short version) Do you like the board game RISK®? Then chances are you'll like QONQR. Your job is to join a faction and help your faction (team) take over the world. QONQR is an artificial intelligence that appeared on the Internet. We don't know where it came from, or its purpose, but we know it is powerful. The Legion want to destroy QONQR because they believe it will enslave and exterminate humanity. The Swarm believe QONQR will advance the human race, and we should protect it. The Faceless don't care, and want to steal the technology for their own uses. Pick a side, recruit your friends, and help your faction capture the towns and cities in which you live, work, and play. What is this game all about? (long version) Right now an invisible war is raging all around you. At stake: the Past, Present, and Future. A rogue Artificial Intelligence has been detected infiltrating the world's networked infrastructure.
Initial hacking of the source code has revealed incredible new technology. It is not certain whether this AI seeks to enhance or destroy humanity. It is only certain that it is here, and that is has a name: QONQR.Those who first detected the presence of QONQR on the global networks have argued fiercely over its intentions. They have split into viciously rival Factions, each co-opting the known QONQR technologies for their own ends. Even now the Factions battle over the entire globe seeking to gather resources and prepare for the coming power struggle. Whether you accept it or not, the war is here. Your survival, prosperity, and even glory depend on the choices you make and the skill you demonstrate from this point forward. You will be asked to join one of three Factions. The choice should not be made lightly. Your Faction Alignment will define your place in the war over QONQR. THE LEGION unite under the shared goals of destroying QONQR and saving humanity by crushing the nascent AI before it can mature. They are led by AGENT SUNDAY, a former commander of the NSA's Turing Task Force which has been valiantly stamping out dangerous AIs for years. THE SWARM are convinced that QONQR promises an era of unprecedented technological advancement and human prosperity. Nanobot weaponry expert KIMYO NAGUMO leads this faction in the battle to defend QONQR and assemble its future tech, accelerating humanity's path into the future. THE FACELESS are a loosely organized faction of militant hackers who want QONQR's technology for their own ends, but want to prevent the unavoidable nightmare of human slavery they believe it portends. When they choose to communicate, they do so through an anonymous vigilante who goes by the name PROMETHEUS. . What do I do first? Create a base, then launch. Launch nanobots until your fingers hurt. How do I create a base? On iOS there is a Base icon in the menu bar. For Windows Phone, you will find a bases button on the home screen. These take you to the Base Screen where you can see how many bases you have available to you. Once you create a base, be sure harvest often by returning to the list of your bases. You can see how full each base is by checking the fill percentage icon. Bases stop collecting once they are full. What is the point of creating bases and harvesting resources? You need bases to earn money. Bases collect rare elements over time which you can then harvest for your faction in exchange for qredits. Qredits can be used to purchase ordinance (like nanomissiles) and upgrades which will help you capture and hold battle zones more easily. What do you mean, “launch nanobots?” Nanobots are the invisible soldiers generated by your device (which has been transformed into “Scope” by advanced QONQR technology). Nanobots fight for control of the battle zones around you.. From the home screen click “Current Zone”. Once you have selected your zone, you will be able to deploy nanobots there.. Initially, you are just a level 1 recruit. You are only going to get a small attack formation with a limited range.. Other solders have to practice with rifles before they get tanks; you are no different. Once you prove your mettle, you’ll get access to bigger weapons. Soon you’ll be lobbing missiles hundreds of miles. How do I capture a zone? If you play for the Legion and are launching nanobots into a zone controlled by the Swarm, you will capture the zone for the Legion as soon as you have destroyed enough of the enemy that your nonbots outnumber theirs. 
If you are the person who causes the zone to change control to the Legion, you will be listed as the Capturer of the Zone. The Person with the most nanobots in the zone is the zone leader. What is my current zone? How do you know? Your current zone is determined by your proximity to the nearest zone center. So, while you might be inside the governmental boundaries of a city, your scope (phone) could tell you your current zone is a different city, if that city’s center is closer. So I just keep deploying? Yes, in the early levels of the game, just keep deploying and harvesting your bases. You will earn XP (experience points) proving your loyalty to your Faction.\tYou will level up quickly and soon have access to many more options. So all I can do is just attack? You only get assault bots to start. They are the most basic type of nanobot formation. As you level up you will get many more options, including bots that are good at defense, energy attacks, long-range deployments, formations that will buff the shields for all your faction-mates in the zone, and many more. My faction already controls this zone should I still attack? Yes, assault bots won’t attack your friends. You will increase the bot counts for that zone, which will deter opponents. Attack bots can defend your zone; they just aren’t very good at it. As you level up you will unlock defensive formations that are better for deploying if your faction already holds the zone. I’ll never knock out my enemy at this rate! In the first few levels your impact might feel minimal, but every deployment helps you gain experience. It won’t take long to level up if you keep at it. If you are unlucky enough to be in a zone where someone has already built up huge defenses, you may be in for a long fight. But remember, your scope moves with you. Go explore the world and find softer targets. Once you level up, it won’t seem like a toy gun against a battleship. You’ll get your big weapons once you prove yourself. We have already seen operatives brag about taking down 1,000,000 nanobots in just a couple days. Nothing is impossible. How do I attack a different zone? As you level up you will unlock more and stronger formations. Those new formations will have range. While at the early levels you can only attack nearby zones, as you level up your attacks will go 10-20 miles (roughly 15-30 Km) and you will eventually gain access to nanomissiles that can go hundreds of miles. What should I buy in the depot first? The smallest thing to buy is a refresh. We give you some of these as you level up so you can try them out. Refreshes will refill (or partially fill depending on the size of your tank) your bot bar or your energy bar. But after that, it depends on your goals. There is much to choose from. Do you want to be able to deploy more nanobots on every launch? Do you want to boost your offensive or defensive bots? Do you want to be able to launch missiles into towns far away? All of these things are possible. Look through the depot and see what you like. Most of the time you will need to buy an upgrade before you can buy the ordinance. For example, missiles are fairly inexpensive, but you need to buy the MX Rack Missile Launcher before you can launch them. Buy the upgrade first. What is the difference between qredits and cubes in the depot? A qredit (aka: credit, which looks like the child of a Q and € ) is the type of currency you earn in the game by harvesting your bases. 
Cubes (aka: power cubes) are purchased with real money in the Bank section of the Depot. We want everyone to be able to do everything in the game for free, by earning qredits, but for those who want to move a bit faster, you can purchase cubes to speed things along. Purchasing cubes is how QONQR makes money. We very much appreciate your support. Every purchase you make helps us to keep making improvements in the game. Future enhancements will enable you to earn cubes in game. Why can’t I create another base? Additional bases become available as you level up. At the start, you will get a new base every 5 levels. If you don’t have any more bases available to build, you will need to level up. If you have a base available, but aren’t allowed to use it, it is because you already have a base in that zone. You can only have one base in a zone. Get yourself to another zone, then create your base there. My bases are collecting credits at different rates. Your bases collect resources faster if your faction controls the zone. Do your best to either put your bases in zones you can control by yourself, or find zones with strong players in the same faction and put your bases there to maximize your credit collection. The game says I’m in a town that doesn’t exist. QONQR tracks almost 3 million battle zones of varying strategic value in 250 countries. Sometimes those zones include locations that haven’t existed for over 100 years. That’s pretty cool if you ask us. If you find a zone that looks like a duplicate or is just plain wrong, however, let us know on the forums under “Zone Corrections”. How do I move my current position on the map? You might need to take a car, bus, plane or train depending on which zone you are trying to get to, but if you want to move on the map, you need to move in the real world. QONQR is a location-based game, which means you play where you are. However we don't want to make you move to play, we want you to plan when you move. QONQR goes with you as you move through the daily activities in your life. Where do I find the Strategy Guide? Here: http://community.qonqr.com/index.php?/topic/1191-the-official-qonqr-strategy-guide/ How do I win? That is for you to decide. There is still much to discover. We don’t even know if QONQR is good or evil. Why is it here? What is its purpose? Help your faction further its goals and unlock all the secrets of QONQR!\nCongratulations to the Swarm on their overwhelming victory in Atlantis in May 2015 -- taking and retaining all Atlantis zones from beginning to end is hard to argue with -- most convincing -- well done Swarm!\nRumor has it that the Duggers have 20 scopes. Some families really do have a wife and ten kids... Ha Anyways.... Multi scoping is generally not encouraged, usually if you mention that you do it on the forums or in groupme your not going to be a very liked person, even within your own faction. What tends to happen is if you start multiscoping then your enemies start doing it, then you get into a war where ever person has 6 phones and nobody is happy. I've seen or heard about situations like this in a lot of different locations.\nGreat job, faceless! It was actually an exciting Atlantis and I prefer it that way. Way to bring your A game.\nYou know what we need in this game? Nano-Nukes!!\nAttention (insert faction here), We, the (insert faction here) are tired of the way you constantly (circle one).. A. Cube rage us B. Bully us with numbers C. Talk mean to us It hurts our feelings because (circle one).. A. We don't cube B. 
We don't have as many allies C. We have no sense of humor and/or no backbone Please refrain from participating in the above selected actions for above selected reasons so as the game is enjoyable for (insert faction here). Regards, (insert name and faction here) There we go...this should streamline the entire complaint process of the forums. Copy and reuse as needed. You're welcome.\nSilver, you mentioned in a blog that there are levels beyond 150, is it safe to say there is no level limit anymore or is that for us to discover? Another question, I'm not sure if I'm the only one noticing this but it seems like bot decay doesn't work against Zone Assault bots. I've hit two players I know have been inactive for well over 6 months but my attack had the same effect it did yesterday before the update. However I did notice against Deflection bots I am getting 2x the kill power. Also in response to decontaminatoR. Paid gaming isn't just something for adults. When I was 13 the best options for handheld gaming was GameBoy or GameGear and the games at the time cost anywhere between $29-$39 dollars. I had a paper route to pay for my games. So, no offense, but you can afford $0.99 for a game. You don't even need a paper route, just check under your couch cushions and I'm sure you'll find a few quarters.\nI want to start by thanking Faceless. This round of Atlantis, Legion and Faceless were doubled in sized and probably spending by Swarm. I contacted some great players from the other side and put together a nonaggression pact, this pact was one of the most impressive agreements I've seen in the more than two years of playing. Hundreds of people worldwide stuck to this agreement and put past feelings behind us. It was awesome to see both sides stick so closely to each other in fighting against Swarm. I want to really thank everyone who showed honor by standing behind me and the faceless command when we suggested that, the people who really gave it a chance and then most importantly, to all the players who honored it. Faceless, thank you very much! We stood no chance of winning without you! The battle came down to literally one launch in one of three zones. I'd say that with that being said everyone fought incredibly hard, so I want to give Swarm the respect they deserve. You guys really show out and play to win. Good game, 2 against 1 is not easy, no matter how large your crew is. And Legion, we had many late, late nights, many very long days. You guys killed it this month! We didn't take home a trophy, but I would say we all have something to be proud of! The other leaders who helped me coordinate everything were awesome! So many great people kept everything moving forward 24 hours a day for the whole week. Thank you to everyone who gave it your all for the whole week even when we saw that Swarm had 4x our bots at the end of just one day. Many people would have given up, but we held in and **** near won! I've won Atlantis battles with Faceless and with Legion but I will say, nothing was as fun or incredible as this round! You guys are fantastic and I can't wait to do it all again next month! Hopefully we have more Legion and Faceless show up for the next round, but I know that even if we don't, we will figure out a way, just as warriors do. PEW! PEW! PEW!\nI must start with an announcement: Camels are not the only animal in the middle east. You have overused it already. It's saddens me that anyone would still find it funny. Get a bit of originality. 
For anyone not wanting to read all this drivel, skip to the end (hint: Bold stuff). At what point have I bullied anyone? When was the last aggressive or threatening message you or any of your members received from me? Never. Legion and Swarm (in the UK) have teamed up because Faceless are dominating in London. That makes sense. It's a three sided fight and if one force becomes too powerful, the other two can join forces to try and take them down. But: We are dominating in the London area while your alliance is attacking players outside of that area. Then you expect me to sit back and do nothing. You are specifically using me as an excuse for why you have teamed up but then you're attacking players outside of my reach, sometimes with Europe involved. If that is not reason enough to attack you then I'm not sure what is. You lot cube, multiscope and have multi faction accounts and still complain. I don't complain about anything you do. Play the way you want to, ill do the same. I am never rude to anyone, always polite no matter what the message is, never brag about what I do, can do or have done, never threaten anyone, try and keep in touch with the few swarm or legion who are civilised to me, listen to any message form any side and if I can help in any way, I try my best to. Formed agreements with enemy (that I didn't need to) that ended up slapping me in the face. I have even in the past taken the time to find out who some of the younger players are so I know to avoid them. When I'm in a different country I try and find who the bullies are. Those players who threaten and brag and laugh at others. Those are my targets (if they exist in those areas). I don't see how anyone can think I'm a bully. Is it your money? Its none of your business how I spend it. Either way your numbers are way off. 700 dollars a day? Did you just decide to blindly strike the number pad to come up with that figure? Best part of one of your posts is saying that the devs should worry about my welfare. What concerns you about me spending my money? Maybe I'll self harm due to overspending? I wont have enough to buy food because I bought too many cubes? I don't understand what they are supposed to worry about. I have issues? At what point did you deduce this oh mighty psychologist? Wait a minute are you my bank account manager? What do you know about how much I can or cant afford? Random guy spends money on something he enjoys. The end. Told you last time to give up on all the drama but I guess thank you for the concern. You play to make me spend more? And? What do you think that accomplishes? In fact how do you even make me spend? You don't even attack me. You sulk when I'm in the UK and when I fly out and you find out, you call in Europe to help you take an zone or two. As they say, whatever floats your boat. As for Atlantis: Please try again. I quit Atlantis when we were winning. I have on occasion involved myself after I was asked to help out but on the whole I give it a miss. The last time (months ago) was the last 10 minutes of Atlantis and Legion fought hard. We lost. I was not the only Faceless player to quit Atlantis. Quite a few of us thought it lasted too long and had too many zones to fight over. I hear the duration has been reduced. You can't honestly believe I changed my sleeping pattern for the game. I was in California for a month. I was jet lagged. My sleeping pattern was a bit off when I got back. You're not even accurate about when I deploy. Pay attention. 
You should know this by now: I play as and when I want, sometimes every 20 minutes and sometimes I do long stretches and sometimes I'm busy and don't deploy for hours. You wont always win, and nor will I. Try and get satisfaction form at least trying to win or putting up a good fight. About the limit on cubing. Please. I suggested that last year. Twice. Unfortunately multiscoping is allowed and so prevalent that it kind of ruins the idea. These are the facts: YOU and YOUR side threatened Faceless members who support London. One of the members threatened is actually London based. What do you expect him to do while his city is under attacked? A few of your members don't know when to keep quiet. They sent messages to us threatening specific players and telling them their zones will be dropped just because they have helped London. Basically if they help London then they get their zones wiped. And you have the cheek to call me a bully? We stood up to your members specifically because they were trying to bully. That is the reason we went strong months ago and took those big zones. How is this me bullying you? This is me answering your threats. I didn't attack those zones \"just because i can\", it's just because I should. Because of your threats. You can blame your members for the loss of those 2 or 3 big zones. These are the basics: You attack one of our zones. We look at the list of attackers and pick one of the players who deployed the most, find a zone of his or hers and attack it. We don't need to justify our attacks with \"because there are Faceless players within 30 miles\". What, every time we attack a zone we need to send some letter explaining why? You attack us or we attack you, for any reason. That's the game. When some of you were cheating and you could not win you complained, when you cube and lose you complain, when you invite all of Europe to attack and win you still complain. I just think you like to make a fuss. Last words (for now): You think I attack zones as a means of getting attention? I get enough of it form your threads. The only attention seeker here is you with your victim attitude and pity us posts.\nI've had some very angry emails today from a couple users who are upset their rival achieved the ability to switch factions freely having played for one of the factions for only 1 hour. I've been accused of doing favors, changing the rules, and various other backhanded deals. It appears it comes down reading the rules. You do not have to play for every faction for 60 days in order to earn free switching status. Here is a common scenario many players have used to achieve free switching status and avoiding playing for one faction they despise. 1. Start with Swarm 2. Switch to Legion (earn Spy) play for 60 days 3. Switch back to Swarm (earn Double Agent) play for 60 days 4. Switch to Faceless (earn Mercenary) immediately switch back to Swarm or Legion Below is the complete text on the switch nanobots screen. It is the same text that has been there from Day 1 with the exception of the level 100 rules that went into place earlier this year, where you could switch as much as you want before level 100 , but those switches don't count towards the medals. This text has been part of this description since Jan 15, 2013. \"Players that earn all three spy awards, may once again switch factions at any time as they could during the training levels 1 through 99.\" Prior to Jan 15, the text said this. 
\"Players that earn all three awards may be given the opportunity to switch factions more quickly in future updates (contact support for more information)\" I pulled that right out of source control, which includes the entire change history. Here is the complete text from this page. http://portal.qonqr....r/SwitchFaction WARNING: Defection has consequences! Self-destruct will be initiated on all your nanobots. Without the self-destruct, you would be required to battle against your former self to regain control of your zones. You will lose the capture and leadership of any zones you currently hold. Lifetime captures will be unaffected. If you are still completing the training levels and have not reached Level 100, you may switch as often as you like to find the faction that suits you best. Once you have reached Level 100 switching factions has rewards, but also has additional consequences beyond the self-destruct of all your nanobots. Defection will usually result in a demotion in rank. This is accomplished through awards with negative rank points. Those awards are: Spy - First switch to an opposing faction (-20 points) Double Agent - Return to a faction from which you had previously defected (-20 points) Mercenary - Become a member of all three factions (-20 points) Other Faction Change Details: You may not switch factions again until at least 60 days have passed since your last faction switch. Defection point penalties are applied only once per award Players that earn all three spy awards, may once again switch factions at any time as they could during the training levels 1 through 99. The decision to switch factions is one that must be made with strong determination. Nanobots cannot be reanimated once destroyed. You will retain your earned experience, level, formations, qredits, cubes, and upgrades. However, as far as your zones go, you will be starting over.\n...has got to be one of the funniest moments I've seen in qonqr yet lol.\nThe **** change operation was a success!\nAs a relatively new player for faceless in a region dominated by swarm i can understand the OP. However, judging from the numbers i see here on a regular basis i think you are asking a bit much. My idea would be the opposite approach. Why not add a weapon with extremely short range, let's say like 5km that acts like a bomb and make it much stronk? That would add some serious home advantage. Or alternativley make attack formations lose power over range (exclude nanos and plasma). Maybe something like that would allow newcomers to at least get a foothold in their homezones. It's just an idea, maybe i overlooked something?\nThis is by design. Some day it is possible (I said someday) we could offer skins for your scope. So we will need a uniform color scheme. You can tell the formation families based on the shape of the box. Trapezoid is attack, diamond is defense, and octagon is support. It will take some time to get comfortable with the change.\nGeophysical based game. Anyways, probably not a bad idea but, considering the issues with the three platforms and the development of blue for those platforms, I doubt the resources are available for development on a new platform. Seen the blog? Its Qonqr meets wheel of fortune!\nI am happy to announce that today I both completed the training and captured my first zone.\n^THIS so now that qonqr has been thoroughly funded, can we have blue now? 
Or is that not happening still lol.\nAtleast I am not legion and there for we can have this intelligent discussion rather than just compete over who has the best words XD ohhhhh someone bring the bill to legion cuz someone just served them extra double order of stir fried SNAPPPPPP If the devs wont make zone dueling for us I hope out there somewhere are those who would empty a zone and challenge one on one to a local battle. I'd like to see the transcript of deployments made / moves made as well that would be neat I think such events would be cool. I suppose if people give up on atlantis as it works now they can schedule their own tournements in empty atlantis zones.. have a team clear the zone.. put 1 vs 1 or teams vs teams.. like fisticuffs challenges.. find out what these warriors are really made of!\n@Qonqrd everyone you know must face palm every time you make a post. Its embarrassing. Mega cubers or whatever you want to call them are not great for the opposing team surrounding them but are great for the game itself (money) and for the team they are part of. Multiscopers are not not great for the opposing team surrounding them and bring nothing to the game but are great for the team they are part of. Both have a negative impact on enemy teams/players but only one benefits the game itself. Both can make people want to quit out of frustration. And that's not great for the game. @OP unlimited refresh is over powered. Its frustrating to fight against a ridiculous amount of refreshes. Unfortunately i dont see anything changing unless this game gets a lot more people playing. More people might mean more money for the company from various sources. More money from various players might mean they can limit the players who spend a ton and still generate a healthy income. The main issue i see with limiting refreshes is someone multiscoping and spending money. He now has two, three, four accounts to refresh with and gets the advantage. Its tricky.\ney dun new ho to yet it uff.\nYet you complain almost everyday here, on your website, Twitter, and YouTube channel that the game needs to change because cubing has such an impact.\nIt was fun. Swarm had me scared at first, but it turned into kind of a bullying match between us and legion. Last hour became p obvious which way it was gonna go. Legion rly stepped up their game in the end there, respect.\nWe are investigating this. Here is what we know: Several of the accounts used the same password. Most of the accounts belonged to people who knew each other personally. The accounts were all switched from the same IP Addresses. The person who logged in, got into each account on the first attempt, so they knew the password for each account. What you should know: QONQR never stores passwords, not even in the logs. Passwords are hashed (one way encrypted) and can never be decrypted When you authenticate to our servers, we hash the password you gave us and compare it to the encrypted password in the database to see if they match. Access to our database in the could is restricted tightly and we are confident no one breached the system. What you should do: Don't use the same password as other people you play with. Don't share your password with anyone.\nI heard all the French players fled to the UK after one German player accidentally shot a single missile into France.\nMost factions now use GroupMe or Line as their means of communication, the forums are too slow as a means of communication and insecure for specific faction conversations. 
Think of the forums are more of a gaming information resource rather than a means of communication. Contact the top players of your faction in the leader boards of your state and they will likely point you in the right direction to chatting with your local faction. The developers are also building some sort of new chat system into the game, we don't know much about it but apparently beta testing for the chat will be happening very soon (next couple of weeks) according to their timelines.\nA way to honor the dead? Nah, how bout a way to dishonor the dead.\nCould just build it up and retain their capture. Remember Bizzy, staq to the heavens.\nI just read this entire thread. I am now tuckered.\nDoes anyone know if Bot Booster has an effect on Seekers and how much dmg they do to attacking players? Also on the topic of seekers, does the amount of skeers in a battlefield have any effect on how much damage they do?\nInteractive map of real-time zone captures.\nYou know that is something I didn't factor in there. Time. The player who can consistently and constantly launch wins against the guy who casually picks up the phone on occasion or has to work away from a cell phone for 8 hours. Good point. And yes I can't argue skill doesn't factor in, it just seems like less of a factor than other games is all.\nI cant see why a closed forum, open only to registered Qonqr accounts, cant be used. **** spammers!\nGotta love synclock! I suspect linjin has a problem. Maybe the dvs should look into it.\nNo, Naamah...I clearly understood what you were trying to convey. I'll even go as far as to agree that what your facing now is, while fully allowed and deemed completely acceptable by the developers, unbalanced and wrong. However..the imbalance isn't in the game and isn't something that, from a business standpoint, is likely to be regulated. The game is fair..the advantages are provided to all players. It's the players themselves that throw the balance into chaos because, as you've said, not everyone can afford spending several thousand bucks a year in only one game. Truthfully, in my opinion, the moment you admitted to buying cubes yourself your complaint became silly..because I'm sure there's a player out there, who has bought NO cubes, who can make this same complaint about you that you are making about others here. I know these things because I've read them so...many...times in this forum. There was likely a time, ages ago when I was a young Massune, that I even posted a few myself. That was the purpose of my post..I was poking fun at addition of yet another cuber/bully/trash talk complaint on the forum. It wasn't directed at your personal plight so much as the idea that someone, yet again, finds it necessary to lobby for a spending cap on the only real way for this game to make money. As to your specific problem..you, like all those that have raised this topic before you, have few options to rectify the issue. Here are a few that seem to have worked for others..fight harder, recruit better, spend less time complaining and more time organizing, budget for more cubes or quit. I'd rather not see you opt for the latter..but to each their own.\nI agree, we should find a way to honor the dead, but I don't think keeping their towers infinitely is necessarily the solution. The game must go on. 
I'm pretty sure the point of bot decay was to clear the game of inactive player bots so that new players can have a chance to rise up, not to dishonor the bots of dead players.\nThe following are frequently asked questions about the new server update (so far) It still says Training Complete on my iPhone. -\tDownload the update from iTunes It still says Training Complete on my Android. -\tSorry, Android will not be updated again until the QONQR Blue beta is released My XP per launch keeps going down -\tThis is XP throttling and is intended to limit the ability for people to leve1 from 1 to 100 in a single day through heavy cubing. The XP throttle was introduced with the original version of QONQR in 2012,and the throttle formula is the same for levels above 100. The throttle resets at midnight UTC every day. How do I buy the Bot Regeneration Accelerator? -\tCurrently the Bot Regen Accelerator can only be purchased through http://portal.qonqr.com. Go to the Depot and review your scope upgrades. The new QONQR Blue clients will allow for this purchase to be made in the app using your mobile billing. I don’t have a PayPal account -\tFor users interested in purchasing the Regen upgrade, but who do not have a PayPal account, PayPal does give you the option to checkout using your billing information without creating an account. PayPal is not allowed in my country, or I don’t have a credit or debit card -\tPlease contact support@QONQR.com for alternate options Is Bot Regen Accelerator counted as part of the 100% scope upgrades? -\tYes, but there is a bug that does not increase scope upgrade percent when you purchase this upgrade, that will be fixed in the coming days. For all other questions, please read the 7 blog posts prior to 7/29/2015 for information on what was included in the update today.\nBye Fack, its been a pleasure being allied and against you.\nThe two big issues that are both killing the game slowly and keeping it from growing exponentially are cube injustice and new player ramp. The game obviously also needs to provide a consistent and growing revenue stream as well. I think Silver needs to rethink how revenue is generated if he is going to address cube injustice and new player ramp. For revenue generation I would suggest a model that doesn't give a significant combat advantages. Download and play for free from level 0-99 Pay small monthly fee to get full functionality or play for free at 50% of offensive/defensive funtionality Still buy cubes, but cubes are used for following: - Credit Boost: harvest more credits for a period of time - Range Extension: ability to use standard attack/defense formations at extended ranges - Base Share: get 100% credit attainment even in bases owned by another faction - Purchase additional ordinance - Zone Name Change - ability to customize zone names. \"Breggland\" - Faction Change with Bots - pay for the ability to keep up to 50% of your bots with faction change - Experience Boost: %increase in experienced gained while leveling - Other: anything that helps grow a player or provides enjoyment, but doesn't tip the battle capability of a scope. 
New Player Ramp/Integration into Game - Offer paid immediate ramp package: one price to become 100 with full upgrades - Like the changes in Blue - Create new zones in Metro areas that only 0-99 level can launch into, with statewide ranges Understand that catering to those who have money and like to use it for an advantage is a good business model and for those people it might be ok to offer very expensive options: - Shield generators: temporary energy shield that adds X% increase to defense or stops X% of damage - EMP's: turns Absorbs off for X minutes. Does not destroy, just turns off - Chain Lighting: Does damage across multiple players in a zone From a development standpoint I have no idea what is possible, easy or hard, but the general idea is to make the playing field more fair for the standard player while maintaining and growing a business revenue stream.\nYou need to come to the Northeast US. We handle our business like no other.", "answers": ["Keep deploying and harvesting your bases to earn experience points and level up quickly."], "length": 6594, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "3c8e9fef2eae8f49aae38b7314a3425b99c3ca8b051e8293"} {"input": "What is the purpose of the baseline in the layout procedure?", "context": "Probably one of the most frustrating things about building experimental aircraft, especially when starting with a minimum of pre-fabricated parts, is to start building and ending up with an unexpected result. Every builder starts a new project by wanting it to go \"perfectly.\" So when things aren't going well, especially at the beginning, the frustration can lead to an unfinished airplane.\nThis is the first article in a series dedicated to helping builders of the Rand Robinson KR series planes build a straight and true fuselage -- the first part of the construction process. Borrowing from modern boatbuliding techniques, focus will be on the KR-2S, but the principles apply to the entire lineup of KR-1 & KR-2 series planes.\nWhile building the KR-2(s) a common surprise is encountered by builders when the completed fuselage sides are laid into position to form the fuselage box section. With many hours spent building the sides flat, finding the once straight longerons that now bow up from the building surface, form a most dissatisfying \"banana\" shape. Especially when using the preformed fiberglass parts, this curve in the top longeron is not acceptable. The builder is left wondering what went wrong and no amount of clamping or brute force forming will solve the problem to any degree of satisfaction. The problem is not the builder's fault. The solution starts by understanding the three dimensional relationship of the assembled parts being built.\nFirst understand that the plans show the finished form of the plane. They show the \"projected\" form as you would expect to see it if viewing an actual plane from the top, ends and from the side. Since the sides are sloped (flared) outward, looking from the side, the distances given by measuring the profile drawing are \"foreshortened\" and don't give the proper shape for building the fuselage with a flat top longeron. What needs to be done is to \"develop\" the \"true\" distances and shape of the flat panel so that when it is curved into position, the longerons lay flat.\nSecond, understand that the dimensions called for in the plans put a twist in the sides that tends to work the panel in two directions of curvature. 
This twist makes the panel "undevelopable", meaning that the shape cannot be unrolled into an equivalent flat shape. This is important when laying out the side and bottom panels onto flat plywood. To illustrate this, try forming a piece of paper around a soda can. The paper can be formed flat around the can either straight or at a diagonal to its length. It has only one direction of curvature and is by definition "developable". Now try to form the same piece of paper around a baseball. It won't lie flat on the surface without some deformation (folding, wrinkling or tearing) of the paper. The ball has curvature in more than one direction and is a "compounded" shape. Paper (or plywood) can only be readily formed into developable shapes, as opposed to aluminum or other metals which can accept in-plane deformation. A developable surface is needed to lay out a curved surface when the materials used can't be deformed with any degree of in-plane strain.\nInitially, the fuselage sides are laid out flat with reference to the top longeron measured to a straight chalk line. The bowing problem starts when the side panels are bent and sloped to form the fuselage box section. If the sides were not sloped (tumbled home), the section formed would be cylindrical and the longerons would lie flat. Since the sides are tumbled home, the section formed is now conical. When a conical shape is cut with a plane (the building surface) not perpendicular to its axis, the shape formed is elliptical -- exactly what happens with the top longeron. When it's built flat, bent to form a cylindrical section, and sloped to form a conical section, it takes on an elliptical shape from firewall to tailstock.\nThis method borrows heavily from proven techniques used in the marine trades. It should be stressed at this point that although the layout procedure is not complicated, it is important to take your time. If the layout is not going well initially, start over! Better to erase layout errors now than to have them built in and cause surprises later.\nLayout to ensure a fair and true fuselage starts by drawing a reference line (baseline) on the building surface. Refer to figures 2 & 3 and use a wire guide to draw a very straight baseline. About 500 lbs. of tension should be adequate. One could use a chalk line, but we're talking airplanes here, not house framing.\nThe main layout difference is that the baseline isn't used as a reference for the top longeron. The baseline references the midpoint of the firewall for the developed (and true-dimensioned) side panel. Although the baseline will still be the reference, the top and bottom longerons will be laid out separately.\nLayout differences don't end there. Each of the stations (vertical members) will be laid out with a calculated separation so that when the panels are formed into position, they land on the spacing called for in the plans. Another major difference is that the bottom & side panels are applied after forming the fuselage box section. This is mainly to obtain the ability to "fair" the side and bottom surfaces and ensure a straight and true shape.\nRefer to figure 1 for the layout of the new developed side panel. The firewall (station a) is laid out perpendicular to the baseline. Longitudinal (station) measurements are given along the length of the baseline from the firewall.
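As a side note to the geometry discussed above, the toy calculation below shows how a distance taken from the profile (side-view) drawing understates the true distance along a panel that is tumbled home; the 15-degree flare and 20-inch dimension are made-up illustration values, and the full development in this article also accounts for the curvature described above.

```python
# Toy illustration with assumed numbers: a dimension read off the profile
# drawing is only the projection of the true dimension along a flared panel.
import math

def developed_length(profile_length, flare_deg):
    """True length along a panel tilted flare_deg from vertical."""
    return profile_length / math.cos(math.radians(flare_deg))

print(round(developed_length(20.0, 15.0), 2))  # ~20.71 (vs. 20.0 in the profile view)
```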
Vertical dimensions are given to reference the angle and breadths of the station at the baseline.\nNotice that the top longeron is bowed outward and that the stations are spaced slightly greater than called out in the plans. When the panels are formed into the box frame section ,they will work into the dimensions specified in the plans.\nStrike a centerline, longer than is needed on the building surface using a wire guide. Draw off the firewall line perpendicular to the centerline at one end.\nUsing the distances listed in the balloons, mark them off on the centerline. Distances are measured to the nearest sixteenth of an inch. Take time to mark them off carefully. Don't mark off the distances in a cumulative fashion. Use the firewall as a common reference.\nUsing the angles listed at each station, mark off a station line longer than is needed. The angles are measured to the nearest hundredth of a degree. Take time to mark them off carefully.\nAt each station, start by marking off each short (bottom longeron) line distance from the centerline. Use your set of trammels or beam compass for doing this. Mark the intersection of the short line with the station line.\nAt each station, mark off each long (top longeron) line distance from the intersection of the short line distance and the station line. Again the trammels or beam compass is best for completing this step. Mark the intersection of the long line distance with the station line.\nUsing the longeron as a batten, trace out the inside and outside curves of the longeron. After the batten is secure, in between each station, fasten a keeper block inside and outside to preserve the shape of the longeron taking care to avoid potential future interference with the diagonal members to be installed later. The fairing blocks can be removed or left in place if they won't interfere with building. The vertical station members and their diagonals can now be measured and positioned. Remember to refer to the plans for the material thickness direction.\nAfter vertical and diagonal members are cut and fitted, take time to draw their outlines on the building surface to cut down on time and confusion when laying out the opposite side.\nFinishing the side panel is accomplished in a manner similar to that called for in the handbook with the exception that the side and bottom skin panels will be attached later.\nThe next article in the series will discuss jigging and building techniques to ensure alignment and straightness of the flat built side panels. Also covered will be building a \"strongback\" jig to assure alignment of the side panels when they are formed into their final shape.\nPart 3 in the series will cover assembly of the side panels using the jigs. Some joint details will be discussed that will ensure a stronger and more fair fuselage assembly. Also covered will be the layout & attachment of the side and bottom ply skins.\nU.S. Mail: Densmore Associates, inc.\nANSI \"D\" size, computer generated plots of all the layout drawings in this series are available from the author for $30 plus postage & handling. Full (true size) scale plots may be made available depending on demand.\n\"Scarfing\" is the practice of splicing plywood so that short pieces of plywood can be used to span long distances. On the KR, it is required on both the fuselage skins and spar webs. The angle of the splice should be 10 to 12 degrees to maintain strength across the joint. 
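Before the scarfing jig details that follow, here is a tiny helper for the bevel arithmetic; the 1:12 proportion comes from the table-saw setting mentioned below, while the 1/8 in. plywood thickness is simply an example value, so treat the numbers as illustrative rather than as the article's requirement.

```python
# Rough helper (assumed framing, not a statement of the article's requirement):
# a scarf specified as a 1:12 proportion needs a bevel face 12 times the
# plywood thickness.
def scarf_bevel_length(thickness_in, proportion=12.0):
    """e.g. 1/8 in. plywood at 1:12 -> a 1.5 in. bevel face."""
    return thickness_in * proportion

print(scarf_bevel_length(1/8))  # 1.5
```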
Also, joints should coincide with structural members, such as spar webs or fuselage truss members.\nThis scarfer is made by mating a regular plunge router (this one costs about $50) to a table saw. Obviously, you really only need a table saw to cut the chamfer, but it does make a nice heavy table for scarfing. You could just as easily use a large work table as the base.First, set the table saw for a 5.5 degree cut (for a 1:12 joint, or 6.5 degree cut for a 10:1 joint), and run a 1 x 6 through on edge to chamfer a corner on the board. Then drill the board for three router mounting holes (two are countersunk) and connect the assembly to the table saw with two 1/4 inch bolts. Use a long (2-3 inch) straight cutting bit to do the cutting. Adjust the bit so it doesn't interfere with your table top, and go to town. Keep pressure on the plywood to ensure contact with the table while you're scarfing. Make sure you feed your material from the same end as you would if you were sawing, or the router will take your plywood away from you and put a big dent in your garage door.\nIn the late 60's Ken Rand and Stuart Robinson were working as flight system engineers for Douglas Avionics. Ken was working as an electrical engineer, having previously worked for Sperry as an autopilots project engineer, while Stu's degree was in aeronautical engineering from Northrop University. They were two of the guys at the end of the DC-8,9, and 10 assembly lines responsible for correcting some of the nits and picks in various systems before delivery to the customer.\nThey both wanted to build a fast, inexpensive airplane which was also economical to maintain. Several designs were considered, and plans were bought first for the Jeanie's Teenie and then the Taylor Monoplane. The Monoplane was more to their liking, but would require some modification to fit their needs. A cooperative redesign effort ensued, with virtually no dimensions left untouched. Only the basic fuselage structure, airfoil, and powerplant were retained. The tail shape was Stu's, and came directly from the big DC-8s parked on the ramp outside his office window. The landing gear was designed by Ken, after seeing the gear on a Dewey Bird at Santa Paula airport.\nKen was killed in his KR2 a short time later while flying over Cajon Pass in what was apparently a bad weather / low fuel accident. Ken's wife Jeanette became owner of RR overnight, and stepped up to keep the plans and parts coming. Much of the engineering needs are handled by Bill Marcy of Denver, who's been helping out since early '79.\nTo date, almost 6000 KR1, 9200 KR2, and 760 KR2S plan sets have been sold. 1200 KR2s are estimated to be flying, with 5 KR2Ss now in the air. Much of the development work done on KR's is now done by the builders themselves. KR builders tend to be innovative, which leads to some interesting modifications. Some of the mods that work eventually creep into the plans. The KR2S is a case in point. Many builders who'd heard of the pitch sensitivity and tight cabin of the KR2 began to build an enlarged version, with the length determined by the most commonly available longeron material. The result is a KR2 that is stretched 2\" between firewall and main spar, and 14\" behind the main spar. Higher gross weights dictated more wing area, with the new standard becoming the Diehl wing skin. 
Those who plan to carry passengers commonly stretch the cabin width a few inches, although 1.5 inches is the limit if you still want to use RR's premolded parts.\nMike Stearns addresses the KR Forum crowd.\nThis year's KR Forum featured guest speakers Mike Stearns, Steve Trentman, and Bill Marcey. Mike Stearns spoke on several topics, including the many sources for KR and homebuilding information available on the Internet. He also mentioned KRNet, the list server devoted entirely to KR aircraft, as well as several notable World Wide Web home pages. He also brought a sample of the new Rand Robinson wing skins with him, and discussed their high temperature core prepreg construction. His KR2S will receive the first set, which is currently being installed at Hinson Composites.\nSteve Trentman spoke on his turbine installation. It uses a turbine engine which saw duty as an A7 attack jet starter engine. Total weight is about 85 pounds, while putting out around 90 horsepower. There is a small stockpile of these engines available from government surplus. sources. This engine can only be throttled back to 52% power, which leads to some pretty interesting landings. One inflight failure has been logged so far, with very little damage to the aircraft. More on this exciting development in next month's issue of KROnline.\nLes Palmer's KR2 N202LP won Best KR2, Best Engine Installation, and People's Choice awards at the 1995 KR Gathering at Columbia, TN. After researching the KR series, and reading Neil Bingham's \"A Critical Analysis of the KR2\" (Jan 88 Sport Aviation), Les decided to build his as a single seater, stretched 24\" in the tail, while maintaining a stock width firewall. His fuselage is made from Douglas fir, which weighs in at 4 lbs heavier than if constructed from spruce. It is skinned with 1/8\" birch plywood. Spars are covered with plywoood on both fore and aft sides, ala KR2S. Diehl wing skins provide the lift. Horizontal stabilizer and elevator were stretched 7\" longer on each side, while the vertical stabilizer and rudder were stretched 8\" taller. . The fuselage to cowling junction was made more graceful by adding 1.5 inches to the height of the firewall end of the fuselage sides.\nLes's canopy is a Dragonfly, using a four linkage system to swing forward when opening. The canopy frame fits snugly into a recess in the foward deck, providing an excellent wind and water seal. The fiberglass work is exemplary.\nSeating is luxurious for one.\nThe cowling is also a work of art, and uses NACA ducts for efficiency. Female molds were made for all the fiberglass parts on Les's plane, so he could proabably be persuaded to make more, if demand dictates. Les also machines a multitude of KR aluminum and steel parts which he now offers for sale.\nThe firewall was reinforced with aluminum brackets and angles bolted between the longerons in anticipation of the 200 lb Subaru EA-81 engine installation. His 100 HP Asian version is outfitted with an American Holley 5200 caburetor and manifold. It uses a PSRU of Les's own design, featuring two spur gears with a 1.69:1 reduction ratio and a toothed belt. Other than tapping the crank for larger bolts to mount the redrive, no other engine modifications were required. Also, this is probably the only air conditioned KR2 on the planet. The prop is a 60/63 Hegy.\nOriginally built as a taildragger, the fixed gear is made from 4130 steel tubing. Custom cast 6.00x6 aluminum wheels and steel rotors are mated with 6\" Cleveland calipers for braking. 
An early taxi test accident damaged the main gear, and prompted Les to change to tricycle gear. Again, he designed his own fiberglass main gear, and uses a Diehl nose wheel fork with a 4130 strut and 6\" wheel up front.\nEarly tests revealed cooling problems, which prompted a radiator move from the firewall to a lower cowling location.\nThe first flight was almost a disaster, as test pilot Randy Smith lost power right after takeoff. He managed a 180 with a safe downwind landing with only minor nosewheel pant damage. The culprit proved to be a spark plug with too much reach, which was quickly remedied. Subsequent flights have shown water temp to be about 210 degrees, oil temp is 220-230, and airspeed is about 180 mph.\nShopping for the Partially Built KR.\nThis story starts about twenty years ago when I first started looking at the KR-2 as the plane I'd like to build. The only problem at that time was a lack of money, lack of knowledge, and a lack of job stability. I liked the design, except for the low ground clearance of the retractable gear and that a KR was going to be a tight fit for me to fly.\nOver the past twenty years I've owned a number of planes, but still always wanted to build my own. I needed one that would fit me, my budget requirements, and have the speed and performance that I wanted. When \"KITPLANES\" published the article featuring Roy Marsh's new KR-2S, it was the first I had heard of any major modifications or improvements to the same old KR design. I believe that article and Roy Marsh's workmanship have probably been the greatest boon to Rand Robinson (RR) in the last twenty years. It certainly caught my eye! Here was the same design I had decided I wanted to build twenty years ago, with all of the improvements I wanted. It was sitting on fixed gear with some reasonable ground clearance. It had the capability to be built large enough to accommodate me. It has enough prefab parts available that it didn't have to be 100% scratch built if I decided to hurry the project along. And it had the speed I wanted. I knew that Roy's published speeds were probably not realistic expectations for the average KR, but after knocking around for the last three years in my Champ, anything over 90 mph seems pretty fast to me.\nAfter purchasing the info kit and the sales video from Rand Robinson, the next step after deciding for sure to build this plane was to order the KR-2 plans and the KR-2S addendum. I finally got my plans and was putting together my first order to start the plane, when my partner in the Champ pointed out that there was a partially completed KR-2S for sale in Trade-a-plane. My initial answer was \"No, I don't even want to look at it. I want to build my own from scratch.\" My partner insisted that for the advertised price and the fact that it wasn't too far away, I ought to at least give the guy a call and investigate it. \"No, I don't think I want to buy someone else's problems,\" I persisted. That night I went home and crunched up some numbers on the calculator and finally came to the conclusion that for the sake of my budget for the next several years, I really should give this guy a call.\nThree days later, I flew to his place about 400 miles away to take a look at his project. At this point I should probably mention that I consider myself to be fairly knowledgeable about airplane construction, although the vast majority of my experience is with tube and fabric. 
The rest of this article deals with what I looked for and more importantly what I missed and have had to repair in the last year since I purchased the project.\nWhen we went to the seller's house, I found that the left wing was built using the Dan Diehl wing skins and the right wing skins were leaning against the wall inside the house. Also the canopy was in the house with the canopy covered with paper and tape. I wanted to inspect the fuselage first, so off we went to the shop.\nThere I found a fuselage sitting on its gear painted in primer gray. The first step was to inspect the quality of workmanship of what could be seen as it sat. The interior of the fuselage looked as if it had been built with a great deal of care. The fit and finish of all of the interior wood was very nice. Even the gussets looked like they had been painstakingly perfectly fitted. The glass work on the turtle back also looked very precise and clean. It was evenly faired into the vertical and horizontal stabs. The tail also appeared to be well built with the exception of a depression directly over the front and rear spars in the horizontal stabs. He explained that when he moved recently, he had shot the plane with gray primer to protect it from the weather since he wouldn't have ready access to a shop to put it in right away. It ended up sitting out in the hot south Texas summer sun for a few weeks before he got a shop rented to work in. That caused the glass (or possibly the foam inside the horizontal stab) to swell, except that it held onto the spar, so it was slightly ballooned in front of and behind the spars. His recommendation was to fill it back smooth with micro.\nI also found a small linear crack in the lower left wing spar cap on the left wing stub. It appeared to be from overtightening the rear spar wing attach fitting bolts. His explanation was that the crack wasn't important because the rear spar's only job is to keep the wings from folding back. I also noticed that the holes for attaching the outer wing to the wing stub were badly rounded out on the rear spar. He explained that the Diehl wing skins require the rear spar to be swept slightly more forward than the stock wings. This won't allow you to use the rear spar attach fittings from RR, and I would need to fabricate a new set of rear spar attach fittings.\nI also found that the aileron bellcranks were not built or installed as per plans, but found that they looked professional. I couldn't check for function since the right bellcrank and sheave weren't installed, the left wing also wasn't installed, and the right wing didn't exist yet.\nNext we pulled the inspection panels off of the fuselage and tail and looked at everything I could see with a good flashlight. I didn't find anything else that might be questionable about the fuselage except for a cracked elevator trim tab that was damaged when it fell off its hanging place on the wall.\nNext we spent some time going over his builder's log and builder's photo album. I still hadn't seen anything that would dissuade me from buying this project.\nAt this point it was starting to get late and my ride down needed to get airborne for the flight home. I needed to make a decision about whether I wanted this project or not, but I hadn't inspected the wings and canopy yet. I took a cursory look at the left wing and saw lots of micro built up on it and some bubbles in the leading edge, but nothing that looked seriously wrong to my amateur eye.
The right wing was only a set of spars in the shop and the Diehl wing skins in the house, so there wasn't much to look at there. The canopy was wrapped in paper and tape, so there wasn't much to look at there either. I decided that even if there were serious problems in the wing that was built, I would be money ahead to go ahead and buy the project. For the advertised price, I could build a new set of wings and still be way ahead financially. We negotiated a final price, shook hands, took my ride to the airport, and started off in search of a U-haul to haul the project home.\nNow, at this point, some of you are thinking about what I surely must have forgotten to inspect and why didn't I take a local A & P or EAA member along for the ride. First of all, I don't know any mechanics locally that have any experience with glass and our EAA chapter of which I am VP is woefully lacking in fiberglass knowledge. Secondly, as you will see, I missed plenty. Some by ignorance, some by just not looking close enough.\nNow for a list of the problems that I found over the last year and a few of the fixes that I came up with.\nI found that the lower set of rear spar attach fittings on the left rear spar were installed backwards with the longer spaced hole towards the fuselage. Since this is the same place that also had the cracked spar cap, it required a major change. Also in the same area he had drilled through the rear spar with a hole saw to create a place for the aileron cable to pass through and managed to cut out the second from the outside vertical brace in the spar. Then he chose to install the aileron bellcranks in front of the rear spar, and cut another hole through the rear spar for the aileron push rod. He also managed to cut out the outside vertical brace in the spar. Since the holes were already drilled through the spar, the choices were to either cut out that section of spar cap and scarf a new piece in, cut the whole rear spar carrythrough out of the fuselage including ruining the left lower wing skin, or do something else creative to reinforce the spar cap and install a custom built set of attach fittings.\nI also found that after I built and installed the right side wing stub ribs and skin that the aileron bellcrank setup would not work as installed. The cable that crosses between the two bellcranks had a sharp uphill from the sheeve to the bellcrank in the last 12 inches on either side. This combined with the radius that the bellcranks turn caused the cross cable to pull up tight when the ailerons were pushed to either end of their travel, but allowed the cables to go very slack when the ailerons were centered. Also the Aileron pushrods needed to pass directly through the lower set of rear wing attach fittings to attach to the aileron. This whole rear spar and aileron bellcrank setup was going to either have to be redesigned or cut out and built to plans. The bottom line is that the problems I observed when I inspected this part were much more serious than expected when I had to fix it.\nI decided that I had to remove the rear fittings from the left wing to be replaced with the new set that my neighborhood machinist was cutting out for me. When I put the wing on the work bench to start removing the rear fittings, I thought I had better take a closer look at the bubbles in the leading edge. I found that as I pushed on the leading edge, it delaminated between the glass lay-up on top and the upper and lower wing skin edges that were floxed together underneath. 
I concluded that that area had to come apart and took a belt sander to the leading edge. What I found was that the leading edge had been floxed together and glassed over, but the mold release had never been scrubbed off the leading edge of the wing. It peeled apart for rebuild quite easily.\nWhen I got back to removing the rear spar attach fittings, I noticed that the woodwork inside the wing looked awfully dull. The reason was that the wing had been closed up without varnishing any of the woodwork. This was rectified with a small hole saw, a number of extensions and a modified undercoating sprayer.\nI also found that the aluminum drain fitting in the bottom of the left wing tank had been glassed into place upside down. The tapered pipe threads were tapered the wrong way to install the draincock into the tank. Retapping the fitting in the right direction seemed to be a good fix for that problem.\nWhen I finally got around to attaching the wing to the fuselage, I found that the front spar attach fittings were badly misaligned. Although they could be forced into alignment, I didn't think I needed that kind of preload on the main spar fittings. This problem was fixed by calling on my local neighborhood machinist to build me an aligning fixture, reaming the attach holes to the next larger size, and ordering the new sized bolts.\nOn the fuselage I found that although it had new Cleveland wheels and brakes on it, one of the brakes had a severe wobble to it. I must compliment the manufacturers for taking care of that problem. One call to the Cleveland factory and they shipped me a new set of wheels and brakes even though the receipt for this set was over four years old and in the original builder's name. Their only concern was that this set had never been placed in service yet.\nI chose to sand the load of micro off the left wing to see what it was covering. When I got down to the glass, I found that there was no glass for the aft inch and a half of the underside of the wing in front of the aileron hinge. With the Diehl wing skins, you build the wings, then cut the ailerons out of the trailing edge of the wing. He had mismeasured and cut too much material off the bottom side of the trailing edge in front of the aileron. It was filled by floxing a piece of spruce into the gap to fill the space between the back edge of the fiberglass and the aileron mount. I chose to wrap the trailing edge of that wing, and the other wing to match, with a couple of lay-ups of glass.\nWhen I sanded the primer off the aforementioned damaged trim tab, I found that the hinge was floxed to the leading edge of the foam insides of the tab, but not the glass. I also chose to wrap the front of the trim tab with a lay-up of glass.\nI decided to pull the paper off the canopy and take a look at it before I'm ready to bolt it on and fly. The original builder had blown his own canopy and after some of the previous problems, I was beginning to have some concerns about not having looked it over closely enough. The canopy turned out to have been blown a little too large. It ended up with a little larger bubble for headroom, which I didn't object to. However, it had more headroom on the right side than the left. Yes, it was just a little bit lopsided. The main problem was that the canopy is stretched thin enough that it can be easily pushed in with one hand when the weather is warm. My fear was that this is just thin enough that it may decide to lie on my head or in my lap when flying on a warm day.
It will have to be replaced.\nI'm sure that many that are reading this could see several of the potential problems before I mentioned them, but some others may not have, and I'm sure that there could have been many other problems that didn't exist on this project but could have. This is also not intended to be critical of the gentleman who started this project, as many parts of it, especially the woodwork, are better than I could have done, and much of his work is outstanding. I prefer to think that I'll end up with a better plane with his woodwork combined with my glasswork. This article is intended to feature some of the problems that you may run into in buying someone else's project.\nThe final question is, knowing what I have found over the past year, would I have still purchased this project? The answer is yes, but primarily because the price was right in that I am still money and work ahead of where I would be if I had started the project from scratch. There are a few things that I would have done differently, but nothing that I can't live with. Although I won't be able to say that I built it all from scratch, I have built and rebuilt enough of the plane that I should have no problem qualifying under the 51% rule.\nYou can send comments directly to the author via e-mail at \"jscott@LANL.GOV\".\nHere is a brief explanation of how I built my turtledecks. The jig was constructed from scrap plywood and a few 1x4s that I ripped into stringers. I made two temporary bulkheads from the plywood, one for each end. Remember the forward bulkhead needs to be shaped in a way that will closely match the aft end of your canopy frame. Make an aft bulkhead by placing a straight edge at the top of your forward bulkhead and the trailing edge of your horizontal stabilizer. This will give you an idea of how tall your aft bulkhead needs to be. As far as location, I placed my aft bulkhead just forward of the lower/front of my vertical fin. I constructed the jig on the fuselage; it is glued together with automotive bondo.\nAfter the bulkheads were bondoed to the fuselage I used the stringers that I ripped from the 1x4s and bondoed them to the bulkheads. This gave me a male form to cover with thin plastic or posterboard. I stapled two layers of posterboard to the jig (thin plastic would work better). The posterboard wraps down two inches onto the fuselage. After I was satisfied with the way it looked, I then covered the entire thing with duct tape (fiberglass will not stick to duct tape). On top of this I wet out one layer of tri-ply cloth (22oz) that I had left over from an earlier project, and one layer of 8oz. bid. Remember to mask off your fuselage so you don't get epoxy on it. If you are not familiar with composite lay-ups, you should plan on razor cutting your lay-ups 4 to 6 hours after wetout while the lay-up is still soft enough to cut with a razor blade.\nAfter the lay-up cured (2 or 3 days) it was removed from the jig, and the jig was removed from the fuselage and discarded. (Be careful, the bondo sticks very well to the spruce; you could splinter your wood during removal.) I now have a fiberglass skin that tends to hold the shape of the jig but is still flexible enough to work with. I made two bulkheads out of 1/4 inch last-a-foam (AS&S) using the plywood formers from the jig as a guide. I covered these foam bulkheads with one 8oz layer of glass on each side, with a glass to glass edge on the bottom.
After cure these bulkheads were bondoed into place (to the fuselage)and the fiberglass skin was pulled down tight and floxed to the bulkheads. When the flox cured the bondo joints were broken, again being careful not to harm the wood. The turtledeck was removed from the fuselage and 2 inch tapes added to the bulkheads inside and out.\nAt this point the turtledeck looked great and only weighed about 5lbs. but I noticed you could deform the skin by pushing hard on the outside. So I flipped the turtledeck over and from 1/4 inch last-a-foam, I cut two inch wide strips that would run the entire length, forward and aft inside the turtledeck. In effect these would act as composite stringers, I made enough of these two inch wide strips to make up three stringers. One down the center (sort of a backbone) and one on each side of the \"backbone\" half the distance to the edge of the turtledeck. I sanded the edge of the foam so that when covered with a layer of bid @ 45degrees there would be a nice transition from the turtledeck skin up onto the foam and then back onto the turtledeck I scuff sanded and glued the foam stringers in with micro. I covered the foam stringers with one layer of 8oz bid @ 45degrees.\nYou can also send me email at: mikemims@pacbell.net if you have any questions or want to share your ideas.\nKROnline is an online KR Newsletter devoted to sharing KR information with other builders and pilots in a timely manner. The first issue (September 96) is now available as a zipped MicroSoft Word file at http://members.aol.com/bshadr or as an html document at kronline9.html. If you'd like to submit articles or photos, email Randy Stein at BSHADR@aol.com ------------------------------------------------------------ Don't bother to email Randy though. KROnline has been retired since the KR Newsletter has improved.", "answers": ["The baseline is used as a reference for the mid point of the firewall for the developed side panel."], "length": 6340, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "33bfe67a3d40e71a5e5351ea5db4ea55df61a018d071074b"} {"input": "What did the decision to base the water rates on usage reflect?", "context": "Time to clean house in Paso Robles Home\nFront Page » Time to clean house in Paso Robles\nSeptember 5, 2010 Opinion By JIM REED\nI’d like to give you an update on the issue of our civil servants cramming hundreds of millions of dollars in spending down our throats after the people of Paso Robles voted down the water rate increase last November. The rate increase is being hung up in the courts by the City Attorney. What was supposed to be a quick issue to get in front of a judge, has been drug out as long as possible by the City Attorney.\nEven if the courts throw out the current rate increase, I expect that our civil servants will just change a couple of words in the rate increase notice and force the same old plan on us again.\nThere is a real problem with the people we have hired to work for us in Paso Robles. It seems that decisions are made based on some agenda, even if it is contrary to citizens’ wishes.\nCity Councilmen Ed Steinbeck, Nick Gilman and Mayor Duane Picanco, on August 19th, voted unanimously to hire the same law firm employed by the City of Bell. You may have heard the recent news story about the City of Bell’s corrupt city representatives.\nThis law firm allowed the elected officials and City employees to pillage the General Fund for their own benefit, contrary to the rights and interests of the citizens. 
We are already paying several City employees $12,000 per month with equally ridiculous benefits and pensions. What does this say about our elected representatives?\nI believe most residents are like me. We elect people we believe have our best interest in mind. Over the last few years I have seen that nothing is farther from the truth. The people we have elected have lost track of the fact that \"the City\" exists to protect and deliver services to the citizens. To them it is some all-important ideal they strive to cultivate and improve according to their agenda. They have forgotten that they are elected to represent the citizens.\nWe have an election coming up in November. We have the opportunity to elect some responsible, principled people to represent us. If we elect more people from within this system, we will get more of the same type of government. We need to look at where the new candidates stand. Will they lawfully represent the citizens of the city? Or, are they happy with the way things are being run?\nWe have stood together in the past and have made real significant changes in important matters that are going to affect our lives for years to come. There are several thousand citizens that made their voice heard on the water issue, more than enough votes to make a change in our city government.\nPlease come out and vote for a democratic representative governing body for Paso Robles instead of the tyrannical leadership that exists now.\nJim Reed is a longtime resident of Paso Robles.\nwhatisup says:	09/13/2010 at 9:27 pm\npasoobserver – Here is something to observe and get you going in the right direction:\nCalifornia Government Code Section 65584\n(a) (1) For the fourth and subsequent revisions of the\nhousing element pursuant to Section 65588, the department shall\ndetermine the existing and projected need for housing for each region\npursuant to this article. For purposes of subdivision (a) of Section\n65583, the share of a city or county of the regional housing need\nshall include that share of the housing need of persons at all income\nlevels within the area significantly affected by the general plan of\nthe city or county.\n(2) While it is the intent of the Legislature that cities,\ncounties, and cities and counties should undertake all necessary\nactions to encourage, promote, and facilitate the development of\nhousing to accommodate the entire regional housing need, it is\nrecognized, however, that future housing production may not equal the\nregional housing need established for planning purposes.\n(b) The department, in consultation with each council of\ngovernments, shall determine each region's existing and projected\nhousing need pursuant to Section 65584.01 at least two years prior to\nthe scheduled revision required pursuant to Section 65588. The\nappropriate council of governments, or for cities and counties\nwithout a council of governments, the department, shall adopt a final\nregional housing need plan that allocates a share of the regional\nhousing need to each city, county, or city and county at least one\nyear prior to the scheduled revision for the region required by\nSection 65588.
The allocation plan prepared by a council of\ngovernments shall be prepared pursuant to Sections 65584.04 and\n65584.05 with the advice of the department.\n(c) Notwithstanding any other provision of law, the due dates for\nthe determinations of the department or for the council of\ngovernments, respectively, regarding the regional housing need may be\nextended by the department by not more than 60 days if the extension\nwill enable access to more recent critical population or housing\ndata from a pending or recent release of the United States Census\nBureau or the Department of Finance. If the due date for the\ndetermination of the department or the council of governments is\nextended for this reason, the department shall extend the\ncorresponding housing element revision deadline pursuant to Section\n65588 by not more than 60 days.\n(d) The regional housing needs allocation plan shall be consistent\nwith all of the following objectives:\n(1) Increasing the housing supply and the mix of housing types,\ntenure, and affordability in all cities and counties within the\nregion in an equitable manner, which shall result in each\njurisdiction receiving an allocation of units for low- and very low\n(2) Promoting infill development and socioeconomic equity, the\nprotection of environmental and agricultural resources, and the\nencouragement of efficient development patterns.\n(3) Promoting an improved intraregional relationship between jobs\n(4) Allocating a lower proportion of housing need to an income\ncategory when a jurisdiction already has a disproportionately high\nshare of households in that income category, as compared to the\ncountywide distribution of households in that category from the most\nrecent decennial United States census.\n(e) For purposes of this section, “household income levels” are as\ndetermined by the department as of the most recent decennial census\npursuant to the following code sections:\n(1) Very low incomes as defined by Section 50105 of the Health and\n(2) Lower incomes, as defined by Section 50079.5 of the Health and\n(3) Moderate incomes, as defined by Section 50093 of the Health\nand Safety Code.\n(4) Above moderate incomes are those exceeding the moderate-income\nlevel of Section 50093 of the Health and Safety Code.\n(f) Notwithstanding any other provision of law, determinations\nmade by the department, a council of governments, or a city or county\npursuant to this section or Section 65584.01, 65584.02, 65584.03,\n65584.04, 65584.05, 65584.06, 65584.07, or 65584.08 are exempt from\nthe California Environmental Quality Act (Division 13 (commencing\nwith Section 21000) of the Public Resources Code).\npasoobserver says:\t09/13/2010 at 6:52 pm\nTo whatisup —- First of all, I reviewed AB 602 Assembly Bill. Thanks. I am sorry to inform you but AB 602 is not the LAW as you so stated in your blog. I contacted the Deputy Chief Council’s office in Sacramento handling AB 602 to confirm your misstatement of facts. You know,in the English language, It shouldn’t be so difficult to answer some simple questions with a “YES” or “NO” answer. Yet, you are reluctant to do so, but you go on and on with a thesis along with some rhetoric. I never talked about a court suit over the “water issue”, I asked YOU, not about waiting for a court decision. Maybe, you did with some other people. Also, I was not ranting about the wineries usage of water. My response to you on your vague question about “there are people not paying their fair share for their use of water”. 
I related, are you talking about the wineries? I am well aware that most of the wineries are outside the city limits using the same aquifer. You took my question out of context., nice try! You are just being a popinjay and rhetorical. Also, you didn’t answer another question about “what is the unit cost of water” in Templeton? as compared to Paso Robles.\nwhatisup says:\t09/13/2010 at 8:54 pm\nI am on a well. I am sure you are capable of doing your own homework. I also am quite sure if you really contacted the Deputy Chief Counsel’s Office you have been set straight. What I gave you is a proposed small adjustment in the wide range of laws that make up the California Housing element. I assumed you could stumble onto the facts based on what I gave you. By the way, I believe you can review the Paso Robles Housing element plan on the City’s website or at the Library. The California Housing Element Laws that all cities and counties have to follow have been in place for almost 25 years. I realize you don’t actually have a clue how to look the laws up. Either educate yourself or keep making a fool of yourself, your choice. A simple Google search of California Housing Element Laws will get you going. Good Luck!\nTO WHATISUP — I WOULD LIKE TO KNOW WHAT LAW YOU ARE REFERRING TO THAT SAYS “WE” THE PEOPLE HAVE TO SUBSIDIZE NEW DEVELOPMENT? AGAIN, FOR THE THIRD TIME, YOU FAILED TO ANSWER MY QUESTIONS POSED TO YOU IN MY PRIOR RESPONSES TO YOU ON SEPT.10TH &11TH. IS THERE A REASON WHY YOU DON’T WANT TO ANSWER THEM? YOU DO WHAT OUR ELECTED OFFICIALS DO SO WELL, AND THAT IS “IN ONE EAR AND OUT OF THE OTHER EAR” IT SEEMS TO ME THAT YOU ARE EITHER EMPLOYED BY THE CITY OR YOU HAVE OTHER DEALING WITH THE CITY, SO BE IT. IT APPEARS TO ME THAT YOU THINK THE CITY DOES EVERYTHING RIGHT. APPARENTLY, YOU PRESENT YOURSELF AS BEING VERY BIAS ON CITY DECISIONS. IT LIKE THEY CAN’T DO ANYTHING WRONG ACCORDING TO YOUR LOGIC. THEY KNOW WHAT IS BEST FOR THE CITIZENS OF PASO,THAT IS A GOOD EXAMPLE OF ARROGANCE ALONG WITH NARCISSISM.\nWHAT PEOPLE ARE YOU TALKING ABOUT THAT DOESN’T PAY THEIR FAIR SHARE OF WATER? ARE YOU REFERRING TO THE WINERIES USING THE SAME AQUIFER?\nI BELIEVE YOU RELATED THAT YOU RESIDE IN TEMPLETON, BUT YOU OWN PROPERTY IN PASO. BY THE WAY, WHAT IS THE COST PER UNIT OF WATER USAGE IN TEMPLETON COMPARED TO PASO? OF COURSE, TEMPLETON IS IN AN UNINCORPORATED AREA (COUNTY JURISDICTION).\nWELL, I GAVE YOU SOME SUGGESTIONS ON HOW TO PAY FOR THE NACIMIENTO WATER PIPELINE AND SEWER TREATMENT PLANT. ALSO, REMEMBER IT’S THE CITIZENS’ MONEY THAT IS BEING SPENT. WHAT IS MOST IMPORTANT OF ALL, IS LET THE CITIZENS OF PASO DECIDE WITH THEIR VOTE ON HOW TO FINANCE THIS HUGE CAPITAL IMPROVEMENT PROJECT EXPENDITURE. JUST BE IN COMPLIANCE WITH STATE PROPOSITION 218 AND STOP CIRCUMVENTING THE LAW.\nWOULD YOU OBJECT TO HAVING TO FINANCE SOME NEW BONDS ON YOUR PROPERTY TAX BILL AS A ” SPECIAL TAX” OR AN ASSESSMENT TAX” TO PAY FOR THE NACIMIENTO WATER PIPELINE AND SEWER TREATMENT PLANT? A PERCENTAGE OF PASO CITIZENS FINANCE LOCAL SCHOOL BONDS ON THEIR PROPERTY TAX BILL AND DON’T HAVE ANY KIDS GOING TO SCHOOL. HOW ABOUT THAT COMPARISON FOR YOU TO THINK ABOUT? WHAT SAY YOU?\nI say less CapsLock, please.\nwhatisup says:\t09/12/2010 at 11:41 pm\nI have answered your questions. I have been quite detailed in my answers and I am sorry if you can’t deal with the detail. I guess it is your inconvenient truth. You do seem to like to deflect and go around in circles. 
Another example, now you are ranting about the wineries using the same aquaifier as the City. Let me be clear for you, I don’t like the amount of water the wineries are using. However, the wineries are in the County, not in the City and the City can’t do anything about it. They wineries are allowed to take the water they are taking even if it drops the City’s water levels in their wells. You need to complain to Sacramento. It sounds like you just don’t want to pay anything for the infrastructure because you really just don’t want it built.\nSeveral of your observations of my opinions are bizarre considering I have stated several times I believe the Courts need to decide if Paso Robles has, or has not followed the rules as to funding the infrastucture. Obviously, as I have stated before, if the City loses the lawsuit the infrastructure will have to be paid out of the City’s General Fund until a new method of payment is voted on by the Citizens of Paso Robles. Pretty clear.\nYour idea of charging based on a special assesment rather than the amount of water a property uses means that people who use little water, but live on a more expensive property will pay more than their share, based on their water usage. In addition, how do you deal with a rental unit where the renter is supposed to pay the water bill? Your idea is inherantly unfair, but my guess is it will favor you, so you don’t care if it is unfair and other people would pay part of your share. You also have decided that since I have alternative ideas to yours I must work for, or have business with the City of Paso Robles, another attempt to deflect from the issue. However, once again, I have never worked for the City or have ever done business with the City and don’t expect to ever do business with the City. I do own property in the City which is why I pay attention. Finally, it turns out there needs to be a fix to the housing element laws, the existance of which you are questioning. As I understand it the fix to the housing elemnt laws is because of some lawsuit. This should give you all the information you need to educate yourself on the California Housing Element laws that every city and county in California has to follow:\nBILL ANALYSIS ————————————————————\n|SENATE RULES COMMITTEE | AB 602|\n|Office of Senate Floor Analyses | |\n|1020 N Street, Suite 524 | |\n|(916) 651-1520 Fax: (916) | |\n|327-4478 | |\n———————————————————— THIRD READING\nBill No: AB 602\nAuthor: Feuer (D), et al\nAmended: 8/20/10 in Senate\nSENATE TRANSPORTATION & HOUSING COMM : 6-3, 6/29/10\nAYES: Lowenthal, DeSaulnier, Kehoe, Pavley, Simitian, Wolk\nNOES: Huff, Ashburn, Harman\nASSEMBLY FLOOR : Not relevant\nSUBJECT : Statute of limitations on housing element\nSOURCE : California Rural Legal Assistance Foundation\nHousing California DIGEST : This bill states the intent of the Legislature\nin enacting this bill to modify the courts opinion in Urban\nHabitat Program v. City of Pleasanton (2008) 164\nCal.App.4th 1561, with respect to the interpretation of\nSection 65009 of the Government Code, and revises and\nclarifies statute of limitations and remedies for specified\nhousing related challenges.\nSenate Floor Amendments of 8/20/10 revise the statute of\nlimitations and remedies for specified housing-related\nANALYSIS : The Planning and Zoning Law requires cities\nand counties to prepare and adopt a general plan, including\na housing element, to guide the future growth of a\ncommunity. 
Following a staggered statutory schedule,\ncities and counties located within the territory of a\nmetropolitan planning organization (MPO) must revise their\nhousing elements every eight years, and cities and counties\nin rural non-MPO regions must revise their housing elements\nevery five years. These five- and eight-year periods are\nknown as the housing element planning period.\nBefore each revision, each community is assigned its fair\nshare of housing for each income category through the\nregional housing needs assessment (RHNA) process. A\nhousing element must identify and analyze existing and\nprojected housing needs, identify adequate sites with\nappropriate zoning to meet its share of the RHNA, and\nensure that regulatory systems provide opportunities for,\nand do not unduly constrain, housing development. The\nreviews both draft and adopted housing elements to\ndetermine whether or not they are in substantial compliance\nwith the law. The Planning and Zoning Law and the Subdivision Map Act\nalso includes a number of sections governing zoning and\nentitlements specifically related to housing, including:\n? The Housing Accountability Act, which requires a city or\ncounty to make one or more specified findings in order to\ndisapprove a particular housing development.\n? A provision requiring cities and counties, when adopting\nan ordinance which limits the number of housing units\nwhich may be constructed on an annual basis, to make\nfindings as to the public health, safety, and welfare\nbenefits that justify reducing the housing opportunities\nof the region. ? Density bonus law, which requires cities and counties to\ngrant a developer a density bonus, incentives, and\nconcessions when the developer proposes to include\nspecified percentages of affordable housing within a\ndevelopment. ? The Least Cost Zoning Law, which requires cities and AB 602\ncounties to designate and zone sufficient vacant land for\nresidential use with appropriate standards to meet\nhousing needs for all income categories and to contribute\nto producing housing at the lowest possible cost.\n? A requirement that, when determining whether to approve a\ntentative subdivision map, a city or county shall apply\nonly those ordinances, policies, and standards in effect\nas of the date the developer’s application is deemed\nPrior to a recent court decision, it was understood that\ncurrent law allowed a party to challenge the adequacy of a\ncity’s or county’s housing element at any time during a\nplanning period, provided that the challenger brought the\naction “in support of or to encourage or facilitate the\ndevelopment of housing that would increase the community’s\nsupply of [affordable] housing.” The challenging party was\nrequired first to serve the city or county with a notice\nidentifying the deficiencies in the housing element. After\n60 days or the date on which the city or county took final\naction in response to the notice, whichever occurred first,\nthe challenging party had one year to file the action in\ncourt. This process and statute of limitations also\napplied to actions brought pursuant to the housing-related\nstatutes listed above. In 2006 Urban Habitat Program brought suit to challenge the\nCity of Pleasanton’s housing policies, including the city’s\nannual cap on housing permits and the city’s cap on the\naggregate number of permissible housing units, both of\nwhich Urban Habitat claimed were insufficient to allow the\ncity to meet its RHNA obligation. 
In 2008, the First\nDistrict California Court of Appeals issued an unpublished\ndecision in the case of Urban Habitat Program v. City of\nPleasanton allowing the case to proceed with respect to\nsome causes of action, but ruling that the challenge to the\nhousing element itself was time-barred. The court stated:\nAlthough the statute does not specify the time within\nwhich [a deficiency] notice must be given, it is our\nconclusion that the statute must be interpreted as\ncontaining a time limit within which this requirement\nmust be met? In sum, a party bringing a challenge AB 602\ngoverned by section 65009, subdivision (d), has 90\ndays from the date a legislative action is taken or\napproval is given to notify the local land use\nauthority of any claimed deficiencies in such an\naction or approval. Its claim then accrues 60 days\nafter it gives this notice.\nIn other words, instead of being able to initiate a\nchallenge to a deficient housing element at any time during\nthe planning period, housing advocates and other interested\nparties may now only initiate such a challenge by\nsubmitting a deficiency notice within 90 days of the\nhousing element’s adoption.\n1.Removes from the current list of city or county actions\nwhich may be challenged pursuant to Government Code 65009\nnotice and accrual provisions those actions related to\nthe Housing Accountability Act, the Subdivision Map Act,\nand the application of a Density Bonus ordinance to a\nparticular project, all of which are project-specific\nactions. The bill maintains the ability to use these\nnotice and accrual provisions to challenge the adequacy\nof a city’s or county’s density bonus ordinance\n2.Extends lengthening the time in which a deficiency notice\nmay be served to cover all remaining city or county\nactions described in this section of law, as opposed to\njust housing element challenges. In other words, the\namendments apply the longer timeframe to serve the\ndeficiency notice to actions relating to the Least Cost\nZoning Law, annual limits on housing permits, and the\nadequacy of a density bonus ordinance, in addition to\nhousing element law. 3.Provides that an entity challenging such an action in\nsupport of affordable housing may serve the deficiency\nnotice up to five years after the city’s or county’s\naction. After 60 days or the date on which the city or\ncounty takes final action in response to the notice,\nwhichever occurs first, the challenging party has one\nyear to file an action in court, except that the lawsuit AB 602\nmay not be filed more than five years after the city’s or\ncounty’s action. In other words, the entity must file\nthe lawsuit within one year of the expiration of the\ndeficiency notice or within five years of the city’s or\ncounty’s action, whichever occurs first.\n4.Provides that a housing element from a prior planning\nperiod may not be challenged if the city or county has\nadopted a revised housing element for the new planning\nGovernment Code 65755 . Current law requires a court, if it\nfinds any portion of a general plan, including a housing\nelement, out of compliance with the law, to include within\nits order or judgment one or more of the following remedies\nfor any or all types of developments or any or all\ngeographic segments of the city or county until the city or\ncounty has complied with the law:\n? Suspend the authority of the city or county to\nissue building permits.\ngrant zoning changes and/or variances.\ngrant subdivision map approvals.\n? 
Mandate the approval of building permits for\nresidential housing that meet specified criteria.\n? Mandate the approval of final subdivision maps for\nhousing projects that meet specified criteria.\n? Mandate the approval of tentative subdivision maps\nfor residential housing projects that meet specified\nThis bill clarifies that in any action or proceeding\nbrought pursuant to the notice and accrual provisions of\nGovernment Code Section 65009 described above, neither the\ncourt remedies described above nor any injunction against\nthe development of a housing project shall abrogate,\nimpair, or otherwise interfere with the full exercise of\nthe rights and protections granted to an applicant for a\ntentative map or a vesting tentative map under specified\nprovisions of the Subdivision Map Act or to a developer\nunder a specified provision relating to development AB 602\nUnder current law, HCD operates a number of grant programs\nto which cities and counties may apply. In many cases, the\nlaw requires a city or county to have an HCD-approved\nhousing element in order to be eligible for funding. This bill provides that if a third-party challenges the\nadequacy of a housing element in court and the court finds\nthat the housing element substantially complies with all of\nthe requirements of housing element law, the element shall\nbe deemed to be in compliance for purposes of state housing\nThe statutory language interpreted by the court and at\nissue in this bill was added to statute by AB 998 (Waters),\nChapter 1138, Statutes of 1983, a bill sponsored by the\nLeague of California Cities and the California Building\nIndustry Association. AB 998 created a short statute of\nlimitations period for land use decisions generally but\nprovided a specific exception to protect the ability to\nchallenge deficient housing elements. The Senate Housing\nand Land Use Committee and the Senate Third Reading\nanalysis of the bill stated that the bill:\nSpecifies that for challenges in support of low- and\nmoderate-income housing requirements, the petitioner\nshall notice local government 60 days prior to filing\naction. The [one-year] statute of limitations then\nbegins on the first day the legislative body fails to\nIn the intervening 25 years prior to the Urban Habitat\nruling, housing advocates filed and successfully settled at\nleast ten cases in which the 60-day deficiency notice was\nsent more than 90 days after adoption of the city’s or\ncounty’s housing element. In none of these cases was the\ntimeliness on the advocates’ suit contested. Likewise, six\nbills amended other portions of this statute during those\nintervening years, and there was never any controversy\nsurrounding the lack of a deadline for housing advocates to\nserve a deficiency notice nor any attempt to change the AB 602\nstatute in this regard. Current level of housing element compliance . According to\nHCD’s website as of June 7, 2010, only 46 percent of cities\nand counties have adopted an HCD-approved housing element\nfor the current planning period that began in 2005 for the\nSan Diego region, 2008 for the Southern California, Fresno,\nKern, and Sacramento regions, and the summer of 2009 for\nthe remaining areas of the state. Unlocking the private market . The purpose of housing\nelement law is to create opportunities for the private\nhousing market to function. 
Builders cannot build without\naccess to appropriately zoned land, and current land use\nplans in many cities and counties in California fail to\nprovide sufficient opportunities to accommodate projected\npopulation growth. The San Diego Association of\nGovernments’ Regional Comprehensive Plan describes this\ntypical California paradox in the following way:\nUnder current plans and policies, more than 90 percent\nof [the San Diego region’s] remaining vacant land\ndesignated for housing is planned for densities of\nless than one home per acre, and most is in the rural\nback country areas dependent upon scarce groundwater\nsupplies. And of the remaining vacant land planned for\nhousing in the 18 incorporated cities, only about\nseven percent is planned for multifamily housing. When\ntaken together, the current land use plans of the 19\nlocal jurisdictions do not accommodate the amount of\ngrowth anticipated in our region. SANDAG’s population\nforecast, which reflects the current adopted local\nland use plans in the region, projects that while\npopulation will increase by 37 percent by 2030,\nhousing will grow by just 30 percent. The forecast\nshows that if local plans are not changed, demand for\nhousing will continue to outpace the supply, just as\nHousing element law addresses this problem directly by\nrequiring cities and counties to zone land at appropriate\ndensities to accommodate the projected housing needs of all\nincome groups and to remove constraints that prevent such\nsites from being developed at the allowed densities. AB 602\nCities and counties, however, are not required to build\nhousing because that is the role of private developers.\nThe law holds cities and counties accountable only for that\nwhich they control: zoning and land use entitlements.\nWithout the ability to enforce housing element law, the\nmarket’s ability to meet housing demand may well remain\nlocked up.\nFISCAL EFFECT : Appropriation: No Fiscal Com.: No\nSUPPORT : (Verified 8/23/10)\nCalifornia Rural Legal Assistance Foundation (co-source)\nHousing California (co-source)\nAdvocates for Affordable Homes in Fremont\nCalifornia Coalition for Rural Housing\nCommunity Housing Improvement Program\nCommunity Housing Works\nEden Housing\nFair Housing of Marin\nGrassroots Leadership Network of Marin\nKennedy Commission\nPublic Advocates, Inc\nSan Diego Housing Federation\nSelf-Help Enterprises\nSierra Club of California\nAmerican Planning Association, California Chapter\nJA:nl 8/23/10 Senate Floor Analyses SUPPORT/OPPOSITION: SEE ABOVE\npasoobserver says:\t09/11/2010 at 11:17 pm\nTo whatisup — Thank you for your response to my comments. However, you failed to answer some of my questions that I mentioned to you. It’s almost like dealing with some City officials. They just let the public vent at their bimonthly council meetings. In my opinion, it’s difficult to deal with narcissism and arrogance. Over the years, there has been some very good input to our elected officials on how to proceed on the Nacimiento water pipeline,but it fell on deaf ears. You wanted me to answer some of your questions,but you did not answer some of my questions. Again, are you willing to subsidize new development?,Yes?or No?, are you willing to pay for a commodity that you are not receiving? Yes?or No? and another question for you. Are you willing to pay over 300% on your water bills within the five (5) year plan that the City has proposed? Also, the water rates will be subject to later increases too. 
By the way, I do concur with the city’s plan of “you pay for the amount of water units you use”. (748 gal=one unit). However, the higher water rates are not good for our senior citizens on fixed incomes and other struggling families in our community. My first suggestion years ago was desalination. The response was it was too expensive. Of course, now it is more expensive. I would suggest that our elected officials recall the existing bonds (The bonds can be recalled early). The City council can explain to the citizens in detail with financing of new bonds at a lower interest rate as of now for the sewer plant and Nacimiento water pipeline and present their new proposal in compliance with Proposition 218. Let the citizens of Paso VOTE on the financing bonds for their approval. Most of the citizens,that I had spoken to were not happy with the way our City Council handled the Nacimiento water pipeline project. The citizens of Paso didn’t give our City Council a “BLANK CHECK” for $176 million to spend without voter approval. I would suggest that it be a “special tax” or “an assessment” be levied on our property taxes. A percentage of those bonds can be deducted on Federal Income taxes. As it is now, a” fee” on a capital funding project is not deductible. Of course, there are homeowners would not go for this suggestion due to our poor economy. My analogy mentioned above would be, you would get something back on a “special tax” or an “assessment” verses nothing on a “fee”. What say you?\nwhatisup says:\t09/12/2010 at 9:02 am\nUnfortunately the law says we have to subsidize new development in California. I don’t like it, but it is the law. I know paying using the property taxes was bandied about. The argument against it was it would mean some would be paying for water they aren’t using and others could be big water users, but pay a small special assessment on their property taxes. I think the decision that was made to base it on usage was out of fairness. It seems to me if people are using water and not paying their share of the costs it is not fair. The Senior issue is very difficult. If someone is retired for twenty years is it realistic to think prices don’t go up during the 20 years of retirement. Think what prices were in 1990 compared to today. Should Seniors never have to pay for capital improvements? Paso Robles also had very low water rates. Rates that are no longer possible given the circumstances. Desalination will happen eventually. California is out of water. If you want to pay $1,000,000 a gallon there is no more allotable water of any consequence in California. The expense will be tremendous — still have to build a desalination plant, still have to build a pipeline. I don’t know if the plant has to be built along the ocean or if the salt water could be piped over to Paso Robles. If it has to be built along the ocean, Paso Robles doesn’t own land on the ocean and, in any case, the environmentalists will keep it in courts for years as they have done so for other proposed desalination plants in Southern California. Eventually necessity will force desalination past the environmentalists, but not yet.\npasojim says:\t09/13/2010 at 7:46 am\nWhatisup – On one of your previous post you made the comment you haven’t heard any of the legal suggestions for the water issue, But you obviously have. That is a good thing. So we can move the discussion ahead.\nOnce, again this was handled incorrectly by our city custodians from the beginning. And now here we are. 
The public is not supporting this very expensive, very limited benefit project. As you said, until a plan is developed that the public can support, things don’t look good.\nAll this discussion about the water issue has only reinforced my opinion the issue hasn’t been about water, only how the plan should be paid for. Or more specifically, to what extent do we allow our elected custodians and our un-elected GOD tzar decide which laws they will follow and which laws they will ignore. When the City GOD tzar tell citizens at a council meeting if we don’t agree with the City’s plan, then we should just sue him, and when the City Attorney explains to a citizen at a City Council meeting that she does have to respond to their questions because she does NOT work for them. When the project is voted down by the citizens and the council brings it right back up, it is clear that our elected representatives are not doing their job providing direction to their employees and listening to and representing the CITIZENS.\nThe subject of the original post was the need to elect different representation. I think with all the conversation made on this post, as well as the post on Cal Coast about the hiring of the new legal firm you were involved in, Supports my original opinion.", "answers": ["Fairness."], "length": 5701, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "8fbf0a6531d9250e6bcda0c7ba456441f6d4073bf08de826"} {"input": "Besides the Boeing C-17, what other transport aircraft is the IAF considering for acquisition?", "context": "Transport Aircraft for IAF - Page 67 - Bharat Rakshak\nTransport Aircraft for IAF\nRe: Transport Aircraft for IAF\nPostby abhik » 17 Nov 2014 05:55\n+1, Air India recently sold their entire fleet of Boeing 777s.\nafaik the A330 MRTT does not make any structural mods or add anything internally in cargo or passenger cabin. it just relies on the intrinsic 110 tons of fuel. external refueling pods are added and internally the control station and cameras for the operator i guess.\nso its a easy conversion from a passenger layout to the AAR mode - mostly ripping out the passenger cabin of all extra stuff and retuning the FCS for any changes in COG.\nthis should have been pursued years ago\nthe IL78 adds a palletized drum tank system inside its cargo bay due to paucity of intrinsic fuel but it can be removed and a/c converted back to cargo hauling or send off to russia for Phalcon structural mods if we want it that way. they will however need to change engines to PS90 as they have the old engines\nhttp://www.airplane-pictures.net/images ... 7/5616.jpg\nthe RAF is already gone that route in 2011\nhttp://www.defensenews.com/article/2011 ... -Refuelers\nLONDON - Airbus Military has delivered the first of 12 A330-200 airliners due to be converted into in-flight refueling planes for the British Royal Air Force by Cobham Aviation Services.\nThe aircraft, part of an order of 14 jets, will be modified with aerial refueling pods and other equipment at Cobham's newly refurbished facility in Bournemouth, England. The first two aircraft have already been converted by Airbus in Spain.\nThe multirole tanker aircraft are being provided to the RAF under a private finance initiative service deal led by Airbus parent EADS.\nSeven of the planes will be operated full time by the RAF. 
The remainder will be available for lease in the third-party market, with the proviso that they can be returned to British military service to meet any surge in demand.\nAll of the aircraft, to be known as the Voyager in RAF service, will be fitted with two wing-mounted refueling pods, while half the fleet will also be fitted for, but not necessarily with, a center-line mounted unit. The refueling units are being supplied by Cobham.\nThe first aircraft will become operational in a passenger and freight transport role by the end of this year to start relieving pressure on the RAF's hard-pressed assets.\nDespite the increasing fragility of current RAF in-flight refueling operations, the new capability is not contracted to start being used in this role until 2015.\nAll 14 Voyagers are scheduled to be available for RAF operations by the middle of the decade. The A330 will replace the increasingly ancient Tristar and VC-10 refuelers now in service.\nPush the 6 Il-476 from refueler to AEW duty. Phalcon them up.\nNot sure if that is a good path to follow. For one they all should be sent to pasture in about 8 years. Then if they are to be phalconed up - that requires major structural changes. Not worth that cost.\nWhatever happened to the two new ones that were supposed to be ordered?\nThe IL78 can be easily converted back to IL76 cargo hauling. Only the fuel tank inside the cargo bay needs removal... in fact that was even mentioned in initial days as swing role fuel/cargo.\nPostby Cybaru » 17 Nov 2014 07:55\nI am talking about the new il78 that we ordered recently in refueling role. Sorry for the mix up. They are the same platform, that is why I used 476 or 76 to identify it.\n777 carries more internal fuel than the A330. We suck!\nFrom the KC-777 program.\nhttp://www.globalsecurity.org/military/ ... kc-777.htm\n\"the KC-777 would be 209 feet long with a wingspan of 212 feet, 7 inches. That's the same size as the 777-200LR commercial jet. The KC-777 would be able to carry far more fuel, cargo and passengers than either the KC-767 or the Airbus A330 tanker. The KC-767 offers more operational flexibility, while the KC-777 would be better suited for long-range strategic missions in which more cargo needs to be delivered. The KC-777 would be able to carry more than 350,000 pounds (160,000 kilograms) of fuel and offload more than 220,000 pounds (100,000 kg) of it on a mission of 500 nautical miles (900 kilometers). On the other hand, the KC-767 can lift off with more than 200,000 pounds (90,000 kg) of fuel and offload more than 130,000 pounds (60,000 kg) in a similar mission. The KC-777 would be able to deliver 200 percent more fuel after flying 1,000 nautical miles than older Air Force KC-135s. The KC-777 could carry up to 37 pallets of cargo, compared to the 19 pallets for the KC-767.\"\nPostby Cosmo_R » 18 Nov 2014 04:31\nViv S wrote: From Ajai Shukla's article -\nHAL points out that, since each Avro flies barely 350 hours every year, most of them have a residual life of about 80,000 hours. In a request for information (RFI) released on August 15, HAL has proposed replacing the aircraft's engines (Rolls Royce Dart) with \"modern fuel efficient engines\".\nSo, the IAF's Avros have a residual life of 228 years at the current rate of usage. Ain't life grand?\nAt zero up time, it could reach infinity.\nRelax Cy. KC777 has no client.
Usaf is going with kc767 and almost everyone else with a330.\nWe don't have the number of heavies and long missions of usaf, else I would say convert an124.\nKC777 will be extremely expensive given the demand/backlog for the 777 and the 777x. Any buyer would have to virtually pay for the increase in capacity.\nI think the 767 production line is closed. So the proposed KC767 Boeing is supposed to deliver 18 by 2017... that can be managed from mothballed and cargo hauler airframes on the market.\nBut to meet the final order of around 180 will they not have to open the production line unless such a huge number were available on the market?\nI do get the spider feel this program again will be cancelled in favour of an in-production plane like the 777X?\nI wasn't suggesting we get the KC777. All I was doing was comparing what possibly the 777 could offload compared to the A330. It carries 171000 liters of fuel versus 130000 liters that the A330 carries. If we had older 777s in stock, we could have quite easily converted them to this config. The cost to us would be minuscule, just the refurbishing cost vs acquiring a new type.\nSingha wrote: I think the 767 production line is closed. So the proposed KC767 Boeing is supposed to deliver 18 by 2017... that can be managed from mothballed and cargo hauler airframes on the market.\nThe line is open; they have a backlog of around 50 (all FedEx), with FedEx placing a small order this year. The Pegasus order is for all new builds, and so will the follow-on order. The only reason for any nation to buy the 767 tanker is going to be because of the ability to hard bargain with Boeing given that the commercial future of the 767 is dead. This also allows a potential buyer to purchase cheap spares from the open market, or club its logistical and inventory purchase with that of the USAF. Other than that and perhaps availability (which would be doubtful once the USAF pushes through a larger order) there is really no technical reason to purchase this tanker over the A330, which by all accounts is a superior tanker in addition to being a much much better airliner in general.\nIAI is doing conversions for the 767 and it's called the 767 MMTT\nhttp://www.iai.co.il/sip_storage/FILES/1/38471.pdf\nCybaru wrote: I wasn't suggesting we get the KC777. All I was doing was comparing what possibly the 777 could offload compared to the A330. It carries 171000 liters of fuel versus 130000 liters that the A330 carries. If we had older 777s in stock, we could have quite easily converted them to this config. The cost to us would be minuscule, just the refurbishing cost vs acquiring a new type.\nThe cost of converting a commercial airliner to a tanker, certifying it and running a full-fledged test program is by no means small. There is absolutely no justification for that sort of cost over and above the capability that the A330 provides. If it were a certified and tested conversion, that would be a different matter.\nPostby Kartik » 21 Nov 2014 12:27\nCybaru wrote:\nWhy? If the airframe can handle more flight hours, why not?\nBecause it is a very very old airframe as is. Maintenance spares won't be available easily even as of now, then imagine how it'll be 20-30 years from now... and as things stood anyway, the HS-748 offered very little in terms of payload and range versus a C-295 class aircraft. The C-295 offers a very credible light transport, whereas the HS-748's role in the IAF was more akin to a transport trainer and for communication duties with little operational use.
Having seen a dozen or so HS-748s parked at Vadodara airport all through my childhood, I never once saw one in the air. They just seemed to be stored out in the open. Upon asking an IAF transport pilot who was my friend's father, he remarked \"zyaada kaam ke nahi hain yeh\" (these aren't of much use).\nWhy would you expend more capital on what is essentially an obsolete airframe, even if theoretically it had not yet reached its service life? You'd have to re-engine it, put new avionics on board and even that wouldn't suffice for para dropping requirements... it was operationally never suitable for para dropping, which is an important mission for transport aircraft, and it had deficiencies in hot and high climes as well.\nUnfortunately, the 748 was never meant to be a military transport. At the request of the IAF, its door was enlarged to enable larger cargo items to be loaded and to allow para dropping without hitting the tail plane. However, to load a jeep in it, a 30-ft long ramp was required. The jeep would drive in and insert its front wheels into the aircraft. Then it had to be manually lifted and turned to get it in. Unloading it was just as difficult. Para dropping of troops or cargo even from the aircraft with the enlarged door was considered too dangerous with the risk of hitting the tail plane. The aircraft's performance at hot and high airfields was hopelessly inadequate. Eventually the IAF acquired the tail-loading An-32s, which were powered specifically for the IAF's need for operating in the Himalayas.\nBRF article - Avro in IAF service\nNow unless you want to overcome all these through a costly, time-consuming engineering re-design program, that too without access to original documents since this airplane was designed in the 1960s, there is no question of keeping them going for another 40 years. By which time the original design would be over 80 years old and with no one on earth but the IAF as an operator and HAL as the agency supporting it. Hardly a situation anyone would want.\nabhik wrote: +1, Air India recently sold their entire fleet of Boeing 777s.\nOnly 5 of the Boeing 777-200LRs, to Etihad Airways, which IMO was a bad decision... they could have reconfigured the airplanes with just 2 classes and continued to fly them to the US, non-stop.\nThe remaining 3 777-200LRs were offered for lease but are still a part of AI's fleet since they didn't find any takers. This particular model hardly sold much and was developed for ultra-long range flights... it was the least successful 777 model and clearly AI goofed up on the configuration by going for these in place of the 300ER. The economics eventually didn't make too much sense for AI.\nThere are 13 777-300ERs as a part of their fleet and their economics are much better.\nGovt.
to decide tomorrow on whether to go ahead and allow the IAF to verify the technical details of the C-295 bid by Tata-Airbus instead of scrapping the tender due to single vendor situation.\nThe government will decide on Saturday whether to press ahead with the Rs 13,000 crore mega project for the private sector to supply 56 medium transport aircraft to the IAF despite only a single bidder, the Tata-Airbus consortium, being in the fray.\nThough the defence acquisitions council (DAC) chaired by Manohar Parrikar will take the final decision, MoD sources on Tuesday said the \"emerging dominant view\" is that green signal should be given to the crucial project designed to promote Indian private sector's entry into the domestic aerospace arena with foreign collaboration.\n\"The Tata-Airbus technical and commercial bid is a credible offer submitted in a competitive environment. The other seven contenders backed out for one reason or the other,\" said a source.\nIAF has now sought the clearance of the DAC -- the first such meeting to be chaired by Parrikar after becoming defence minister on November 10 -- to begin technical evaluation of the C-295 aircraft offered by Airbus Defence & Space and Tata Advanced Systems.\nThough it has become a single-vendor situation, the DAC can approve it if it wants as per existing procurement procedures. Of the eight foreign aviation majors that got the global tender, American Boeing and Lockheed-Martin as well as Brazilian Embraer said they did not manufacture the class of aircraft being sought by IAF.\nRefusing to take part in the tender, Russian Rosoboronexport said it wanted a fresh design and development project. Antonov of Ukraine wanted yet another extension of the bid submission deadline due to the ongoing conflict in Crimea. Swedish Saab said it had shut down its assembly line for such aircraft.\nThen, Alenia Aermacchi was linked to Italian conglomerate Finmeccanica, which has been slapped with \"a partial ban\" after the infamous VVIP helicopter scandal. \"All this left only the European consortium Airbus. The DAC will have to take a call since re-tendering may lead to the same situation,\" said the source.\nIncidentally, it was the Modi government's first DAC in July -- then headed by Arun Jaitley - which revived the Avro replacement project after it was put on hold by the UPA-2 regime last year due to strong opposition from the powerful PSU lobby and ministers like Praful Patel, as reported by TOI earlier.\nApart from the critical need to encourage the private sector to enter defence production in a big way, especially in the aerospace arena where Hindustan Aeronautics enjoys a monopoly, its felt the defence PSU's order books are already overflowing with projects.\nFingers crossed. Hopefully sense will prevail.\nWhy was lr got? Er is capable of Dubai to sfo nonstop.\nLr is overkill unless we want Delhi to Peru .\nSingha wrote: Why was lr got? Er is capable of Dubai to sfo nonstop.\nthey wanted it for non-stop routes from India to the west coast of the US. But with fuel prices going higher and with the lower seat count on the 777-200LR, the seat mile costs grew too high. A 3 class configuration only made matters worse. A higher density configuration with more economy class seats and just 12-15 Business class seats would have been better perhaps, especially if they didn't have very high First Class load factors.\nLR and ER is better if you want to have a better payload down below for long haul. 
Ultimately, the best bet is going to come form the 787's that take a fewer people (so you can do the longer routes) with still a competitive CASM, and the B and F class folks will pay good money for newer aircraft.\nPostby Kartik » 04 Dec 2014 12:55\nLets see if there is any forward movement on the stalled MTA project once Putin arrives in New Delhi\nMajor defence deals to be signed during Putin-Modi summit\nIn this connection, it is expected that during the summit, Russia and India may ultimately resolve several long-delayed agreements on military-technical cooperation projects between the two countries and sign them finally for their implementation. These agreements, above all, include joint Fifth Generation Fighter Aircraft (FGFA) project and joint development of Multi-role Transport Aircraft (MTA).\nA final deal on FGFA for production has been delayed because the Indian Air Force (IAF) did not approve the design and work-share. Now Russia has reportedly agreed that the jet would be a two-seat design, not a one-seater. India’s work-share would also be increased from18 percent to 25 percent, and even up to 40-50 percent in the near future, in view of the steady development of the Indian aviation industry.\nDefence and SecurityAccording to the agreement, India’s stealth air-to-air missile “Astra” along with Indo-Russian BrahMos supersonic cruise missile will be mounted on the FGFA.\nThe preliminary design agreement on FGFA had been signed in 2010 between Indian HAL and Russian Sukhoi Design Bureau to build the jet for the use by both countries. The final design contract was to be signed in July-August 2012. But the deadline has already passed. According to the Indian media reports, under the programme, India is expected to build 200 fighter jets at the cost of $30 billion.\nFGFA is not the only Indo-Russia joint project. The two countries also signed an agreement on the joint development of MTA in 2007, based on Il-214 Russian plane. The cost of the $600 million project is being equally shared by the two countries. The MTA, when developed, will have ready market for 205 aircraft - 45 for the Indian Air Force, 100 for the Russian Air Force, and 60 more for exporting to friendly countries. The international market for MTA is estimated at 390 planes. Under the agreement, thirty percent of the annual production of planes could be exported to third countries.\nThe MTA was expected to go in service with the Russian and Indian Air Forces in 2015. But the project faced a number of problems, delaying the development of the MTA. The project got into rough weather after India felt there was nothing much for Indian engineers and scientists to do in the design and development of the MTA.\nHowever, all the issues related to the project were resolved with the Russians when the HAL undertook to carry out design and development of its work-share of MTA at Aircraft R&D Centre at Bangalore. Russian Ilyushin Design Bureau and the Irkut Corporation and HAL are participating in the project. The first flight is expected to take place in 2017-18.\nThe MTA would replace the AN- 32 aircraft being used by the IAF. It will be used for both cargo and troop transportation, para-drop and air drop of supplies, including low-altitude parachute extraction system.\nBrahMos missile exports a challenging proposition\nAnother key deal expected to be signed during the summit, is for the development of “BrahMos mini missile” by the Indo-Russian joint venture BrahMos Aerospace which manufactures supersonic cruise missile. 
BrahMos’ new CEO Sudhir Mishra recently said he was hopeful that a deal to develop the mini version of the missile will be signed during Putin’s summit with Modi.\n“We are hoping to sign a tripartite agreement between DRDO, NPOM lab and BrahMos Aerospace during the planned visit of Russian President in December,” Mishra said.\nHe said that the new missile will have a speed of 3.5 mach and carry a payload of 300 km up to a range of 290 km. In size, it will be about half of the present missile, which is around 10 metres long. The missile can be integrated with different platforms, including submarines and FGFA. It is planned to be inducted into service by 2017.\nModi-Abbott to upgrade defence ties\nA new dimension:\nIn a first, India and Australia will also set up a mechanism to discuss “synergies in integrating defence system”, including research and development cooperation on integrating defence equipment that both countries currently purchase, for example, U.S’s C-17 Globemaster III, according to officials.\n^^That report about MTA is fishy. First it says that India has nothing to learn from an existing design (duh) and then says the issue has been resolved. How? Next it says India's need is 45 planes to replace over 100 An-32s. It also speculates about the export potential which may be nonexistent unless we sell it for peanuts.\nThis is a scam which only aims to create screwdriver jobs at HAL, stall any attempt to introduce private players into the aviation market and continue the Russian gravy train. My fear is the Russkies have our testiments in a firm grip with key components of Brahmos, nuke subs, Su30mki etc and we may be jerked around.\n(They need to be more definitive about \"MTA\" - Multirole vs. Medium)\nThe Indians had not selected an engine (among other things) for the MTA with the Russians. Perhaps that has been resolved now.\nOn export numbers, IIRC, it was the responsibility of Rosoboronexport. ?????\nKartik wrote: The MTA would replace the AN- 32 aircraft being used by the IAF. It will be used for both cargo and troop transportation, para-drop and air drop of supplies, including low-altitude parachute extraction system.\nPardon my ignorance. The Avro and An-32 have different upgrade paths. How are the replacements for these venerable aircraft different in terms of use cases in IAF. Cannot one platform replace both these types? (Either MTA or C-295)\nIn this case, I feel they should have just gone with screwdrivergiri (production tech) and got to market first. There is no jet-powered transporter in this range! Just license produce the IL-214 with the PD-14M, glass cockpit and a state-of-the-art COTS avionics computer.\nIn my view, it was a low hanging fruit, which they completely messed up! They could have learnt on how to adopt the plane for the 160-200 seater.\nindranilroy wrote: They could have learnt on how to adopt the plane for the 160-200 seater.\nYes, the MTA project should fold the Avro, An-32 and the regional transport role and become a conversion project rather a development one. The driving numbers will come from the regional transport (thousands in India itself) rather than the Avro or medium transport roles (max 300 between them). This changes the ball game and introduces all kinds of possibilities. But I'm pretty sure that the Il-214/MTA is not the way to go because it will take a decade or more to arrive. A good possibility was another Antonov, the An-148 but it has some mechanical glitches apparently besides being bogged down in the Ukraine mess. 
Maybe the Russians can \"relocate\" the aircraft to Russia? The other possibility is the BAe-146, which is ironically another Avro. We should remember that both the HS-748 \"Avro\" and the An-32 were regional airliners that were converted to military use, not the other way around. HAL or a private firm will pick up a lot of experience in the conversion process itself.\nThe Sukhoi Superjet is already in production/orders, with over 100 for Russian and intl. customers. It is ideal for regional transport, perfect for flights to smaller Tier-2/3 cities from metros. If we really want a regional jet this is the fastest way to go; we can set up a manufacturing unit here for the same at an HAL unit.\nPostby shaun » 05 Dec 2014 15:24\nIt's an international project, with components outsourced from different international vendors. Over 30 foreign partnership companies are involved in the project and it is partly financed by Italy.\nThe Sukhoi is good for passenger use but won't be suitable for military, rough-field use. The shoulder-wing jets like the An-148 have slower speeds and better ground clearance. The BAe-146 was used by Druk Air in Bhutan so it should do OK in the ALGs. If we don't fold our requirements then we should go with something like the Superjet, which we will at least be able to make in India and also modify to stretched versions. Unless we have a clear path to operational clearance within 10 yrs for the RTA project vetted by our top industrial houses, it is pie-in-the-sky and should be dropped. The RTA will be big enough to keep 2-3 factories humming and leapfrog our capabilities. If we don't get our act together almost immediately, we will miss the boat, just like our trainer fiascos.\nI don't think the Superjet fits into our scheme of things. We should think as a country and see to it that our programs don't trample on each other.\nFirst, the more certain ones:\n1. Mahindra's NM5 and Airvans can take care of the low-cost but sturdy 5, 8, 10 and 18-seater section.\n2. Saras had such great potential for being the high performance 14-18 seater. But I have almost given up on it. This section will most probably be taken up by the Tata-built Do-228 NG.\n3. We should standardize the C-295 as the Avro/An-32 replacement and create a 70-80 seater variant out of it.\nAnd then the more wishful ones:\n1. If the RTA is going to be a jet, then make it a 100-130 seater. I don't expect the first prototype to take to the sky before 2025. I feel it is too big of a jump where we don't even have a base. With the LCA, we were at least license producing other fighters.\n4. Building on the IL-214, the MTA was on a more sure footing. But I can't see how the first prototype can take to the sky before 2019 (more than 10 years since MTAL was formed)! If the transport plane materializes, then one can imagine making a civilian 150-200 seater version of the same. But this program needs a push. Will Putin's visit be able to galvanize this into the next symbol of Indo-Russian cooperation? Probably not!\nPostby GeorgeWelch » 12 Dec 2014 23:39\nhttp://www.ctvnews.ca/canada/defence-de ...
-1.2144472\nThe Defence Department intends to purchase a Boeing C-17 Globemaster III, a large military transport plane that comes with a price tag of just under $200 million, CTV News has learned\nIt's difficult to get a good count, but by some sources, if this and the 4 Australia planes go through, there will only be 5 left.\nX-Posting from FGFA thread.\nDespite Putin’s visit, two pacts on military aircraft still in doldrums\nPresident Vladimir Putin may have come and gone but stalemate largely persists over two key long-pending India-Russian defence projects, the fifth-generation fighter aircraft (FGFA) and military multirole transport aircraft (MTA).\nThe deadlock over the MTA, which were initially envisaged to gradually replace IAF's ageing fleet of the medium-lift AN-32 aircraft, seems to be much more serious. India now wants to ascertain the cost viability of the twin-engine transport aircraft in comparison to similar planes available in the market.\nThere are also questions about the MTA's \"predicted timelines for delivery\" as well as its failure to meet the high-altitude requirements, which need to be answered before India even thinks of inking the full-scale contract for the project, said sources.\nPostby Gyan » 13 Dec 2014 12:29\nindranilroy wrote: I don't think Superjet fits into our scheme of things. We should think as a country and see to it that our programs don't trample on each other.\n1. Mahindras NM5 and Airvans can care of the low-cost but sturdy 5,8,10 and 18-seater section. Righto\n2. Saras had such great potential for being the high performance 14-18 seater. But I have almost given up on it. This section will most probably be taken up by the Tata-built Do-228 NG. We need future extended variants of presurrized aircraft like 30 seater Saras and say 30 seater unpressurized Do-328 NG.\n3. We should standardize the C-295 as the Avro/An-32 replacement and create a Civilian turboprop pressurized cabin 70-80 seater variant out of it.\n1. If the RTA is going to be a jet, then make it a 100-130 seater. Agreeeeeed I don't expect the first prototype to take the sky before 2025. I feel it is too big of a jump where we don't even have a base. With LCA, at least we were at least license producing other fighters. Though I think that we should participate in Russian MS-21 and also the wide body follow on.\n4. Building on the IL-214, the MTA was on a more sure footing. But, I can't see how the first prototype can to take to the sky before 2019(more than 10 years since MTAL was formed)! If the transport plane materializes, then one can imagine making a civilian 150-200 seater version of the same. Though I think that we should participate in Russian MS-21 and also the wide body follow on. But this program needs a push. Will Putin's visit be able to galvanize this into the next symbol of Indo-Russian cooperation. Probably not!\nAbsence of any specifics on Sukhoi Superjet, MS-21, Wide body aircraft, Mi-38, MRTA, FGFA, even after Putin visit is very disappointing.\nFlightGlobal- Boeing sitting on 8 unsold C-17s\nBy: Dan ParsonsWashington DCSource: Flightglobal.com\nThis story is sourced from Flightglobal.com 12 hours agoBoeing has sold two more C-17 transports to an undisclosed customer, but it will likely end the year with eight unsold white tails.\nThere are 10 Boeing C-17 airlifters in various stages of assembly at the company’s Long Beach, California, production facility.\nTwo of the aircraft are spoken for by an unnamed customer, Boeing says. 
Boeing is trying to sell off the other eight white tails, which will be the last produced before the factory is shuttered sometime in the summer of 2015.\nThe 279th – and final – C-17 fuselage will be mated to its wings in January or February, programme spokeswoman Tiffany Pitts tells Flightglobal. The operation is California’s last remaining aircraft production line and the lone widebody military aircraft production line in the USA, according to Boeing.\nAt least two countries – Australia and Canada – have publicly announced an intention to purchase a C-17, though neither factor into Boeing’s future planning, Pitts says. Until contracts are finalised, the number available remains eight, she says. The Royal Canadian Air Force already has four C-17As, according to Flightglobal’s World Air Forces 2014 directory.\nCanadian news outlets reported earlier in December that the air force would buy one C-17 with money left over at the end of 2015.\nAustralia is further along with its bid to purchase C-17s. The US Defense Security Cooperation Agency in November announced Australia was approved to buy up to four C-17s and support equipment for $1.6 billion.\nBoeing has plans to store any unsold C-17s following closure of its production line, Pitts says.\n“I’m hoping they all will be sold before then, but we’ve had plans in place for a very long time to store and maintain the aircraft if that doesn’t happen,” she says.\nthe IAF will need to factor in the demand vs availability of C-17s and stock up with a follow-on order quickly. The initial plan to have 16 C-17s may not fructify, considering that there are just 8 left now, with Australia having announced plans to buy 4 more.\nwhy are they closing the line if it has demands ???\nReal estate sales tactics probably. Buy now last 8 3bhk flats Saar.\nkrishnan wrote: why are they closing the line if it has demands ???\nIt requires 3 years lead time to order raw materials/parts from all of its sub-vendors. All current firm orders have been fulfilled, and no new orders have come. Anticipating a need for a few more aircrafts, they produced 10 extra (self-funded) units before production winded down. Bottom line is they don't make money keeping an idle plant around with all its employees and infrastructure. At most what they will likely do is keep a limited infrastructure around for a few more years in case a bunch of new orders come. They can then see if it makes business sense to re-open the plant.\nPostby Aditya_V » 17 Dec 2014 12:19\nWish this can be brought to the notice of Journos/ Poster when slamming LCA/ Arjun and other indigenous projects. If there are no orders there will be no efficiency.\nDec 10, 2014 :: Russia launches Il-76MDM upgrade programme\nRussia's Ilyushin has started to upgrade a first Russian Air Force (VVS) Ilyushin Il-76MD 'Candid' military transport aircraft to Il-76MDM standard, company officials have told IHS Jane's . The main features of the upgrade include refurbished engines and upgraded avionics.\nThe modernisation is being conducted at the VVS's Military Transport Aviation (MTA) maintenance facility based at the Ilyushin division in Zhukovsky city near Moscow.\nA senior Ilyushin official told IHS Jane's that the upgrade of the first aircraft will be finished in 18 months. Subsequent aircraft will take less time to complete the process, however. When the modernisation is finished the initial Il-76MDM will undergo state trials. 
The upgrade process for subsequent aircraft will begin when the trials programme is completed.\nIHS Jane's was previously told by a VVS senior official that the modernisation of 41 MTA Il-76MDs is planned by 2020. While the Il-76MDM upgrade retains the old D-30KP engine (compared with the PS-90A engine equipping the new Il-76MD-90A/Il-476), the modernisation effort should match the aircraft's onboard electronics with those of the newbuild Il-76MD-90A. This and other efforts mean the cost of modernising the Il-76MD to Il-76MDM is only a third of that of a newbuild Il-76MD-90A.\nThe existing D-30KP engines are to be enhanced to increase their service life. The overall aircraft's service life will be extended by 15 years.\nThe upgrade works are planned to be conducted in an aviation repair factory or in the MTA's aircraft maintenance facility. As a result, the Ulyanovsk-based Aviastar-SP plant, which is building the Il-76MD-90A, is not involved in the Il-76MD to Il-76MDM modernisation programme.\nAnn's Mega Dub: 12/19/10 - 12/26/10\nGot to have a penis to be an expert\nThursday on NPR's Fresh Air, Terry Gross wanted to talk film and music. Since women don't know a thing about either and aren't interested in either, Terry had to find men who were 'experts.' This is C.I.'s \"Iraq snapshot\" Friday, December 24, 2010. Chaos and violence continue, Nouri's incomplete Cabinet continues to receive criticism, a father offers an 'excuse' for killing his own daughter, and more. Marci Stone (US Headlines Examiner) reports, \"Friday afternoon, Santa is currently in Baghdad, Iraq and on his next stop is Moscow, Russia, according to the 2010 NORAD Santa Tracker. The North American Aerospace Defense Command (NORAD) has been tracking Santa as he makes his annual journey throughout the world.\" Gerald Skoning (Palm Beach Post) quotes Santa saying, \"We send our special wishes for peace and goodwill to all. That includes the people of Iraq, Afghanistan, Iran and North Korea.\" Please note that this is Santa's seventh trip to Iraq since the start of the Iraq War and, as usual, his journey was known in advance. No waiting until he hit the ground to announce he was going to Iraq -- the way George The Bully Boy Bush had to and the way US President Barack Obama still has to. In the lead up to Santa's yearly visit, many 'authorities' in Iraq began insisting that Christmas couldn't be celebrated publicly, that even Santa was banned. Gabriel Gatehouse (BBC News) quotes Shemmi Hanna stating, \"I wasn't hurt but I wish that I had been killed. I wish I had become a martyr for this church, but God kept me alive for my daughters.\" Shemmi Hanna was in Our Lady of Salvation Church in Baghdad when it was assaulted October 31st and she lost her husband, her son, her daughter-in-law and her infant grandson in the attack. The October 31st attack marks the latest wave of violence targeting Iraqi Christians. The violence has led many to flee to northern Iraq (KRG) or to other countries.
Zvi Bar'el (Haaretz) notes, \"This week the Iraqi legislature discussed the Christians' situation and passed a resolution in principle to help families who fled. However, the parliament does not know where the Christians are, how many are still in Iraq, in their homes, and how many have found asylum in Iraqi Kurdistan.\" John Leland (New York Times) reports:The congregants on Friday night were fewer than 100, in a sanctuary built for four or five times as many. But they were determined. This year, even more than in the past, Iraqi's dwindling Christian minority had reasons to stay home for Christmas. \"Yes, we are threatened, but we will not stop praying,\" the Rev. Meyassr al-Qaspotros told the Christmas Eve crowd at the Sacred Church of Jesus, a Chaldean Catholic church. \"We do not want to leave the country because we will leave an empty space.\" Raheem Salman (Los Angeles Times) reports, \"Rimon Metti's family will go to Christian services on Christmas Day, but his relatives will be praying for their own survival and wondering whether this is their last holiday season in Baghdad. If they had any grounds for optimism about the future of their faith in Iraq, it vanished this year amid repeated attacks on fellow believers.\" Shahsank Bengali (McClatchy Newspapers) adds, \"Nearly two months after a shocking assault by Islamist militants, Our Lady of Salvation Catholic Church will commemorate Christmas quietly, with daytime mass and prayers for the dead, under security fit more for a prison than a house of worship. It is the same at Christian churches across Baghdad and northern Iraq, where what's left of one of the world's oldest Christian communities prepares to mark perhaps the most somber Christmas since the start of the Iraq war.\"Meanwhile Taylor Luck (Jordan Times) reports on Iraqi refugees in Jordan:Although the calendar will say December 25, for Theresa, Saturday will not be Christmas. There will be no cinnamon klecha cooling on the dining room table, no outdoor ceramic nativity scene, no readings of hymns with relatives. The 63-year-old Iraqi woman has even refused to put up Christmas lights in the crowded two-room Amman hotel apartment she has called home since fleeing Baghdad last month.\"There is no holiday spirit. All we have is fear,\" she said.This holiday will instead mark another year without news from her 46-year-old son, who was kidnapped outside Baghdad in late 2006.From Turkey, Sebnem Arsu (New York Times -- link has text and video) notes the increase in Iraq refugees to the country since October 31st and quotes Father Emlek stating, \"I've never seen as many people coming here as I have in the last few weeks. They also go to Lebanon, Jordan and Syria but it seems that Turkey is the most popular despite the fact that they do not speak the language.\" Jeff Karoub (AP) reports on the small number of Iraqi refugees who have made it to the US and how some of them \"struggle with insomnia, depression and anxiety.\"One group in Iraq who can openly celebrate Christmas are US service members who elect to. Barbara Surk (AP) reports that tomorrow Chief Warrant Officer Archie Morgan will celebrate his fourth Christmas in Iraq and Captain Diana Crane is celebrating her second Christmas in Iraq: \"Crane was among several dozen troops attending a Christmas Eve mass in a chapel in Camp Victory, an American military base just outside Baghdad.\" Marc Hansen (Des Moines Reigster) speaks with six service members from Iowa who are stationed in Iraq. 
Sgt 1st Class Dennis Crosser tells Hansen, \"I certainly understand from reading the paper what's going on in Afghanistan and the attention definitely needs to be on the troops there. But everyone serving here in Operation New Dawn appreciates a little bit of attention as we finish this up.\"Today Jiang Yu, China's Foreign Minister, issued the following statement, \"We welcome and congratulate Iraq on forming a new government. We hope that the Iraqi Government unite all its people, stabilize the security situation, accelerate economic reconstruction and make new progress in building its country.\" James Cogan (WSWS) reports:US State Department official Philip Crowley declared on Wednesday that Washington had not \"dictated the terms of the government\". In reality, constant American pressure was applied to Maliki, Allawi, Kurdish leaders and other prominent Iraqi politicians throughout the entire nine-month process to form a cabinet. The US intervention included numerous personal phone calls and visits to Baghdad by both President Barack Obama and Vice President Joe Biden.The key objective of the Obama administration has been to ensure that the next Iraqi government will \"request\" a long-term military partnership with the US when the current Status of Forces Agreement (SOFA) expires at the end of 2011. The SOFA is the legal basis upon which some 50,000 American troops remain in Iraq, operating from large strategic air bases such as Balad and Tallil and Al Asad. US imperialism spent billions of dollars establishing these advanced bases as part of its wider strategic plans and has no intention of abandoning them.Cogan's only the second person to include the SOFA in his report. Some are impressed with the 'feat' of taking nearly ten months to form a government, stringing the country along for ten months while no decisions could go through. The editorial board of the Washington Post, for example, was full of praise yesterday. Today they're joined by Iran's Ambassador to Iraq, Hassan Danaiifar. The Tehran Times reports that Danaiifar was full of praise today hailing the \"positive and final step which ended the 10-month political limbo in Iraq.\" However, Danaiifar was less pie-in-the-sky than the Post editorial board because he can foresee future problems as evidenced by his statement, \"We may witness the emergence of some problems after one and half of a year -- for example, some ministers may be impeached.\" Of course, there are already many clouds on the horizon, even if Iranian diplomats and Post editorial boards can't suss them out. For example, Ben Bendig (Epoch Times) noted the objection of Iraq's female politicians to Nouri al-Maliki's decision to nominate only one woman (so far) to his Cabinet: \"Some 50 female lawmakers went to the country's top leadership, the United Nations and the Arab League to voice their concern and desire for increased representation.\" BNO notes that protest and also that a group of Iraqi MPs are alleging that Iraqiya bought seats in the Cabinet via money exchanged in Jordan. UPI adds, \"Maliki, a Shiite who has a long history of working with Tehran, has named himself acting minister of defense, interior and national security, three most powerful and sensitive posts in the government he is stitching together. 
Although Maliki appears to be bending over backward to accommodate rivals among Iraq's Shiite majority as well as minority Sunnis and Kurds in his administration in a spirit of reconciliation, he is unlikely to relinquish those ministries that dominate the security sector.\" DPA reports, \"Sheikh Abdel-Mahdi al-Karbalaei, a confident of influential Shiite spiritual leader Ayatollah Ali al-Sistani, said that the new cabinet is 'below the standards' Iraqi citizens had hoped for and suggested it could prove to be weaker than the previous government.\" Ranj Alaaldin (Guardian) also spots clouds on the horizon:Lasting peace and stability depends on resolving outstanding disputes with the Kurds on oil, revenue-sharing, security and the disputed territories (Kirkuk in particular). The Kurds, rather than exploiting their kingmaker position to take a stronger proportion of ministries in Baghdad (they are taking just one major portfolio – the foreign ministry), are instead banking on guarantees from Maliki to implement their list of 19 demands that includes resolving the above disputes in their favour.They may have been naive, though. With their historical and federalist partners, the Islamic supreme council of Iraq in decline, the Kurds may be isolated in the new government – a government dominated by the nationalistic and centrist characteristics of the INM, the Sadrists and indeed State of Law.Maliki may, therefore, turn out to be unable to grant concessions even if he wanted to and could use Osama Nujayfi, the new ultra-nationalist speaker of parliament and Kurdish foe, to absorb the Kurdish criticism and insulate himself from any attacks.AP reports that Iraqi police sought out a 19-year-old woman because of rumors that she was working with al Qaida in Mesopotamia only to be greeted with the news that her father allegedly killed her and the father showed the police where he buried the woman . . . last month. The story begs for more than it offers. The most obvious observation is: what does it say that a woman's allegedly killed by her father and no one says a word for over a month? After that, it should probably be noted that there are many men in Iraq killing women who, no doubt, would love to also be able to pin the blame on al Qaida. In other violence, Reuters notes a house bombing in Haswa which claimed the life of Mohammed al-Karrafi, \"his wife, two sons and a nephew\" -- as well as injuring four more people, and a Samarra roadside bombing which claimed the lives of 2 police officers. DPA notes it was two homes bombed in Haswa and that the Samarra roadside bombing also injured four Iraqi soldiers. Jomana Karadsheh (CNN) reports, \"Another policeman was wounded in Baghdad Friday night when a roadside bomb detonated by a police patrol, an Interior Ministry official told CNN.\"And we'll close with this from Peace Mom Cindy Sheehan's latest Al Jazeera column:The recent repeal of the US military policy of \"Don't ask, don't tell\" is far from being the human rights advancement some are touting it to be. 
I find it intellectually dishonest, in fact, illogical on any level to associate human rights with any military, let alone one that is currently dehumanising two populations as well as numerous other victims of it's clandestine \"security\" policies.Placing this major contention aside, the enactment of the bill might be an institutional step forward in the fight for \"equality\"; however institutions rarely reflect reality.Do we really think that the US congress vote to repeal the act and Obama signing the bill is going to stop the current systemic harassment of gays in the military?While I am a staunch advocate for equality of marriage and same-sex partnership, I cannot - as a peace activist - rejoice in the fact that now homosexuals can openly serve next to heterosexuals in one of the least socially responsible organisations that currently exists on earth: The US military.It is an organisation tainted with a history of intolerance towards anyone who isn't a Caucasian male from the Mid-West. Even then I'm sure plenty fitting that description have faced the terror and torment enshrined into an institution that transforms the pride and enthusiasm of youth into a narrow zeal for dominating power relations.And we'll close with this from Francis A. Boyle's \"2011: Prospects for Humanity?\" (Global Research):Historically, this latest eruption of American militarism at the start of the 21st Century is akin to that of America opening the 20th Century by means of the U.S.-instigated Spanish-American War in 1898. Then the Republican administration of President William McKinley stole their colonial empire from Spain in Cuba, Puerto Rico, Guam, and the Philippines; inflicted a near genocidal war against the Filipino people; while at the same time illegally annexing the Kingdom of Hawaii and subjecting the Native Hawaiian people (who call themselves the Kanaka Maoli) to near genocidal conditions. Additionally, McKinley's military and colonial expansion into the Pacific was also designed to secure America's economic exploitation of China pursuant to the euphemistic rubric of the \"open door\" policy. But over the next four decades America's aggressive presence, policies, and practices in the \"Pacific\" would ineluctably pave the way for Japan's attack at Pearl Harbor on Dec. 7, 194l, and thus America's precipitation into the ongoing Second World War. Today a century later the serial imperial aggressions launched and menaced by the Republican Bush Jr. administration and now the Democratic Obama administration are threatening to set off World War III. By shamelessly exploiting the terrible tragedy of 11 September 2001, the Bush Jr. administration set forth to steal a hydrocarbon empire from the Muslim states and peoples living in Central Asia and the Persian Gulf under the bogus pretexts of (1) fighting a war against international terrorism; and/or (2) eliminating weapons of mass destruction; and/or (3) the promotion of democracy; and/or (4) self-styled \"humanitarian intervention.\" Only this time the geopolitical stakes are infinitely greater than they were a century ago: control and domination of two-thirds of the world's hydrocarbon resources and thus the very fundament and energizer of the global economic system – oil and gas. The Bush Jr./ Obama administrations have already targeted the remaining hydrocarbon reserves of Africa, Latin America, and Southeast Asia for further conquest or domination, together with the strategic choke-points at sea and on land required for their transportation. 
In this regard, the Bush Jr. administration announced the establishment of the U.S. Pentagon's Africa Command (AFRICOM) in order to better control, dominate, and exploit both the natural resources and the variegated peoples of the continent of Africa, the very cradle of our human species. This current bout of U.S. imperialism is what Hans Morgenthau denominated \"unlimited imperialism\" in his seminal work Politics Among Nations (4th ed. 1968, at 52-53): The outstanding historic examples of unlimited imperialism are the expansionist policies of Alexander the Great, Rome, the Arabs in the seventh and eighth centuries, Napoleon I, and Hitler. They all have in common an urge toward expansion which knows no rational limits, feeds on its own successes and, if not stopped by a superior force, will go on to the confines of the political world. This urge will not be satisfied so long as there remains anywhere a possible object of domination--a politically organized group of men which by its very independence challenges the conqueror's lust for power. It is, as we shall see, exactly the lack of moderation, the aspiration to conquer all that lends itself to conquest, characteristic of unlimited imperialism, which in the past has been the undoing of the imperialistic policies of this kind…. On 10 November 1979 I visited with Hans Morgenthau at his home in Manhattan. It proved to be our last conversation before he died on 19 July 1980. Given his weakened physical but not mental condition and his serious heart problem, at the end of our necessarily abbreviated one-hour meeting I purposefully asked him what he thought about the future of international relations.\nTerry thinks she's a man\nYesterday on NPR's Fresh Air the hour went to a male TV critic. It's always a man with Terry. Always. And somebody tell her that a snotty, snooty TV critic really doesn't make for good programming. This is C.I.'s \"Iraq snapshot:\" Thursday, December 23, 2010. Chaos and violence continue, Iraqi women make clear their displeasure over the Cabinet makeup, Daniel Ellsberg and Veterans for Peace get some recognition, and more. Last Thursday a protest was held outside the White House. One of the organizers was Veterans for Peace and Pentagon Papers whistle blower Daniel Ellsberg participated and spoke. Juana Bordas (Washington Post) advocates for both of them to be named persons of the year: Veterans for Peace and Daniel Ellsberg should be this year's person of the year because of their courage and bravery to stand up for all of us who believe that \"war is not the answer.\" Moreover in a time of economic recession, the war machine is bankrupting our country. As John Amidon, a Marine Corps veteran from Albany asked at the White House protest, \"How is the war economy working for you?\" While unemployment rates hover near 10 percent, there is no doubt that the U.S. economy and quality of life is faltering. Worldwide we are 14th in education, 37th in the World Health Organization's ranking on medical systems, and 23rd in the U.N. Environmental Sustainability Index on being most livable and greenest benefits. There is one place we take the undeniable world lead. The US military spending accounts for a whopping 46.5 percent of world military spending--the next ten countries combined come in at only 20.7 percent.
Linda Pershing (Truthout) reports, \"Responding to a call from the leaders of Stop These Wars(1) - a new coalition of Veterans for Peace and other activists - participants came together in a large-scale performance of civil resistance. A group of veterans under the leadership of Veterans for Peace members Tarak Kauff, Will Covert and Elaine Brower, mother of a Marine who has served three tours of duty in Iraq, sponsored the event with the explicit purpose of putting their bodies on the line. Many participants were Vietnam War veterans; others ranged from Iraq and Afghanistan war veterans in their 20s and 30s to World War II vets in their 80s and older. They were predominately white; men outnumbered women by at least three to one. After a short rally in Lafayette Park, they formed a single-file procession, walking across Pennsylvania Avenue to the solemn beat of a drum. As they reached the police barricade (erected to prevent them from chaining themselves to the gate, a plan they announced on their web site), the activists stood shoulder to shoulder, their bodies forming a human link across the 'picture postcard' tableau in front of the White House.\" Maria Chutchian (Arlington Advocate) quotes, participant Nate Goldshlag (Vietnam veteran) stating, \"\"There was a silent, single file march around Lafayette Park to a drum beat. Then we went in front of the White House,. There were barricades set up in front of white house fence. So when we got there, we jumped over barricades and were able to get right next to the White House fence.\" Participant Linda LeTendre (Daily Gazette) reports: At the end of the rally, before the silent, solemn procession to the White House fence, in honor of those killed in Iraq and Afghan wars of lies and deceptions, the VFP played taps and folded an American flag that had been left behind at a recent funeral for the veteran of one of those wars. Two attendees in full dress uniform held and folded the flag. I had the image of all of the people who stood along the roads and bridges when the bodies of the two local men, Benjamin Osborn and David Miller, were returned to the Capital District. I thought if all of those people were here now or spoke out against war these two fine young men might still be with us.I was blessed enough to be held in custody with one of those in uniform; a wonderful young man who had to move from his hometown in Georgia because no one understood why as a veteran he was against these wars. Even his family did not understand. (He remains in my prayers.)Our plan was to attach ourselves to the White House fence until President Obama came out and talked to us or until we were arrested and dragged away. I don't have to tell you how it ended.Mr. Ellsberg was one of 139 people arrested at that action. We've noted the protest in pretty much every snapshot since last Thursday. If something else comes out that's worth noting on the protest, we'll include it. We will not include people who don't have their facts and it's really sad when they link to, for example, Guardian articles and the links don't even back them up. It's real sad, for example, when they're trashing Hillary (big strong men that they are) and ripping her apart and yet Barack? \"Obama's inaccurate statements\"??? What the hell is that? You're inferring he lied, say so. Don't be such a little chicken s**t. It's especially embarrasing when you're grandstanding on 'truth.' 
Especially when you're the little s**t that clogged up the public e-mail account here in the summer of 2008 whining that you were holding Barack to a standard, then admitting that you weren't, then whining that if you did people would be mean to you. Oh, that's sooooooo sad. Someone might say something bad about you. The horror. You must suffer more than all the people in Iraq and Afghanistan combined. While the action took place in DC, actions also took place in other cities. We've already noted NYC's action this week, Doug Kaufmann (Party for Socialism & Liberation) reports on the Los Angeles action: Despite heavy rain, over 100 people gathered in Los Angeles on the corner of Hollywood and Highland to demand an end to the U.S. wars on Afghanistan and Iraq. People came from as far as Riverside to protest, braving what Southern California media outlets have dubbed the \"storm of the decade.\" The demonstration, initiated and led by the ANSWER Coalition, broke the routine of holiday shopping and garnered support from activists and even passers by, who joined in chanting \"Money for jobs and education -- not for war and occupation!\" and \"Occupation is a crime -- Iraq, Afghanistan, Palestine!\" Protesters held banners reading, \"U.S./NATO Out of Afghanistan!\" and \"Yes to jobs, housing and education -- no to war, racism and occupation!\"Speakers at the demonstration included representatives of Korean Americans for Peace, ANSWER Coalition, KmB Pro-People Youth, Veterans for Peace, Party for Socialism and Liberation and National Lawyers Guild. Tuesday, Nouri al-Maliki managed to put away the political stalemate thanks to a lot of Scotch -- tape to hold the deal together and booze to keep your eyes so crossed you don't question how someone can claim to have formed a Cabinet when they've left over ten positions to be filled at a later date. One group speaking out is women. Bushra Juhi and Qassmi Abdul-Zahra (AP) report, \"Iraq's female lawmakers are furious that only one member of the country's new Cabinet is a woman and are demanding better representation in a government that otherwise has been praised by the international community for bringing together the country's religious sects and political parties.\" As noted Tuesday, though represenation in Parliament is addressed in Iraq's Constitution, there is nothing to address women serving in the Cabinet. Aseel Kami (Reuters) notes one of the most damning aspects of Nouri's chosen men -- a man is heaing the Ministry of Women's Affairs. Iraqiya's spokesperson Maysoon Damluji states, \"There are really good women who could do wel . . . they cannot be neglected and marginalized.\" Al-Amal's Hanaa Edwar states, \"They call it a national (power) sharing government. So where is the sharing? Do they want to take us back to the era of the harem? Do they want to take us back to the dark ages, when women were used only for pleasure.\" Deborah Amos (NPR's All Things Considered) reports that a struggle is going on between secular impulses and fundamentalist ones. Gallery owner Qasim Sabti states, \"We know it's fighting between the religious foolish man and the civilization man. We know we are fighting like Gandhi, and this is a new language in Iraqi life. We have no guns. We do not believe in this kind of fighting.\" Deborah Amos is the author of Eclipse of the Sunnis: Power, Exile, and Upheaval in the Middle East. 
Meanwhile Nizar Latif (The National) reports that distrust is a common reaction to the new government in Baghdad and quotes high school teacher Hussein Abed Mohammad stating, \"Promises were made that trustworthy, competent people would be ministers this time around, but it looks as if everything has just been divided out according to sectarian interests. No attention has been paid to forming a functioning government, it is just a political settlement of vested interests. I'm sure al Maliki will have the same problems in his next four years as he had in the last four years.\" Days away from the ten-month mark, Nouri managed to finally end the stalemate. Some try to make sense of it and that must have been some office party that the editorial board of the Washington Post is still coming down from judging by \"A good year in Iraq.\" First up, meet the new Iraqi Body Count -- an organization that provides cover for the war and allows supporters of the illegal war to point to it and insist/slur \"Things aren't so bad!\" Sure enough, the editorial board of the Post does just that noting the laughable \"civilian deaths\" count at iCasualties. As we noted -- long, long before we walked away from that crap ass website, they're not doing a civilian count.\nA Brief History of Benjamin Franklin's Residences on Craven Street, London: 1757 - 1775 - Journal of the American Revolution\nBenjamin Franklin House, 36 Craven St, London. (Photo by Elliott Brown | Wikimedia Commons)\nIf one looked into Benjamin Franklin's time on Craven Street, they might initially believe he lived at 36 Craven Street the entirety of his two stays in London based on the plethora of articles on the internet that say so. If they dug a little deeper they might read that he lived at No. 27 Craven Street, previously numbered 7, but now numbered 36; or that he lived exclusively at No. 7 Craven Street; or that he lived in multiple residences on Craven Street; or that he moved out of No. 36 to another house on Craven Street and then moved back into No. 36 the last year of his residence. What is one to believe with all of the conflicting accounts? What does the historical record have to say about Franklin's time on Craven Street?\nFigure 1. Spur Alley 1685. “A map of the parish of St Martins in the Fields, taken from ye last survey, with additions (1685)”. (© The British Library Board, Shelfmark: Maps Crace Port. 13.2, Item number: 2)\nBefore Craven Street existed there was Spur Alley, a narrow passageway sandwiched between the Hungerford Market to the north (now Charing Cross Station) and Scotland Yard and the Northumberland House and Garden to the south. It was flanked on both ends by major thoroughfares, the Strand on the west, connecting Westminster to London by road, and the River Thames on the east, not only connecting the two cities to each other and to Southwark on the south side of the Thames, but connecting the entire metropolis to the rest of the world. Being located in the City of Westminster, Spur Alley had escaped the devastation of the Great Fire of London in 1666 leaving its wooden structures, built in the early part of the seventeenth century, intact, but also in dire need of restoration or demolition.
“The ratebooks show that during the last thirty years or so of their existence the houses in Spur Alley were in a very bad condition. Few of them were rated at more than a few shillings and many of them were unoccupied.”[1] The landowner, William, 5th Baron Craven, desiring to increase the profitability of his assets, tore down the derelict structures on Spur Alley around 1730 and leased the newly established lots to builders. By 1735, twenty brick houses in the Georgian style had been built on the west side and sixteen on the east side of the way now called Craven Street.[2]\nFigure 2. Craven Street 1746. (John Rocque London, Westminster and Southwark, First Edition 1746, Motco Enterprises Limited, motco.com)\nLetters to Franklin during his residence with Mrs. Margaret Stevenson, his landlady on Craven Street, were addressed rather vaguely; “Craven Street/Strand”, “Mrs. Stevensons in Craven Street”, or “Benjamin Franklin Esqr.” are but a few examples. Letters from Franklin referenced “London,” or sometimes “Cravenstreet,” but never included a number. Despite the absence of numbered addresses in Franklin’s correspondence, there was a sense of one’s place in the neighborhood based on entries in the Westminster Rate Books (tax assessments). The Rate Books did not list house numbers during Franklin’s time there, but they did list the residents of Craven Street in a particular order that became the default numbering system for the street. Number one was associated with the first resident listed under “Craven Street” in the Rate Books and was the northernmost house on the west side of the street. The numbers increased counter-clockwise down the west side and up the east side in accordance with the list of residents. In 1748, the first year of Margaret Stevenson’s (Stevens in the Rate Books for that year) residence on Craven Street, she is listed as the twenty-seventh resident, the second house north of Court Street (later Craven Court, now Craven Passage) on the east side of the street.[3]\nIn 1766, Parliament passed the London Paving and Lighting Act (6 Geo. 3 c. 26), “An act for the better paving, cleansing, and enlightening, the city of London, and the liberties thereof; and for preventing obstructions and annoyances within the same; and for other purposes therein mentioned.”[4] One of the other purposes therein mentioned was the numbering of houses. With an aim to bring order to the chaotic numbering systems or lack thereof on London streets the Act provided that “… the said commissioners … may also cause every house, shop, or warehouse, in each of the said streets, lanes, squares, yards, courts, alleys, passages, and places, to be marked or numbered, in such manner as they shall judge most proper for distinguishing the same.”[5] This was quite an undertaking that took years to accomplish. It was a decade later before numbered addresses on Craven Street in the City of Westminster appeared in The London Directory (1776). The London Directory and its competitors were published primarily by booksellers or printers to supplement their income and were highly profitable. To say they were competitive is an understatement. “Some of the most hotly disputed struggles over copyright in the century concerned guidebooks. Many were optimistically emblazoned with a royal license and a notice that the work had been entered at Stationers’ Hall. 
Various struggles between rival guides intensified as the potential for profits became clear.”[6] The London Directory boldly proclaimed to contain “An ALPHABETICAL LIST OF THE NAMES and PLACES of ABODE of the MERCHANTS and PRINCIPAL TRADERS of the Cities of LONDON and WESTMINSTER, the Borough of SOUTHWARK, and their Environs, with the Number affixed to each House.”[7] Kent’s Directory made a similar proclamation: “An Alphabetical LIST OF THE Names and Places of Abode OF THE DIRECTORS of COMPANIES, Persons in Public Business, MERCHANTS, and other eminent TRADERS in the Cities of London and Westminster, and Borough of Southwark WITH THE NUMBERS as they are affixed to their Houses agreeable to the late Acts of Parliament.”[8] Mrs. Stevenson wasn’t included in the directories because she didn’t meet the criteria of being a merchant or trader, not because she was a woman. Although it is rare to see women listed in the directories, some examples do exist.[9] If Mrs. Stevenson had appeared in the directories in 1776 it would not have been on Craven Street as she had moved to Northumberland Court, a stone’s throw away, the previous year.[10] A comparison of Craven Street residents whose names and addresses do appear in the directories with the same residents as they appear in the Westminster Rate Books determines if the numbering systems were congruent. For the most part they were. For example, Joseph Bond at No. 30, William Rowles at No. 31, Samuel Sneyd at No. 32, and Jonathan Michie at No. 35 in The London Directory coincide with their places of residence in the Westminster Rate Books; however, errors did occur. The 1776 edition of The London Directory lists Brown & Whiteford, wine merchants, at No. 9 Craven Street while the Westminster Rate Books list them as the twenty-ninth residents. Obviously, it makes no sense to have Brown & Whiteford at No. 9 in The London Directory and their next-door neighbor, Joseph Bond, at No. 30. The same error appears in Baldwin’s The New Complete Guide for 1783. The New Complete Guide may have “borrowed” the error from The London Directory. It was not uncommon for the owner of one directory to copy entries from another to save both time and money. Beginning in 1778 and contrary to The London Directory, Kent’s Directory faithfully followed the numbering system of the Westminster Rate Books in all of its editions and listed Brown & Whiteford at No. 29 as did Bailey’s Northern Directory in 1781. Perhaps realizing their error, The London Directory changed their listing of Brown & Whiteford from No. 9 to No. 29 in their 1783 edition and maintained that listing thereafter.\nSometime prior to 1792, the embankment on the Thames at the south end of Craven Street had been sufficiently extended allowing for the construction of ten new houses below the original houses: “ … four houses, Nos. 21–24, were built on the west side, and six houses, Nos. 25–30, on the east side of the way.”[11] In a note in the same report, the new numbering system is explained. “The houses in the street, which had previously been numbered consecutively down the west side and up the east side, were then renumbered on the same system to include the additional houses.”[12] Because the new houses (21-24) on the west side were built below the existing houses (1-20), houses 1-20 retained their original numbering.\nFigure 4. Craven Street 1799. 
(Richard Horwood’s Map of London, Westminster and the Borough of Southwark 1799, Motco Enterprises Limited, motco.com)\nOne would think that the numbers of the sixteen original houses on the east side, Nos. 21 – 36, would simply increase by ten with the addition of the ten new houses, but such was not the case; they increased by nine. How could that be? The only possible explanation is that No. 21 of the original houses was demolished to make way for the construction of the northernmost of the six new houses on the east side (No. 30). Evidence of No. 21’s demolition appears in the lease granted to Charles Owen by William, 7th Baron Craven, in 1792, which describes No. 22 as: “All that messuage in Craven Street late in the occupation of Francis Deschamps undertaker … being the Southernmost house in the Old Buildings on the East Side of the said Street numbered with the No. 22.”[13] The lease describes No. 22 as being the southernmost house in the old buildings on the east side of Craven Street. Clearly the house previously at No. 21 did not exist when the lease granted to Charles Owen was written in 1792 as it used to be the southernmost house. It is also worth noting that in 1790, The London Directory listed Jacob Life at No. 21 (original numbering). In 1791-2, it listed him at No. 6. With No. 21 vacated, it would allow for its demolition and the construction of the tenth new house. By utilizing lot No. 21 for the new construction, only nine additional lots were needed to build the ten houses, hence, Margaret Stevenson’s former residence at 27 became 36 (27 + 9) in the renumbering and not 37.\nFor nearly a century and a half after Franklin departed London for America in March of 1775 the scales were tipped heavily in favor of his residence having been No. 7 Craven Street. As early as 1807 in London; Being An Accurate History And Description Of The British Metropolis And Its Neighborhood, Volume 4, one would have read: “In Craven Street is a house, No. 7, remarkable for having been the residence of Dr. Benjamin Franklin.[14] In 1815, the identical phrase appeared in The Beauties of England and Wales.[15] After 23 editions of not mentioning Franklin, his name finally appeared in the 24th edition of The Picture of London in 1826: “The house, No. 7, Craven Street, in the Strand, was once the residence of Dr. Benjamin Franklin.”[16] In 1840, Jared Sparks referred to Franklin’s Craven Street residence appearing in London guide books in his voluminous The Works of Benjamin Franklin: “In the London Guide Books, ‘No. 7, Craven Street,’ is still indicated as the house in which Dr. Franklin resided.”[17] In 1846, George Gulliver F.R.S., in his book, The Works of William Hewson, wrote: “She [Polly] had been upon terms of the warmest friendship with Dr. Franklin\nFigure 5. No. 7 Craven Street with Memorial Tablet. (Photo courtesy of British History Online, and the Survey of London)\nsince she was eighteen years of age. That eminent philosopher resided with her mother, Mrs. Margaret Stevenson, at No. 7, Craven Street, Strand, during the fifteen years of his abode in London.”[18] Guide books mentioning Franklin at No. 
7 continued to proliferate throughout the century: Handbook for London; Past and Present, Volume I (1849);”[19] Handbook for Modern London (1851);”[20] The Town; Its Memorable Characters and Events (1859);”[21] London and Its Environs (1879).[22] There was an anomaly when London In 1880 Illustrated With Bird’s-Eye Views of the Principal Streets, Sixth Edition (1880) placed Franklin at 27 Craven street.[23] The anomaly lasted for six years until his place of residence was changed to No. 7 in the revised edition, London. Illustrated by Eighteen Bird’s-Eye Views of the Principal Streets (1886).[24] London Past and Present; Its History, Associations, and Traditions, Volume 1 (1891), copied the 1849 Handbook for London almost word-for-word and included, “The house is on the right from the Strand.”[25] In October of 1867, The Society of Arts in London declared that: “In order to show how rich the metropolis is in the memory of important personages and events, which it would be desirable to mark by means of tablets on houses, the Council have caused an alphabetical list to be prepared, … ”[26] Franklin had been elected a corresponding member to the Society in 1756 and was a popular choice among Council members deciding who they were to memorialize.[27] By January of 1870, a tablet honoring him was affixed to the house they believed to have been his residence while in London, No. 7 Craven Street in the Strand on the west side of the street.[28] A majority of historians writing about Franklin in the nineteenth and early twentieth century placed him at No. 7: O. L. Holley, The Life of Benjamin Franklin (1848); E. M. Tomkinson, Benjamin Franklin (1885); John Torrey Morse, Benjamin Franklin (1891); Paul Elmer More, Benjamin Franklin (1900); John S. C. Abbot, Benjamin Franklin (1903); Sydney George Fisher, The True Benjamin Franklin (1903). A notable exception is D. H. Montgomery’s His Life Written by Himself published in 1896. He has Franklin at No. 27 Craven Street. It seems then that depending upon the source, Franklin was thought to have lived at either No. 7 or No. 27, but not both, the overwhelming majority favoring No. 7. As late as 2011, Franklin is still mentioned as living at No. 7.[29]\nIn 1913, No. 7 was scheduled to be torn down. An article in the March 1914 edition of The Book News Monthly, describes the situation:\nAs is well known to informed American pilgrims, it has been possible for all admirers of the famous philosopher and statesman to pay their respects to his memory before that house, No. 7 Craven Street, just off the Strand, which was his chief home during his two sojourns in the British capital, but even as these lines are being written the London newspapers are recording that that interesting shrine is soon to be pulled down to make room for a restaurant. It is some mitigation of this misfortune to remember that at the most the Craven Street house was nothing more than a reproduction of the one in which Franklin had his suite of four rooms, for the structure has been rebuilt since Franklin’s time. When, then, some one makes a piteous plea that at least the philosopher’s bedroom shall be preserved, the soothing answer is that the apartment in question is only a replica of that in which the illustrious American enjoyed his well-earned slumbers in 1757-62 and 1764-75. 
The restaurant-builder, however, with an eye doubtless to possible American patronage, has assured the world that every effort will be made to preserve as much as possible of the entire structure.[30]\nConcerned with the possible demolition of Franklin’s residence, the Royal Society of Arts (formerly the Society of Arts[31]) initiated an inquiry into the matter.[32] The London County Council, having taken over the responsibility of placing memorial tablets on notable houses from the Royal Society, was charged with the investigation. It ultimately fell to Sir George Laurence Gomme, a clerk to the Council, to come up with a response. A few years earlier Sir George had discovered Margaret Stevenson residing at No. 27 Craven Street in the Westminster Rate Books. He must have wondered why No. 7 on the west side of Craven Street was being celebrated as Franklin’s residence when the evidence clearly showed otherwise.\nSir George and his staff examined the various London directories discussed earlier and came up with a novel explanation for the discrepancy. They concluded that there had been two numbering systems on Craven Street. An anonymous author echoes Sir George’s conclusion about the two numbering systems in an article in The Journal of the Royal Society of Arts:\n…an inspection of the directories of that time proves that there were at least two systems of numbering in Craven Street before the erection of the additional houses. According to one of these the numbers started from the top (Strand end) on the west side of the street, and ran down to the bottom to No. 20, then crossed over and went back to the Strand along the east side – 21 to 36. According to the other system, the east side of the street was numbered from the bottom upwards, starting at No 1. This was not apparently in general use, but there is evidence that this numbering was at all events occasionally used.\nThe evidence of these two systems of numbering, and for believing that Mrs. Stevenson’s house was first No. 7 under the oldest system, next No. 27 under the second system, and finally No. 36 under the latest and existing system, is to be found in the various directories and the Westminster rate-books.[33]\nThe “evidence” mentioned above consisted of The London Directory’s listing of Brown & Whiteford at No. 9: “The rate-books for 1781 and 1786 show the house next but one to the north of Mrs. Stevenson’s house as in the occupation of Brown and ‘Whiteford,’ while the old directories mention the business of the firm as wine merchants, and give their address as 9, Craven Street – then a little later, down to 1791, as 29, Craven Street. Curiously enough, in the years 1778 to 1780, or 1781, Lowndes gives it as No. 9, and Kent as 29.”[34] Ignoring Kent’s Directory having Brown and Whiteford as 29 and The London Directory (Lowndes) having Brown and Whiteford “a little later” as 29, and knowing that Mrs. Stevenson lived two doors south of them, Sir George concluded that her house must have been numbered 7, even though there is no listing in any of the directories of her residence ever being No. 7. He surmised that the No. 7 on the west side of Craven Street with the memorial tablet thought to have been Franklin’s residence had simply been confused with number 7 (27) on the east side. Again from The Journal of the Royal Society of Arts:\nTaking all the evidence together, there cannot be any doubt whatever that Mrs. 
Stevenson’s house, in which Franklin lodged, was the house two doors north from Craven Court, first numbered 7, afterwards 27, and finally 36, and consequently that the house in which Franklin lived was that now numbered 36, not the one now numbered 7, on which the tablet is placed.[35]\nA response to The Royal Society of Arts was issued: “… the London County Council … informed the Society that it had made a mistake and that No. 36 Craven street was the building that deserved commemoration.”[36] The Society accepted the Council’s conclusion, and despite assurances of preservation by the restaurant builder, No. 7 was torn down the following year.\nSir George’s assertion “that Mrs. Stevenson’s house, in which Franklin lodged, was the house two doors north from Craven Court” was correct, however, his assertion that it was “first numbered 7, afterwards 27”, was not. It was only by association with the errant entry of Brown & Whiteford at No. 9 from 1776-1782 in The London Directory that Mrs. Stevenson’s address was conjured to be No. 7. The problem with associating her address exclusively with that of Brown & Whiteford at No. 9 during those years is that, as previously demonstrated, The London Directory also listed four other Craven Street residents, Bond, Rowles, Sneyd, and Michie, who’s addresses did conform to the numbering system in The Westminster Rate Books. If Brown & Whiteford at No. 9 was indicative of a numbering system different from The Westminster Rate Books, Bond, Rowles, Sneyd, and Michie would have been listed as Nos. 10, 11, 12, and 15, respectively. So on one hand Sir George was relying on the Westminster Rate Books to establish Mrs. Stevenson at No. 27 and on the other hand he was dismissing the Westminster Rate Books to establish her at No. 7. Instead of using the anomalous listing of Brown & Whiteford at No. 9, he could have just as easily, and more logically, used the Bond et al. listings, or the post-1782 Brown & Whiteford listing in the London Directory at No. 29 to establish Mrs. Stevenson at No. 27. Even if there had been two numbering systems, his assertion that No. 27 was first numbered 7 would still be false. The earliest numbering system was the Westminster Rate Books dating from the early 1730s when the houses were constructed. Brown & Whiteford at No. 9 didn’t appear until 46 years later and then only for a brief period.\nThere is ample evidence in Franklin’s correspondence and in a memoir by Polly Hewson (Mrs. Stevenson’s daughter) that Benjamin and Mrs. Stevenson lived in not one, but two houses on Craven Street. On July 6, 1772, Polly wrote to Benjamin from her house at Broad Street North in London: “My Mother I must tell you went off last friday week, took our little Boy with her and left Mr. Hewson [Polly’s husband, William] the care of her House [27 Craven Street]. The first thing he did was pulling down a part of it in order to turn it to his own purpose, and advantage we hope. This Demolition cannot affect you, who at present are not even a Lodger [Benjamin was traveling at the time], your litterary apartment remains untouch’d, the Door is lock’d …”[37] In a memoir about her husband written after his death Polly writes: “He [William Hewson] began his Lectures Sept. 30, 1772, in Craven-street, where he had built a Theatre adjoining a house which he intended for the future residence of his family.”[38] On October 7, 1772, Benjamin wrote to his son William: “I am very well. But we [Mrs. 
Stevenson and I] are moving to another House in the same street; and I go down tomorrow to Lord LeDespencer’s to [stay a] Week till things are settled.”[39] To his son-in-law, Richard Bache, on the same day he wrote: “We are moving to another House in the [street] leaving this to Mr. Hewson.”[40] Writing to a friend on October 30, 1772 he explained: “I should sooner have answered your Questions but that in the Confusion of my Papers, occasioned by removing to another House, I could not readily find the Memorandums …”[41] On November 4, 1772 Benjamin informed his wife Deborah of the move. “We are removed to a more convenient House in the same street, Mrs. Stevenson having accommodated her Son-in-Law with that we lived in. The Removing has been a troublesome Affair, but is now over.”[42]\nAn agreement had been struck between the parties. Margaret and Benjamin would move to another house on Craven Street and allow Polly and William to move into No. 27, the large yard behind the house being spacious enough to accommodate the anatomy school William wished to build.[43] Perhaps the idea was inspired by Margaret’s next-door neighbor at No. 26, Dr. John Leake, a man-midwife and founder of the Westminster Lying-in Hospital, who had built a theater adjoining his residence in which he practiced anatomy and taught midwifery.[44]\nAfter Margaret and Benjamin vacated No. 27, Polly, William, their son William Jr., and William’s younger sister, Dorothy Hewson, took up residence there.[45] In the 1773 Westminster Rate Books for Craven Street, Mrs. Stevenson’s (Stephenson in the Rate Books) name has been crossed out and replaced with “William Hewson.”[46] Further proof that the Hewsons had indeed moved into 27 Craven Street has been confirmed by the discovery of human and animal remains buried in the basement of No. 36 (formerly No. 27 and now the Benjamin Franklin House), a by-product of the dissections that took place at William’s anatomy school.[47]\nSo what house on Craven Street did Mrs. Stevenson and Benjamin move into after vacating No. 27? An examination of the Westminster Rate Books for the years 1774 and 1775 reveal them living not at No. 7 on the west side of Craven Street as one might expect from the overwhelming consensus of nineteenth century guidebooks and biographies, but surprisingly at No. 1.[48] The controversy of No. 7 being torn down was all for naught as it had never been Franklin’s residence. Sir George was correct on that point. Unfortunately, No. 1 was torn down as well in the early part of the twentieth century. The first time No. 1 is mentioned as Franklin’s second residence is in the Survey of London: Volume 18, St Martin-in-The-Fields II: the Strand published by the London County Council in 1937, ironically the same County Council that had declared No. 36 as Franklin’s only residence twenty-four years earlier.\nFrom 1748 until 1772 Margaret ‘Stephenson’ occupied this house [No. 27 (36)], and it was there that Benjamin Franklin settled after his arrival in London in 1757 as Agent to the General Assembly of Pennsylvania … In October, 1772, Mrs. Stevenson and Franklin removed to No. 1, Craven Street (now demolished), and No. 36 was for the next two years occupied by William Hewson, surgeon, who had married Mary Stevenson.[49]\nIn the spring of 1774, William Hewson died unexpectedly of septicemia two weeks after cutting himself while dissecting a cadaver. 
Polly was left to care for their two young sons and was pregnant with a daughter she would give birth to in August of the same year. Is it possible that Margaret and Benjamin moved back into No. 27 to assist Polly after the death of her husband as suggested in The Americanization of Benjamin Franklin?[50]\nIf the Westminster Rate Books are to be believed, the answer is no. For the year 1774, the Rate Books list Margaret Stevenson at No. 1 and William Hewson at No. 27. For the year 1775, they list Margaret Stevenson at No. 1 and Magnus Falkner (Falconer/Falconar) at No. 27. Magnus was William’s assistant at the anatomy school and fiancé to William’s sister, Dorothy. On his death bed, William instructed Polly, “let Mr. Falconar be my successor.”[51] Magnus would immediately take over the running of the anatomy school and continue William’s unfinished research. Four months later, he and Dorothy would marry.[52] Essentially only two things changed at 27 Craven Street after William’s death: Polly gave birth to her daughter, and Magnus replaced William as the lease holder, so even if Margaret and Benjamin had wished to move back into No. 27, there would have been no room for them. It is also interesting to note that considering the multiple times Benjamin wrote of his move out of No. 27 (and complained of it), he never once mentioned moving back into No. 27 in any of his correspondence after Mr. Hewson’s death.\nFigure 6. No. 36 Craven Street. (Photo courtesy of David Ross, britainexpress.com)\nIn sum, based on the Westminster Rate Books[53] and Franklin’s correspondence, Mrs. Stevenson is known to have resided at No. 27 (36) Craven Street from 1748 to 1772. It follows that, aside from the two years Franklin spent in Philadelphia from 1762 to 1764, he resided there from 1757 to 1772. Franklin’s correspondence also reveals that in the autumn of 1772, he and Mrs. Stevenson moved to another house on Craven Street. The 1773 Westminster Rate Books show her name crossed off at No. 27 and William Hewson’s inserted. The following year the Rate Books list her at No. 1 Craven Street. Evidence for Mrs. Stevenson and Benjamin remaining at No. 1 after William’s death appears in the Westminster Rate Books for 1775 which have Mrs. Stevenson still residing at No. 1 and Magnus Falkner residing at No. 27. Further evidence can be construed from the lack of any mention of a move back into No. 27 in Franklin’s correspondence. Despite the many theories one could devise as to why Franklin was thought to have lived at No. 7 Craven Street by so many guide books and Franklin biographers of the nineteenth century, one thing is certain; at some point after Franklin’s departure to America in March of 1775, and no later than 1807, someone mistakenly associated him with No. 7 on the west side of Craven Street, and it soon became his de facto residence. Credit must go to D. H. Montgomery in 1896 and Sir George in 1913 for setting the record partially straight by placing Franklin at No. 27(36). In 1937, the London County Council gave us the first accurate account of Franklin’s residences on Craven Street in the Survey of London at No. 27(36) and No. 1. It has been shown conclusively that No. 27 was never previously numbered 7. It was, however, renumbered 36 in 1792 after ten additional houses were built at the southern end of the street and remains No. 36 to this day.\n[1] “Craven Street and Hungerford Lane”, in Survey of London: Volume 18, St Martin-in-the-Fields II: the Strand, ed. 
G H Gater and E P Wheeler (London, 1937), 27-39, Early History of the Site.\nhttp://www.british-history.ac.uk/survey-london/vol18/pt2/pp27-39\n[2] “England, Westminster Rate Books, 1634-1900,” from database with images, Craven Street – 1735, FamilySearch from database by FindMyPast and images digitized by FamilySearch; citing Westminster City Archives, London.\n[3] Ibid., Craven Street – 1748.\n[4] The Statutes at Large, From Magna Charta to the End of the Eleventh Parliament of Great Britain. Anno 1761 Continued, Vol. XXVII, ed. Danby Pickering, (Cambridge, John Archdeacon, 1767), 96.\n[6] James Raven, Publishing Business in Eighteenth-Century England, (Woodbridge: The Boydell Press, 2014), 201.\n[7] The London Directory For the Year 1776, Ninth Edition, (London: T. Lowndes, 1776), title page.\n[8] Kent’s Directory For the Year 1778, Forty-Sixth Edition, (London: Richard and Henry Causton, 1778), title page.\n[9] A listing in Kent’s Directory for the Year 1882 on p. 28 reveals, “Brown Sarah, Leather-seller, 1, Westmoreland-buildings, Aldersgate-street”, and in Kent’s Directory for the Year 1883 on p. 175, “Whiteland Mary, Wine & Brandy Mercht. Jermyn-str. St. James.”\n[10] “The Papers of Benjamin Franklin,” Sponsored by The American Philosophical Society and Yale University, Digital Edition by The Packard Humanities Institute, 22:263a.\nhttp://franklinpapers.org/franklin\nMrs. Stevenson wrote to Benjamin Franklin a letter from her new home at 75 Northumberland Court on November 16, 1775: “In this Court I have a kind friend, Mr. Lechmoen he comes and seats with me and talks of you with a hiy regard and friendship.”\n[11] Survey of London, Early History of the Site.\n[12] Survey of London, Footnotes/n 10.\n[13] Survey of London, Historical Notes/No. 31.\n[14] David Hughson, LL.D., London; Being An Accurate History And Description Of The British Metropolis And Its Neighbourhood, To Thirty Miles Extent, From An Actual Perambulation, Vol. IV, (London: W. Stratford, 1807), 227.\n[15] The Reverend Joseph Nightingale, The Beauties of England and Wales: Or, Original Delineations, Topographical, Historical, and Descriptive, of Each County, Vol. X, Part III, Vol. II (London: J. Harris; Longman and Co.; J. Walker; R. Baldwin; Sherwood and Co.; J. and J. Cundee; B. and R. Crosby and Co.; J Cuthell; J. and J. Richardson; Cadell and Davies; C. and J. Rivington; and G. Cowie and Co., 1815), 245.\n[16] John Britton, F.S.A. & Co., ed., The Original Picture of London, Enlarged and Improved: Being A Correct Guide For The Stranger, As Well As For the Inhabitant, To The Metropolis Of The British Empire Together With A Description Of The Environs, The Twenty-Fourth Edition (London: Longman, Rees, Orme, Brown, and Green, 1826), 479.\n[17] Jared Sparks, The Works of Benjamin Franklin, Vol. VII, (Philadelphia: Childs & Peterson, 1840), 151.\n[18] George Gulliver, F.R.S., The Works of William Hewson, F. R. S., (London: Printed for the Sydenham Society, MDCCCXLVI), xx.\n[19] Peter Cunningham, Handbook for London; Past and Present, Vol. I, (London: John Murray, 1849), 245.\n[20] F. Saunders, Memories of the Great Metropolis: or, London, from the Tower to the Crystal Palace, (New York: G.P. Putnam, MDCCCLII), 138.\n[21] Leigh Hunt, The Town; Its Memorable Characters and Events, (London: Smith, Elder and Co., 1859), 185.\n[22] K. 
Baedeker, London and Its Environs, Including Excursions To Brighton, The Isle of Wight, Etc.: Handbook For Travelers, Second Edition, (London: Dulau and Co., 1879), 133.\n[23] Herbert Fry, London In 1880 Illustrated With Bird’s-Eye Views of the Principal Streets, Sixth Edition, (New York: Scribner, Welford, & Co., 1880), 50.\n[24] Herbert Fry, London. Illustrated By Eighteen Bird’s-Eye Views of the Principal Streets, (London: W. H. Allen and Co., 1886), 40.\n[25] Henry B. Wheatley, F.S.A., London Past and Present; Its History, Associations, and Traditions, Vol. 1, (London: John Murray, New York: Scribner & Welford, 1891), 473.\n[26] The Journal of the Society of Arts, Vol. XV, No. 778, (October 18, 1867): 717.\n[27] D. G. C. Allen, “Dear and Serviceable to Each Other: Benjamin Franklin and the Royal Society of Arts,” American Philosophical Society, Vol. 144, No. 3, (September 2000): 248-249.\nFranklin was a corresponding member in 1756 because he was still residing in Philadelphia. He became an active member the following year when he moved to London.\n[28] The Journal of the Society of Arts, Vol. XVIII, No. 894, (Jan. 7, 1870): 137.\n“Since the last announcement, the following tablets have been affixed on houses formerly occupied by – Benjamin Franklin, 7 Craven-street, Strand, W.C.”\n[29] Franklin in His Own Time, eds. Kevin J. Haytes and Isabelle Bour, (Iowa City, University of Iowa Press, 2011), xxxvii.\n“Takes lodgings with Margaret Stevenson at No. 7 Craven Street.” It is unknown if the editors are referring to No. 7 on the west side of Craven Street or No. 36 on the east side using Sir George’s explanation of No. 36 being previously numbered 7.\n[30] Henry C. Shelly, “American Shrines on English Soil, III. In the Footprints of Benjamin Franklin,” in The Book News Monthly, September, 1913 to August, 1914, (Philadelphia: John Wanamaker, 1914), 325.\n[31] The Journal of the Royal Society of Arts, Vol. LVI, No. 2,880, (Jan. 31, 1908): 245.\nhttp://babel.hathitrust.org/cgi/pt?id=mdp.39015058423073;view=1up;seq=251\n“His Majesty the King, who is Patron of the Society, has granted permission to the Society to prefix to its title the term ‘Royal,’ and the Society will consequently be known in future as the ‘Royal Society of Arts.’”\n[32] Nineteenth Annual Report, 1914, of the American Scenic and Historic Preservation Society, (Albany: J. B. Lyon Company, 1914), 293.\nhttp://babel.hathitrust.org/cgi/pt?id=wu.89072985302;view=1up;seq=4;size=150\n[33] The Journal of the Society of Arts, Vol. LXII, No. 3,183, (Nov. 21, 1913): 18.\nhttp://babel.hathitrust.org/cgi/pt?id=mdp.39015058422968;view=1up;seq=26\n[36] Allen, “Dear and Serviceable,” 263-264.\n[37] Papers of Benjamin Franklin, 19:20.\n[38] Thomas Joseph Pettigrew, F. L. S., Memoirs of the Life and Writings of the Late John Coakley Lettsom With a Selection From His Correspondence, Vol. I, (London: Nichols, Son, and Bentley, 1817), 144 of Correspondence.\n[39] Papers of Benjamin Franklin, 19:321b.\n[40] Ibid., 19:314.\n[41] Ibid., 19:353a.\n[43] Simon David John Chaplin, John Hunter and the ‘museum oeconomy’, 1750-1800, Department of History, King’s College London. 
Thesis submitted for the degree of Doctor of Philosophy of the University of London., 202.\n“Following Falconar’s death [1778] the lease [27 Craven Street] was advertised, and the buildings were described as:\nA genteel and commodious house, in good Repair, with Coach-house and Stabling for two Horses…consisting of two rooms and light closets on each floor, with outbuildings in the Yard, a Museum, a Compleat Theatre, and other conveniences. (Daily Advertiser, 27 August 1778)”\n[44] Simon Chaplin, “Dissection and Display in Eighteenth-Century London,” in Anatomical Dissection in Enlightenment England and Beyond: Autopsy, Pathology and Display, ed. Dr. Piers Mitchell, (Burlington: Ashgate Publishing Company, 2012), 108.\n“Given that a nearby building at 35 [ No. 26 in Franklin’s time] was occupied by the man-midwife John Leake, who advertised lectures – including lessons in the art of making preparations – at his ‘theatre’ between 1764 and 1788, it is possible that some facilities were shared. In both cases, however, the buildings [Leake’s residence at No. 26 and Hewson’s residence next door at 27] served a dual function as domestic accommodation and as sites for lecturing and dissection.”\n[45] George Gulliver, F.R.S., The Works of William Hewson, F. R. S., (London: Printed for the Sydenham Society, MDCCCXLVI), xviii.\n[46] Westminster Rate Books, Craven Street – 1773, courtesy of the City of Westminster Archives.\n[47] S.W. Hillson et al., “Benjamin Franklin, William Hewson, and the Craven Street Bones,” Archaeology International, Vol. 2, (Nov. 22, 1998): 14-16.\nhttp://dx.doi.org/10.5334/ai.0206\n[48] Westminster Rate Books, Craven Street – 1774, 1775, courtesy of the City of Westminster Archives.\n[49] Survey of London, Historical Notes/No. 36, Craven Street (not sourced).\n[50] Gordon S. Wood, The Americanization of Benjamin Franklin, (New York: The Penguin Press, 2004), 261.\n[51] Pettigrew, Memoirs, 146 of Correspondence.\n[52] http://founders.archives.gov/documents/Franklin/01-22-02-0178, note 7. “Falconar married Hewson’s sister five months after the Doctor’s death; most of the Craven Street circle attended the wedding, and BF gave away the bride: Polly to Barbara Hewson, Oct. 4, 1774, APS” (American Philosophical Society); “England Marriages, 1538–1973 ,” database, FamilySearch (https://familysearch.org/ark:/61903/1:1:V52W-TGS : accessed September 15, 2015), Magnus Falconar and Dorothy Hewson, September 12, 1774; citing Saint Martin In The Fields, Westminster, London, England, reference ; FHL microfilm 561156, 561157, 561158, 942 B4HA V. 25, 942 B4HA V. 66.\n[53] I chose to rely on the Westminster Rate Books for the numbering system on Craven Street. The books were consistent throughout the eighteenth century in the ordering of residents on the street and were used as the basis for the 1792 re-numbering. For the most part, commercial directories aligned with them as well. If by chance a directory didn’t initially align, it would inevitably produce future editions that did.\nBenjamin Franklin, Benjamin Franklin House, London\nMore from David Turnquist\nIf one looked into Benjamin Franklin’s time on Craven Street, they might...\nI think it’s very ironic that on the street maps included in your excellent article, Craven Street is so close to Scotland Yard. 
Because following the back and forth juxtapositions of numbers 7, 27 and 36 Craven Street (throw in 75 Northumberland Court and 1 Craven Street, too) was a case that could confound Sherlock Holmes.\nExcellent job of deciphering street renumbering material spanning sixty years, including that of a wrong house number (# 7) being erroneously identified and then perpetuated in subsequent street map printings. It’s gratifying at least to know that the present day #36 Craven Street is the correct house for Ben Franklin tourists to visit. Except for #1 Craven Street for the last three years Franklin was in London, but we won’t get into that.\nAgain, excellent article, David!", "answers": ["An alphabetical list of names and places of abode of the merchants and principal traders of the cities of London and Westminster, the Borough of Southwark, and their environs, with the number affixed to each house."], "length": 6567, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "c39f97567cff21a8591f02117632ea71e7b566f65d2d62ab"} {"input": "What is the rationality coefficient used in the observation model?", "context": "Paper Info\n\nTitle: Incorporating Human Path Preferences in Robot Navigation with Minimal Interventions\nPublish Date: 16 Mar 2023\nAuthor List: Oriana Peltzer, Dylan Asmar, Mac Schwager, Mykel Kochenderfer\n\nFigure\n\nHyperplane arrangement of a twodimensional space containing two obstacles (colored in gray).The robot is located inside the pink polytope, surrounded by three adjacent obstacle-free polytopes.Each hyperplane on the boundary of the robot's polytope corresponds to one of the nonredundant constraints in eq.(4).(b)Graph derived from the hyperplane arrangement.The nodes on the graph designate polytopes, and edges designate transitions to adjacent polytopes.To estimate the human's preference, the robot updates a posterior over the goal and over which of the graph transitions φ 1 , φ 2 and φ 3 is preferred by the human.(c)Example preference defined over the graph.The location of the goal is indicated in yellow in the lower right polytope.For each node, the outgoing pink arrow designates the edge on the graph corresponding to the preferred transition between polytopes.\nSimple, 10 × 10, 8 polytopes.(b) Map 2: Office, 10 × 10, 56 polytopes.(c) Map 3: Classroom, 20 × 20, 73 polytopes.(d) Sampled observations and robot's executed trajectories.\nFig.5: Maps used for simulating the robot navigation problem with path preferences.In (d), the heading angles observed are indicated with arrows.The goal is indicated with a pink circle, and the orange robot corresponds to the starting location.The blue robot follows a policy that accounts for path preference, while the green robot does not.The opacity of the robots increases with time.\nMap 1 problem setup and example realizations for goal-only (green) and path preference (blue) solution methods.The robot starts at the lower left corner of the environment, and the goal of the task (pink circle) is in the upper left area.The robot does not know which goal, among 10 options (shown in light blue squares), is the correct goal.The human provides noisy observations, indicated by arrows, at each iteration.The green robot selects actions according to the goal-only baseline, and the blue robot uses our proposed method to infer path preferences.The polytopes composing G are drawn in blue.Probability of correct goal.WLPHVWHS +J (c) Entropy of goal distribution g.\nFig. 
6: Probability of the correct goal, fig. 6b, and entropy of the goal belief distribution P(g), fig. 6c, for the same problem setup, fig. 6a. In this problem instance, the human's preference is to go to the goal by passing on the right side of the obstacle. Results are averaged over 50 runs and the area filled represents one standard deviation above and below the mean value. The goal-only baseline shows an over-confident prediction (shown by the strong reduction in belief entropy) that the correct goal is less likely, making it more difficult to reach the correct goal compared to a method that accounts for path preference.
Success rates in the simple environment (Map 1). The results are averaged over 6 randomly sampled problem instances (start location, goal location, and goal possibilities), and over 50 runs per problem instance. ∆T is the number of time steps separating two consecutive human inputs. The robot's mission time is T_max = 30 time steps. We selected γ_h = 1.5, corresponding to relatively noisy human inputs and making the problem more difficult to solve for the robot.
Computation times for Goal Only and Path Preference methods on Map 1 (fig. 5a), Map 2 (fig. 5b), and Map 3 (fig. 5c), averaged over 100 runs with randomly sampled problem instances. The 95% confidence interval is provided with the mean. We evaluate computation time at the first iteration of each run (where the search depth takes on its highest value T_max).

abstract

Robots that can effectively understand human intentions from actions are crucial for successful human-robot collaboration. In this work, we address the challenge of a robot navigating towards an unknown goal while also accounting for a human's preference for a particular path in the presence of obstacles.
This problem is particularly challenging when both the goal and path preference are unknown a priori. To overcome this challenge, we propose a method for encoding and inferring path preference online using a partitioning of the space into polytopes. Our approach enables joint inference over the goal and path preference using a stochastic observation model for the human.
We evaluate our method on an unknown-goal navigation problem with sparse human interventions, and find that it outperforms baseline approaches as the human's inputs become increasingly sparse. We find that the time required to update the robot's belief does not increase with the complexity of the environment, which makes our method suitable for online applications.

INTRODUCTION

Collaboration between humans and robots has become increasingly important and one key aspect of this collaboration is the ability for robots to adapt to human decisions. In many scenarios, such as a robot navigating through a busy room to deliver an item, it is important for the robot to take into account human preferences.
For instance, humans may prefer a specific path that would allow their colleagues to notice the item being delivered, but this preference may change dynamically based on various factors such as changes in the environment or unforeseen circumstances. While some preferences can be incorporated into the path-planning process, accommodating dynamic user preferences in real-time remains challenging.
In this paper, we propose a way to enable robots to adapt to human preferences dynamically by leveraging real-time feedback to inform decision-making.
In this work, we tackle the problem of robot navigation in which the robot cannot observe the goal or the preferred path to the goal, but must make navigation decisions that are influenced by humans through recommended actions.
Prior work has explored how to adapt to a human's preference through feedback, but such approaches often require a high level of intervention, which can be time-consuming and impractical in real-world scenarios.
Fig. : An autonomous robot navigates in a simulated classroom towards a goal location (pink circle). At the start of its mission, it receives direction indications (arrows) from a human that indicate which path it should take to get to the goal. In this scenario, the human wants the robot to go around the desks on the right side of the classroom. A robot that does not reason over path preferences (green) will take the shortest path to the goal regardless of the human's input. Our method (blue) infers the human's path preference from these indications and adapts to their recommendations.
To optimize the use of human input and quickly infer the human's preference, we propose an approach that leverages probabilistic representations of human preference and incorporates real-time feedback. Previous research by Bajcsy et al. considered an online adaptation problem in a manipulation task, where the person can apply forces to the robot to indicate their preferences.
By allowing the robot to continue its task while taking into account a probabilistic representation of human preference, their approach does not require frequent inputs. Building on this idea, we adopt a similar approach to adapt to a human's preference in the context of a robot autonomously navigating through a known environment, such as a cluttered office space.
Specifically, we focus on allowing the human to influence the robot's trajectory with respect to obstacles, by providing guidance on preferred routes or paths, while the robot continues to execute its task. Paths can be represented using homotopy classes. However, homotopies can pose computational challenges when used to encode and infer human preferences.
When the robot maintains a belief over homotopy classes, the inference problem can become exponentially complex with the number of obstacles in the space. Additionally, when the goal is unknown, the number of variables increases with the number of candidate destinations. This complexity can render the decision-making problem intractable.
Our solution is to encode path preference based on a partitioning of the environment into polytopes. This representation allows path preferences to be expressed as sets of preferred transitions between adjacent polytopes. Paths belonging to different homotopy classes correspond to different sequences of transitions.
By leveraging conditional independence assumptions, we can make the Bayesian inference problem tractable. These assumptions exploit the fact that human actions provide information about the path in a piece-wise manner. For example, indicating a preference for navigating around a particular obstacle only provides information about the local area and not the entire path.
Finally, after updating its belief representation over the human's preference, the robot can adapt to indications by replanning online. Our contributions are as follows.
• We formulate the human-robot collaboration problem as a Partially Observable Markov Decision Process (POMDP) where both the goal of the task and the human's path preference are unknown random variables.
• We propose an encoding of a human's path preference using a partitioning of the environment into polytopes, along with conditional independence assumptions that make the Bayesian inference problem tractable to infer the task goal and path preference online.
• Through simulations in two environments of different sizes and complexity, we show that our method is effective for solving problems where the robot must reach a goal that is unknown a priori while simultaneously adapting to a human's indications.
Our method shows higher success rates compared to baseline approaches when the human inputs are sparse. Our approach enables a robot to make effective navigation decisions in collaboration with a human, even when the goal and path preference are not known in advance, and with minimal human input.

RELATED WORK

In recent years, there has been a growing interest in shared autonomy and interactive systems, where humans and robots work together to accomplish tasks.
Several approaches have been proposed to address the challenge of enabling effective collaboration between human and robot agents while still achieving high task performance. Losey et al. and Jeon, Losey, and Sadigh propose a framework where a human operator is given control of a task-relevant latent action space while an autonomous system handles the rest.
Dragan and Srinivasa present a formalism for arbitrating between a user's input and a robot's policy when both human and robot share control of the same action space. Cognetti et al. [7] provide a method for real-time modifications of a path, while Hagenow et al. present a method that allows an outside agent to modify key robot state variables and blends the changes with the original control.
Fig. : We model the intent inference problem with the above diagram. At each step in time, the robot receives an observation o_t from the human conditioned on its current location s_t, the intended goal g, and the human's path preference θ. The robot updates its belief over g and θ and transitions to a next location s_{t+1}.
However, a common challenge of these approaches is the high level of intervention required from humans. Best and Fitch propose a method for predicting an agent's intended trajectory from observations. Rather than maintaining a belief over the agent's future path, they infer the agent's intended goal among a set of candidate locations at the boundary of the space.
This approach provides information on where the agent is heading and generates a distribution of candidate future trajectories for the agent. Inferring the goal of the task among a discrete set of candidates is also relevant to the area of shared autonomy. Javdani, Srinivasa, and Bagnell propose a formalism for shared control of a robotic arm, where the robot must assist the human in picking up an object but needs to infer which object the human has chosen from joystick inputs.
Planning with homotopy class constraints is useful in problems where the robot's requirements are given with respect to obstacles, and Yi, Goodrich, and Seppi consider topological constraints provided by human operators.
Bhattacharya et al. propose an efficient algorithm for solving path-planning problems under homotopic constraints.
However, the number of homotopy classes for a given problem can be infinite, and as the robot changes location and updates its representation of the world, carrying out inference over homotopy classes in a dynamic environment requires recomputing the set of homotopies at every iteration, making the belief update challenging.
Prior work has addressed the challenge of shared autonomy by considering how robots can infer a human's intended goal, or how they can infer the preferred path to a goal. However, we argue that inferring the goal and the path as separate problems can lead to over-confidence in incorrect beliefs about the user's preferences.
Fig. : Using the hyperplanes composing the H-representation of each obstacle, we construct a hyperplane arrangement of the obstacle-free space (a). We define the human's preference for the robot's one-step action choices as the posterior distribution (given all human input up to that point) over transitions from the current to the neighboring polytopes, i.e. edges on the graph. Each time the robot transitions to a new polytope, the set of neighbor polytopes and the distribution over human preferences are updated.
To illustrate this point, consider the following scenario: a robot and a human are collaborating to move an object from one end of a room to another, but there is an obstacle in the way. The human would like the robot to take a path around the obstacle on the left, even though the goal is on the right. If the robot only infers the goal from the human's inputs, it may incorrectly assume that the goal is on the right, and become over-confident in this belief.
On the other hand, if the robot only infers the preferred path, it may mistakenly assume that the goal is on the left, leading to a failure in completing the task. To overcome these challenges, our work proposes a joint inference approach that considers both the human's intended goal and their preferred path to that goal.
Specifically, we model the human's preference over different homotopy classes and leverage a conditional independence assumption to provide a tractable solution. In our approach, we assume that the human's inputs are noisily rational conditioned on both the goal and the preference. By jointly inferring the goal and path preference, we can avoid over-confidence in incorrect beliefs about the user's preferences, leading to improved system performance.

PROBLEM FORMULATION

We consider the problem of robot navigation in a known environment to an unknown destination, where a human can intervene and provide a heading direction to the robot using a joystick or force cues. The human also has a preference on which path the robot should take with respect to obstacles, and our objective is for the robot to understand the human's intentions and execute the task with minimal interventions.
Let g be a discrete random variable denoting the goal of the task, belonging to a set of candidates Ω_g, and let θ be a discrete-valued random variable representing the human's path preference, belonging to a set of possible preferences Θ. The physical location of the robot at time index t is denoted by s_t ∈ R², and the robot's action at time index t, belonging to some action space A, is denoted by a_t.
The transition model T(s_{t+1} | s_t, a_t) is deterministic, meaning the robot has full control over its future location. At any time step, the human may provide an observation to the robot.
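To make these quantities concrete, the short sketch below instantiates them for the grid-world setting used later in the experiments. It is an illustrative reading only: the names (NavProblem, ACTIONS, step) are assumptions, not the authors' implementation.

from dataclasses import dataclass
from typing import FrozenSet, Tuple

State = Tuple[int, int]        # robot location s_t on the grid
Action = Tuple[int, int]       # one-step move, e.g. (1, 0) or (1, 1) for a diagonal

# 8-connected action space A (diagonal moves included)
ACTIONS: Tuple[Action, ...] = tuple(
    (dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)
)

@dataclass(frozen=True)
class NavProblem:
    goal_candidates: FrozenSet[State]   # Omega_g: candidate goals g
    obstacles: FrozenSet[State]         # occupied cells of the known map
    width: int
    height: int

    def step(self, s: State, a: Action) -> State:
        """Deterministic transition T(s_{t+1} | s_t, a_t): move to the adjacent
        cell unless it would leave the map or enter an obstacle."""
        nxt = (s[0] + a[0], s[1] + a[1])
        if not (0 <= nxt[0] < self.width and 0 <= nxt[1] < self.height):
            return s
        return s if nxt in self.obstacles else nxt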
When the human intervenes, the robot receives a direction (heading angle) that can be mapped to a future location in space.
More specifically, we map the direction to an intended location, which is the resulting robot location after advancing in the indicated direction for one time step. For simplicity, we consider that the robot directly makes an observation o_t of the location indicated by the human. We assume that the robot has a stochastic observation model for the human, P(o_t | s_t, g, θ), that is conditioned on both the goal of the task g and the human's preferred path θ.
We further assume that having chosen a goal and path preference, the human takes actions to noisily minimize a cost function C_{g,θ} that measures the cost of moving from the robot's current location to the goal along the preferred path. For example, C_{g,θ}(s_t, o_t) can be the length of the shortest path from location s_t to the goal g after taking a first step to o_t, and constrained by path preference θ.
We use C_{g,θ} to induce a probability distribution over observations, given by

P(o_t | s_t, g, θ) = exp(−γ_h C_{g,θ}(s_t, o_t)) / Σ_{o'} exp(−γ_h C_{g,θ}(s_t, o')),

where γ_h is a hyperparameter that designates the rationality coefficient. This model assumes the human will pick the lowest-cost action with the highest probability, and the likelihood of an action decreases exponentially with the increase in cost.
Our inclusion of the path preference θ sets our approach apart from prior goal-only approaches. The model is shown as a Bayesian network in the intent-inference diagram figure.

Inference

At each time step where the human provides an observation, the posterior P(g, θ) is given through the Bayesian update

P(g, θ | o_t) ∝ P(o_t | s_t, g, θ) P(g, θ).

We note that the number of Bayesian updates required at each iteration to update the belief is equal to the cardinality of Ω_g × Θ. In addition, each Bayesian update involves computing C_{g,θ}(·, ·) in the observation model above, which involves solving an optimization problem (such as a shortest path problem). In section IV, we propose a specific encoding of preference θ for resolving this update while ensuring that the number of computations of the cost C_{g,θ}(·, ·) per update does not grow exponentially with the number of obstacles.

Decision Making

We consider a navigation problem where the robot receives reward according to the model R(s_t, g, θ, a_t). We wish to find the optimal policy π that maximizes the expected discounted sum of future rewards, with discount factor γ. The above problem is a Partially Observable Markov Decision Process (POMDP).
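As a minimal illustration of the observation model and belief update above (illustrative Python; the candidate observation set, the cost callables, and the names are assumptions rather than the authors' code; γ_h = 1.5 follows the value reported in the experiments):

import math
from typing import Callable, Dict, Hashable, List, Tuple

Intent = Tuple[Hashable, Hashable]   # one (g, theta) hypothesis

def observation_likelihood(o, s, cost: Callable, candidates: List,
                           gamma_h: float = 1.5) -> float:
    """Noisily-rational human model: P(o | s, g, theta) is proportional to
    exp(-gamma_h * C_{g,theta}(s, o)), normalized over the candidate one-step
    locations the human could have indicated from s (o must be one of them)."""
    weights = {c: math.exp(-gamma_h * cost(s, c)) for c in candidates}
    return weights[o] / sum(weights.values())

def bayes_update(belief: Dict[Intent, float], o, s,
                 cost_of: Dict[Intent, Callable], candidates: List,
                 gamma_h: float = 1.5) -> Dict[Intent, float]:
    """Joint update P(g, theta | o) ∝ P(o | s, g, theta) P(g, theta).
    Each hypothesis requires one likelihood (hence one cost) evaluation."""
    posterior = {
        hypo: prior * observation_likelihood(o, s, cost_of[hypo], candidates, gamma_h)
        for hypo, prior in belief.items()
    }
    z = sum(posterior.values())
    return {h: p / z for h, p in posterior.items()} if z > 0 else dict(belief)

Under the conditional independence assumption introduced in the next section, the same update is run over (g, p_v) for the robot's current polytope only, which keeps the number of cost evaluations at |N(v)| × m_g.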
In this section, we propose an encoding of the human's path preference θ for computing the posterior introduced above. Departing from the concept of homotopy classes, we define the preference according to a partitioning of the environment into polytopes, as illustrated in the hyperplane arrangement figure, creating a hyperplane arrangement of the space.
Hyperplane arrangements have been used by Vincent and Schwager in the context of neural network verification. In our setting, we leverage this representation to define path preferences as preferred transitions between adjacent regions of the space.

Hyperplane Arrangement

We assume a two-dimensional environment composed of m polytopic obstacles, each defined by its half-space representation (H-representation)

O_i = { x ∈ R² : A_i x ≤ b_i },

where A_i ∈ R^{d_i×2} and b_i ∈ R^{d_i}, and where d_i is the number of edges (hyperplanes) composing polytope i. Let n = Σ_i d_i be the total number of hyperplanes. We leverage each obstacle's H-representation to construct a hyperplane arrangement of the environment, i.e. a partitioning of the space into polytopes.
More specifically, each location in space belongs to a polytope j for which we can write an H-representation of the form

P_j = { x ∈ R² : diag(α_i^j)(A_i x − b_i) ≤ 0, i = 1, …, m },

where α_i^j ∈ {−1, 1}^{d_i} is a vector specific to polytope j and obstacle i corresponding to the relative position of any point in the set with respect to each hyperplane in O_i.
Fig. : Intent inference model in a hyperplane arrangement of the obstacle-free space. We spatially decompose the preference θ into a set of preferred neighboring polytopes per region of the space. Within each polytope j, the human preference p_j is a discrete distribution over the preferred neighbor in N(j). We assume that for a location s_t belonging to polytope j, and given goal g and preference p_j, the observation o_t and any other preference p_i, i ≠ j, are conditionally independent.
Concatenating elements from each obstacle's H-representation, we can write polytope j's H-representation as

P_j = { x ∈ R² : diag(α^j)(A x − b) ≤ 0 },

where A and b stack the rows of all A_i and b_i and α^j stacks the corresponding sign vectors α_i^j. Some of the constraints in this representation (corresponding to rows of A, b and α^j) are redundant, i.e. the set P_j does not change upon their removal.
We can further reduce the H-representation of a polytope to include only non-redundant constraints. By removing the rows corresponding to redundant constraints, we obtain new matrices A_e^j, b_e^j and α_e^j such that we can write the polytope's reduced H-representation as

P_j = { x ∈ R² : diag(α_e^j)(A_e^j x − b_e^j) ≤ 0 }.

The non-redundant constraints correspond to edges of the polytope.
In other words, as the robot continually moves in space, the first hyperplane that it will cross upon exiting the polytope will correspond to one of the polytope's non-redundant constraints. Vincent and Schwager outline an iterative method for removing redundant constraints by solving n linear programs.
We use this method in practice for computing α_e^j for each polytope. We can now characterize each polytope by a vector α_e^j ∈ {−1, 1}^{n_e^j}, where n_e^j ≤ n is the number of essential constraints of the polytope. The polytopes P_j partition the environment into a hyperplane arrangement.
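The bookkeeping above can be sketched compactly. The snippet below (illustrative Python with numpy; names are assumptions, and the linear-program reduction of redundant constraints is omitted) labels a point by its sign vector over the stacked obstacle hyperplanes and applies a simplified adjacency test on those signatures.

import numpy as np

def signature(x: np.ndarray, A: np.ndarray, b: np.ndarray) -> tuple:
    """Sign vector alpha in {-1, +1}^n of the region containing x, where A and b
    stack the hyperplanes of all obstacles. Entries are +1 where A_k x <= b_k and
    -1 otherwise, matching diag(alpha)(A x - b) <= 0."""
    return tuple(int(v) for v in np.where(A @ x - b <= 0.0, 1, -1))

def maybe_adjacent(alpha_u: tuple, alpha_v: tuple) -> bool:
    """Simplified adjacency test for the graph G: two regions can only share a
    facet if their signatures differ on exactly one hyperplane. The construction
    in the text additionally restricts this check to the non-redundant rows of
    each region's reduced H-representation."""
    return sum(a != b for a, b in zip(alpha_u, alpha_v)) == 1

Building the graph G then amounts to grouping obstacle-free locations by signature and connecting signatures that pass this test.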
More specifically, we introduce the assumption that conditioned on a robot location s t , the goal g, and the preference for the corresponding vertex p v in the graph, the observation o t and the preference for any other vertex are conditionally independent.\nIn other words, the observations the human provides can be defined conditioned only on the robot location, the goal, and the human's preference for its current vertex p v . By introducing this assumption, each update step only requires updating the joint (p v , g), reducing the number of cost computations to |N (v)| × m g .\nWe can notice that by introducing this assumption, we removed the direct relationship between the number of polytopes in the environment and the complexity of the Bayesian update in eq. ( ). In practice, components of θ are not mutually independent. For example, if the human preference at a vertex v 1 is\n, it is unlikely that the human will also prefer p v2 = (v 2 , v 1 ) (turning back). We can improve our model by assuming a dependent relationship between preferences for adjacent edges, which does not significantly increase the complexity of the inference problem. An interesting property of our encoding is that any two paths that belong to different homotopy classes will cross different sequences of polytopes, i.e. they correspond to a different sequence of edges on G.\nThis can be proved by contradiction. Let us suppose that two continuous trajectories ξ 1 and ξ 2 , with the same start and end points and that do not intersect any obstacle, traverse the same regions in G in the same order. From the construction of the hyperplane arrangement, each polytope that the paths traverse through is obstacle-free.\nTherefore, within each polytope, there is no obstacle in the area located in between the portions of ξ 1 and ξ 2 that belong to the region. A smooth transformation of ξ 1 into ξ 2 can be obtained by transforming each portion of ξ 1 belonging to the polytopes it intersects into the corresponding portion of ξ 2 for the same polytopes, where the extremities of the trajectory portions are connected to one another along the polytope's edges (where the same edge is crossed by both paths).\nAlong this transformation, the paths do not intersect any obstacle, and therefore ξ 1 and ξ 2 belong to the same homotopy class.\n\nEXPERIMENTS\n\nWe evaluate our model on a simulated navigation task where the robot must reach a goal that is unknown a priori while respecting the path preferences indicated by a human. The robot navigates in a grid world containing obstacles. The transition model is deterministic: the robot selects an adjacent location on the grid to reach at the next time step.\nThe robot is also allowed to take diagonal actions. Each location s t in the map can be mapped to a vertex v t ∈ G. Therefore, the actions leading to locations mapped to different vertices correspond to edges on the graph. We note f (s t , a t ) the edge crossed by taking action a t from location s t .\nThe robot is given a mission time limit T max for reaching the goal. In this problem, we assume that the human selects actions to noisily minimize a cost function C g,θ , where θ is defined as per eq. ( ), corresponding to the length of the shortest path to the goal constrained by the preference (where the robot is only allowed to make transitions on G along preferred edges).\nMore specifically, where δ(s t , g | o t , p vt ) designates the length of the shortest path from s t to g passing by o t and constrained by preference p vt . 
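A sketch of how the preference-constrained cost δ(s_t, g | o_t, p_v) might be evaluated. The paper computes these lengths with A* on the grid; for brevity the sketch below uses plain Dijkstra over an adjacency structure from which transitions violating the preferred polytope-to-polytope moves have already been pruned. The adjacency format and function names are illustrative assumptions, not the authors' code.

```python
import heapq

def constrained_path_length(adj, start, goal):
    """Dijkstra over an adjacency dict {node: [(neighbor, cost), ...]} from
    which non-preferred transitions have already been removed; returns the
    length of the shortest preference-respecting path (inf if none exists)."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

def preference_constrained_cost(s, o, g, adj_pruned):
    """delta(s, g | o, p_v): reach the indicated location o first, then follow
    the constrained shortest path from o to the goal g."""
    return constrained_path_length(adj_pruned, s, o) + constrained_path_length(adj_pruned, o, g)
```

In the belief update this cost is evaluated |N(v)| × m_g times per observation, as discussed above.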
This is a slight variant of the cost function proposed by Best and Fitch , where we add in a conditioning on the path preference. We compute costs by running the A path planning algorithm on the environment maps (grid worlds with diagonal actions) and impose preference constraints by pruning invalid transitions from the search tree.\nReward model. At each step in time, the robot receives a reward which is a sum of three components: a goal-specific reward a preference-specific reward or penalty We compute solutions to the POMDP defined in section III-B with the online solver POMCP , and with the particularity that within the rollouts, the robot does not expect to collect human inputs.\nEach time a solution is computed, the robot takes an action and may receive an observation. If it does, it updates its belief distribution over the unknown problem variables and resolves the POMDP over a receding horizon.\n\nBaselines\n\n• Goal only. The robot solves the POMDP while ignoring the effects of path preference. Similarly to , we assume the human is taking action to minimize a goaldependent cost C g (s t , o t ) = δ(s t , g | o t ), where the conditioning on the preference is removed. We also omit the path preference's contribution to the reward R pref .\n• Compliant. The robot complies with the human input, but does not take an initiative. If the user stops providing information, the robot continues in the last direction indicated for 5 time steps (conserving its momentum), then stops. • Blended. We designed an arbitration function to decide between our proposed policy (accounting for path preferences) and the user's recommendation when the robot receives inputs.\nOur metric to evaluate confidence in the robot's prediction for the purpose of arbitration is the entropy of the intention distribution H(g, p i ), where p i denotes the preferred neighbor for the current region. Because our representation of the world is discrete, the arbitration is given by a step function.\nDenoted by U , the action corresponding to the human's input, and P , the robot's prediction for the optimal action, we write the policy where we chose h = 1.6 as the confidence threshold.\n\nResults\n\nWhen evaluating the algorithm, we consider that a run is successful if the robot reached the goal within its allocated mission time T max and only made transitions between graph vertices corresponding to the human's preferences. We vary the time delay between human inputs, from constant guidance (∆ T = 1) to only a single observation (∆ T ≥ T max ).\nSuccess rates. Table I reports the success rates for experiments conducted over six randomly sampled problem instances and 50 runs per instance in Map 1 (fig. ). When the human provides inputs at every iteration, the compliant policy shows the highest success rates. However, as ∆ T increases, the compliant robot is not able to accomplish the task within the allotted time as it does not receive sufficient inputs to do so, and performance decreases compared to the autonomous baselines.\nWe find that in these runs, accounting for path preference consistently improves performance compared with the goal-only baseline. Results also show that blending the user's input with the robot's policy (Path Preference + Blend) when the human provides information leads to improved performance. Belief entropy.\nFigure shows a challenging problem instance where the directions the human provides do not align directly with the shortest path to the goal. 
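The step-function arbitration used by the Blended baseline above can be sketched as follows. Since the policy's expression is not reproduced here, the direction of the comparison and the logarithm base are assumptions: the sketch defers to the human's input U when the entropy of the intention distribution exceeds the threshold h = 1.6 (low confidence in the robot's prediction) and otherwise executes the robot's own prediction P.

```python
import numpy as np

def intention_entropy(belief):
    """Shannon entropy H(g, p_i) of the joint intention distribution,
    given as a flat array of probabilities (natural log assumed)."""
    p = np.asarray(belief, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def blended_action(user_action, predicted_action, belief, h=1.6):
    """Step-function arbitration between the human's input U and the robot's
    predicted optimal action P, switching on the belief entropy threshold h."""
    return user_action if intention_entropy(belief) > h else predicted_action
```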
By ignoring the effects of preferences in the problem model (goal only), the robot quickly infers from observations that the upper left goal is less likely than others (P (g) drops).\nThe strong decrease in entropy shows that the robot becomes overconfident in this prediction. Overconfidence in an incorrect goal will prevent the agent from finding the correct goal once the human's indications directly align with it, as it needs to correct for the wrong predictions, as shown in the path realization (fig.\n). In this realization, the goal-only method (green robot) fails to search the upper left area within the allotted time. By accounting for path preferences in its model, the blue robot's entropy over the goal distribution decreases more steadily, allowing for it to leverage the human's latest observations and reach the goal successfully.\nshows an over-confident prediction (shown by the strong reduction in belief entropy) that the correct goal is less likely, making it more difficult to reach the correct goal compared to a method that accounts for path preference. Computation time. In table II we provide the time required to solve the POMDP, and the time required to update the robot's belief as it receives new observations.\nWe compute solutions on three maps: a simple 10 × 10 grid world with 8 polytopes (fig. ), a 10 × 10 grid world with 56 polytopes (fig. ), and a 20×20 grid world with 73 polytopes (fig. ). The latter environment being larger, we increase the mission time and the depth of the search tree in POMCP from T max = 30 (Map 1 and Map 2) to T max = 60 (Map 3).\nWe do not notice an increase in the time required to update the robot's belief with an increase in problem complexity, which is consistent with our observation that the complexity of the Bayesian update should not increase with the number of obstacles or polytopes. On the contrary, the belief update time on Map 2 and Map 3, containing more obstacles, is reduced compared to the first map.\nMore obstacles result in fewer iterations when solving the constrained shortest path problem with A . Adding constraints due to the obstacles and polytopes reduces the size of the A search tree. C. Limitations Simulation environments. In our simulations, we hardcoded the preference policy over the maps (e.g. in Map 1, go around the table counter-clockwise).\nWe randomly sampled problem instances (start and goal locations, and goal options) to reduce the bias introduced by these preference choices. To best evaluate and compare the different approaches, it would be best to sample preferences among a distribution of preferences chosen by a human (for example, from benchmarks resulting from a collection of data).\nCreating such a benchmark is an interesting direction for future work. Hyperplane arrangement construction. The main limitation of our approach is that the size and geometry of each polytope depends strongly on the geometry of the obstacles, as seen in fig. . Because of this, the robot can make predictions over preferences that are too refined compared with the topology of the environment.\nA direct consequence is that when the size of the polytopes is small, the information provided by the human can be incorrectly interpreted as a preference on the robot's immediate action. 
Our method can be improved by changing the structure of the hyperplane arrangement so that it relies on the topology of the environment, but does not vary strongly with the geometry of the features in the environment.\nFor this purpose, topometric maps and region construction algorithms are promising directions. We presented an approach for encoding and inferring a human's path preference in an environment with obstacles. By leveraging a partitioning of the space into polytopes and a stochastic observation model, our method allows for joint inference over the goal and path preference even when both are unknown a-priori.\nOur experiments on an unknown-goal navigation problem with sparse human interventions demonstrate the effectiveness of our approach and its suitability for online applications. The time required to update the robot's belief does not increase with the complexity of the environment, which further highlights the practicality of our method.", "answers": ["γh."], "length": 5646, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "5914446861ce8a3d0d9204dfc9d41edd747de735c3c3fc36"} {"input": "What factors control the reliance of artificial organisms on plasticity?", "context": "Paper Info\n\nTitle: Environmental variability and network structure determine the optimal plasticity mechanisms in embodied agents\nPublish Date: Unkown\nAuthor List: Sina Khajehabdollahi (from Department of Computer Science, University of Tübingen)\n\nFigure\n\nFigure2: An outline of the network controlling the foraging agent.The sensor layer receives inputs at each time step (the ingredients of the nearest food), which are processed by the plastic layer in the same way as the static sensory network, Fig.1.The output of that network is given as input to the motor network, along with the distance d and angle α to the nearest food, the current velocity v, and energy E of the agent.These signals are processed through two hidden layers to the final output of motor commands as the linear and angular acceleration of the agent\nFigure4: The evolved parameters θ = (θ 1 , . . ., θ 8 ) of the plasticity rule for the reward prediction (a.) and the decision (b.) tasks, for a variety of parameters (p tr = 0.01, d e ∈ 0, 0.1, . . ., 1, and σ ∈ 0, 0.1, . . ., 1 in all 100 combinations).Despite the relatively small difference between the tasks, the evolved learning rules differ considerably.For visual guidance, the lines connect θs from the same run.\nFigure5: a.The trajectory of an agent (blue line) in the 2D environment.A well-trained agent will approach and consume food with positive values (green dots) and avoid negative food (red dots).b.The learning rate of the plastic sensory network eta p grows with the distance between environments d e c. and decreases with the frequency of environmental change.d.The fitness of an agent (measured as the total food consumed over its lifetime) increases over generations of the EA for both the scalar and binary readouts in the sensory network.e.The Pearson correlation coefficient of an evolved agent's weights with the ingredient value vector of the current environment (E 1 -blue, E 2 -red).In this example, the agent's weights are anti-correlated with its environment, which is not an issue for performance since the motor network can interpret the inverted signs of food.\n\nabstract\n\nThe evolutionary balance between innate and learned behaviors is highly intricate, and different organisms have found different solutions to this problem. 
We hypothesize that the emergence and exact form of learning behaviors is naturally connected with the statistics of environmental fluctuations and tasks an organism needs to solve.\nHere, we study how different aspects of simulated environments shape an evolved synaptic plasticity rule in static and moving artificial agents. We demonstrate that environmental fluctuation and uncertainty control the reliance of artificial organisms on plasticity. Interestingly, the form of the emerging plasticity rule is additionally determined by the details of the task the artificial organisms are aiming to solve.\nMoreover, we show that coevolution between static connectivity and interacting plasticity mechanisms in distinct sub-networks changes the function and form of the emerging plasticity rules in embodied agents performing a foraging task. One of the defining features of living organisms is their ability to adapt to their environment and incorporate new information to modify their behavior.\nIt is unclear how the ability to learn first evolved , but its utility appears evident. Natural environments are too complex for all the necessary information to be hardcoded genetically and more importantly, they keep changing during an organism's lifetime in ways that cannot be anticipated ; . The link between learning and environmental uncertainty and fluctuation has been extensively demonstrated in both natural ; , and artificial environments .\nNevertheless, the ability to learn does not come without costs. For the capacity to learn to be beneficial in evolutionary terms, a costly nurturing period is often required, a phenomenon observed in both biological , and artificial organisms . Additionally, it has been shown that in some complex environments, hardcoded behaviors may be superior to learned ones given limits in the agent's lifetime and envi-ronmental uncertainty ; ; .\nThe theoretical investigation of the optimal balance between learned and innate behaviors in natural and artificial systems goes back several decades. However, it has recently found also a wide range of applications in applied AI systems ; . Most AI systems are trained for specific tasks, and have no need for modification after their training has been completed.\nStill, technological advances and the necessity to solve broad families of tasks make discussions about life-like AI systems relevant to a wide range of potential application areas. Thus the idea of open-ended AI agents that can continually interact with and adapt to changing environments has become particularly appealing.\nMany different approaches for introducing lifelong learning in artificial agents have been proposed. Some of them draw direct inspiration from actual biological systems ; . Among them, the most biologically plausible solution is to equip artificial neural networks with some local neural plasticity , similar to the large variety of synaptic plasticity mechanisms ; ; that performs the bulk of the learning in the brains of living organisms .\nThe artificial plasticity mechanisms can be optimized to modify the connectivity of the artificial neural networks toward solving a particular task. The optimization can use a variety of approaches, most commonly evolutionary computation. 
The idea of meta-learning or optimizing synaptic plasticity rules to perform specific functions has been recently established as an engineering tool that can compete with stateof-the-art machine learning algorithms on various complex tasks ; ; Pedersen and Risi (2021); .\nAdditionally, it can be used to reverse engineer actual plasticity mechanisms found in biological neural networks and uncover their functions ; . Here, we study the effect that different factors (environ-arXiv:2303.06734v1 [q-bio.NC] 12 Mar 2023 mental fluctuation and reliability, task complexity) have on the form of evolved functional reward-modulated plasticity rules.\nWe investigate the evolution of plasticity rules in static, single-layer simple networks. Then we increase the complexity by switching to moving agents performing a complex foraging task. In both cases, we study the impact of different environmental parameters on the form of the evolved plasticity mechanisms and the interaction of learned and static network connectivity.\nInterestingly, we find that different environmental conditions and different combinations of static and plastic connectivity have a very large impact on the resulting plasticity rules. We imagine an agent who must forage to survive in an environment presenting various types of complex food particles. Each food particle is composed of various amounts and combinations of N ingredients that can have positive (food) or negative (poison) values.\nThe value of a food particle is a weighted sum of its ingredients. To predict the reward value of a given resource, the agent must learn the values of these ingredients by interacting with the environment. The priors could be generated by genetic memory, but the exact values are subject to change. To introduce environmental variability, we stochastically change the values of the ingredients.\nMore precisely, we define two ingredient-value distributions E 1 and E 2 and switch between them, with probability p tr for every time step. We control how (dis)similar the environments are by parametrically setting E 2 = (1 − 2d e )E 1 , with d e ∈ [0, 1] serving as a distance proxy for the environments; when d e = 0, the environment remains unchanged, and when d e = 1 the value of each ingredient fully reverses when the environmental transition happens.\nFor simplicity, we take values of the ingredients in E 1 equally spaced between -1 and 1 (for the visualization, see Fig. ). The static agent receives passively presented food as a vector of ingredients and can assess its compound value using the linear summation of its sensors with the (learned or evolved) weights, see Fig. .\nThe network consists of N sensory neurons that are projecting to a single post-synaptic neuron. At each time step, an input X t = (x 1 , . . . , x N ) is presented, were the value x i , i ∈ {1, . . . , N } represents the quantity of the ingredient i. We draw x i independently form a uniform distribution on the [0, 1] interval (x i ∼ U (0, 1)).\nThe value of each ingredient w c i is determined by the environment (E 1 or E 2 ). The postsynaptic neuron outputs a prediction of the food X t value as y t = g(W X T t ). Throughout the paper, g will be either the identity function, in which case the prediction neuron is linear, or a step-function; however, it could be any other nonlinearity, such as a sigmoid or ReLU.\nAfter outputting the prediction, the neuron receives feedback in the form of the real value of the input R t . 
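A minimal sketch of the generative process just described: two ingredient-value vectors with E_2 = (1 − 2 d_e) E_1, a switch between them with probability p_tr at every time step, ingredient amounts drawn uniformly from [0, 1], a prediction y_t = g(W X_t^T), and a noisy reward R_t = W_c X_t^T + ξ as defined next. Function names and the random seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_environments(n_ingredients, d_e):
    """E1 has ingredient values equally spaced in [-1, 1]; E2 = (1 - 2*d_e) * E1."""
    e1 = np.linspace(-1.0, 1.0, n_ingredients)
    return e1, (1.0 - 2.0 * d_e) * e1

def present_food(W, w_true, sigma, g=lambda x: x):
    """One presentation: sample ingredient amounts, predict the value,
    and return (input, prediction, noisy reward)."""
    x = rng.uniform(0.0, 1.0, size=W.shape)      # ingredient amounts
    y = g(W @ x)                                  # agent's prediction
    R = w_true @ x + rng.normal(0.0, sigma)       # true value plus sensing noise
    return x, y, R

def maybe_switch(in_e1, p_tr):
    """The environment flips between E1 and E2 with probability p_tr per step."""
    return (not in_e1) if rng.random() < p_tr else in_e1

e1, e2 = make_environments(8, d_e=0.3)   # example usage with illustrative numbers
```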
The real value is computed as R t = W c X T t + ξ, where W c = (w c 1 , . . . , w c N ) is the actual value of the ingredients, and ξ is a term summarizing the noise of reward and sensing system ξ ∼ N (0, σ).\nFigure : An outline of the static agent's network. The sensor layer receives inputs representing the quantity of each ingredient of a given food at each time step. The agent computes the prediction of the food's value y t and is then given the true value R t ; it finally uses this information in the plasticity rule to update the weight matrix.\nFor the evolutionary adjustment of the agent's parameters, the loss of the static agent is the sum of the mean squared errors (MSE) between its prediction y t and the reward R t over the lifetime of the agent. The agent's initial weights are set to the average of the two ingredient value distributions, which is the optimal initial value for the case of symmetric switching of environments that we consider here.\nAs a next step, we incorporate the sensory network of static agents into embodied agents that can move around in an environment scattered with food. To this end, we merge the static agent's network with a second, non-plastic motor network that is responsible for controlling the motion of the agent in the environment.\nSpecifically, the original plastic network now provides the agent with information about the value of the nearest food. The embodied agent has additional sensors for the distance from the nearest food, the angle between the current velocity and the nearest food direction, its own velocity, and its own energy level (sum of consumed food values).\nThese inputs are processed by two hidden layers (of 30 and 15 neurons) with tanh activation. The network's outputs are angular and linear acceleration, Fig. . The embodied agents spawn in a 2D space with periodic boundary conditions along with a number of food particles that are selected such that the mean of the food value distribution is ∼ 0. An agent can eat food by approaching it sufficiently closely, and each time a food particle is eaten, it is The sensor layer receives inputs at each time step (the ingredients of the nearest food), which are processed by the plastic layer in the same way as the static sensory network, Fig. .\nThe output of that network is given as input to the motor network, along with the distance d and angle α to the nearest food, the current velocity v, and energy E of the agent. These signals are processed through two hidden layers to the final output of motor commands as the linear and angular acceleration of the agent re-spawned with the same value somewhere randomly on the grid (following the setup of ).\nAfter 5000 time steps, the cumulative reward of the agent (the sum of the values of all the food it consumed) is taken as its fitness. During the evolutionary optimization, the parameters for both the motor network (connections) and plastic network (learning rule parameters) are co-evolved, and so agents must simultaneously learn to move and discriminate good/bad food.\nReward-modulated plasticity is one of the most promising explanations for biological credit assignment . 
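For the embodied agent, the static motor network described above (two tanh hidden layers of 30 and 15 units mapping the plastic network's output plus distance, angle, velocity and energy to linear and angular acceleration) can be sketched as a plain forward pass. The parameter initialization shown is only there to make the example run; in the paper these weights are evolved, and the agent's fitness is the cumulative value of food consumed over the 5000-step episode.

```python
import numpy as np

def motor_forward(params, sensory_out, d, alpha, v, energy):
    """Static motor network: 5 inputs -> 30 tanh units -> 15 tanh units ->
    [linear acceleration, angular acceleration]."""
    z = np.array([sensory_out, d, alpha, v, energy])
    h1 = np.tanh(params["W1"] @ z + params["b1"])    # 30 hidden units
    h2 = np.tanh(params["W2"] @ h1 + params["b2"])   # 15 hidden units
    return params["W3"] @ h2 + params["b3"]          # motor commands

# Random parameters purely for illustration (the real ones come from evolution).
rng = np.random.default_rng(0)
sizes = [(30, 5), (15, 30), (2, 15)]
params = {f"W{i + 1}": rng.normal(0.0, 0.1, s) for i, s in enumerate(sizes)}
params.update({f"b{i + 1}": np.zeros(s[0]) for i, s in enumerate(sizes)})
print(motor_forward(params, sensory_out=0.4, d=1.2, alpha=0.3, v=0.0, energy=0.0))
```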
In our network, the plasticity rule that updates the weights of the linear sensor network is a rewardmodulated rule which is parameterized as a linear combination of the input, the output, and the reward at each time step:\nAdditionally, after each plasticity step, the weights are normalized by mean subtraction, an important step for the stabilization of Hebbian-like plasticity rules . We use a genetic algorithm to optimize the learning rate η p and amplitudes of different terms θ = (θ 1 , . . . , θ 8 ). The successful plasticity rule after many food presentations must converge to a weight vector that predicts the correct food values (or allows the agent to correctly decide whether to eat a food or avoid it).\nTo have comparable results, we divide θ = (θ 1 , . . . , θ 8 ) by We then multiply the learning rate η p with θ max to maintain the rule's evolved form unchanged, η norm p = η p • θ max . In the following, we always use normalized η p and θ, omitting norm . To evolve the plasticity rule and the moving agents' motor networks, we use a simple genetic algorithm with elitism .\nThe agents' parameters are initialized at random (drawn from a Gaussian distribution), then the sensory network is trained by the plasticity rule and finally, the agents are evaluated. After each generation, the bestperforming agents (top 10 % of the population size) are selected and copied into the next generation.\nThe remaining 90 % of the generation is repopulated with mutated copies of the best-performing agents. We mutate agents by adding independent Gaussian noise (σ = 0.1) to its parameters. To start with, we consider a static agent whose goal is to identify the value of presented food correctly. The static reward-prediction network quickly evolves the parameters of the learning rule, successfully solving the prediction task.\nWe first look at the evolved learning rate η p , which determines how fast (if at all) the network's weight vector is updated during the lifetime of the agents. We identify three factors that control the learning rate parameter the EA converges to: the distance between the environments, the noisiness of the reward, and the rate of environmental transition.\nThe first natural factor is the distance d e between the two environments, with a larger distance requiring a higher learning rate, Fig. . This is an expected result since the convergence time to the \"correct\" weights is highly dependent on the initial conditions. If an agent is born at a point very close to optimality, which naturally happens if the environments are similar, the distance it needs to traverse on the fitness landscape is small.\nTherefore it can afford to have a small learning rate, which leads to a more stable convergence and is not affected by noise. A second parameter that impacts the learning rate is the variance of the rewards. The reward an agent receives for the plasticity step contains a noise term ξ that is drawn from a zero mean Gaussian distribution with standard deviation σ.\nThis parameter controls the unreliability of the agent's sensory system, i.e., higher σ means that the information the agent gets about the value of the foods it consumes cannot be fully trusted to reflect the actual value of the foods. As σ increases, the learning rate η p decreases, which means that the more unreliable an environment becomes, the less an agent relies on plasticity to update its weights, Fig. 
.\nIndeed for some combinations of relatively small distance d e and high reward variance σ, the EA converges to a learning rate of η p ≈ 0. This means that the agent opts to have no adaptation during its lifetime and remain at the mean of the two environments. It is an optimal solution when the expected loss due to ignoring the environmental transitions is, on average, lower than the loss the plastic network will incur by learning via the (often misleading because of the high σ) environmental cues.\nA final factor that affects the learning rate the EA will converge to is the frequency of environmental change during an agent's lifetime. Since the environmental change is modeled as a simple, two-state Markov process (Fig. ), the control parameter is the transition probability p tr . When keeping everything else the same, the learning rate rapidly rises as we increase the transition probability from 0, and after reaching a peak, it begins to decline slowly, eventually reaching zero (Fig. ).\nThis means that when environmental transition is very rare, agents opt for a very low learning rate, allowing a slow and stable convergence to an environment-appropriate weight vector that leads to very low losses while the agent remains in that environment. As the rate of environmental transition increases, faster learning is required to speed up convergence in order to exploit the (comparatively shorter) stays in each environment.\nFinally, as the environmental transition becomes too fast, the agents opt for slower or even no learning, which keeps them ) and the decision (b.) tasks, for a variety of parameters (p tr = 0.01, d e ∈ 0, 0.1, . . . , 1, and σ ∈ 0, 0.1, . . . , 1 in all 100 combinations). Despite the relatively small difference between the tasks, the evolved learning rules differ considerably.\nFor visual guidance, the lines connect θs from the same run. near the middle of the two environments, ensuring that the average loss of the two environments is minimal (Fig. ). The form of the evolved learning rule depends on the task: Decision vs. Prediction The plasticity parameters θ = (θ 1 , . . . , θ 8 ) for the rewardprediction task converge on approximately the same point, regardless of the environmental parameters (Fig. ).\nIn particular, θ 3 → 1, θ 5 → −1, θ i → 0 for all other i, and thus the learning rule converges to: Since by definition y t = g(W t X T t ) = W t X T t (g(x) = x in this experiment) and R t = W c X T t + ξ we get: Thus the distribution of ∆W t converges to a distribution with mean 0 and variance depending on η p and σ and W converges to W c .\nSo this learning rule will match the agent's weight vector with the vector of ingredient values in the environment. We examine the robustness of the learning rule the EA discovers by considering a slight modification of our task. Instead of predicting the expected food value, the agent now needs to decide whether to eat the presented food or not.\nThis is done by introducing a step-function nonlinearity (g(x) = 1 if x ≥ 1 and 0 otherwise). Then the output y(t) is computed as: Instead of the MSE loss between prediction and actual value, the fitness of the agent is now defined as the sum of the food values it chose to consume (by giving y t = 1). Besides these two changes, the setup of the experiments remains exactly the same.\nThe qualitative relation between η p and parameters of environment d e , σ and p tr is preserved in the changed experiment. However, the resulting learning rule is significantly different (Fig. ). 
The evolution converges to the following learning rule: In both cases, the rule has the form ∆W t = η p X t [α y R t + β y ].\nThus, the ∆W t is positive or negative depending on whether the reward R t is above or below a threshold (γ = −β y /α y ) that depends on the output decision of the network (y t = 0 or 1). Both learning rules (for the reward-prediction and decision tasks) have a clear Hebbian form (coordination of preand post-synaptic activity) and use the incoming reward signal as a threshold.\nThese similarities indicate some common organizing principles of reward-modulated learning rules, but their significant differences highlight the sensitivity of the optimization process to task details. We now turn to the moving embodied agents in the 2D environment. To optimize these agents, both the motor network's connections and the sensory network's plasticity parameters evolve simultaneously.\nSince the motor network is initially random and the agent has to move to find food, the number of interactions an agent experiences in its lifetime can be small, slowing down the learning. However, having the larger motor network also has benefits for evolution because it allows the output of the plastic network to be read out and transformed in different ways, resulting in a broad set of solutions.\nThe fitness of an agent (measured as the total food consumed over its lifetime) increases over generations of the EA for both the scalar and binary readouts in the sensory network. e. The Pearson correlation coefficient of an evolved agent's weights with the ingredient value vector of the current environment (E 1 -blue, E 2 -red).\nIn this example, the agent's weights are anti-correlated with its environment, which is not an issue for performance since the motor network can interpret the inverted signs of food. The agents can solve the task effectively by evolving a functional motor network and a plasticity rule that converges to interpretable weights (Fig. ).\nAfter ∼ 100 evolutionary steps (Fig. ), the agents can learn the ingredient value distribution using the plastic network and reliably move towards foods with positive values while avoiding the ones with negative values. We compare the dependence of the moving and the static agents on the parameters of the environment: d e and the state transition probability p tr .\nAt first, in order to simplify the experiment, we set the transition probability to 0, but fixed the initial weights to be the average of E 1 and E 2 , while the real state is E 2 . In this experiment, the distance between states d e indicates twice the distance between the agent's initial weights and the optimal weights (the environment's ingredient values) since the agent is initialized at the mean of the two environment distributions.\nSame as for the static agent, the learning rate increases with the distance d e (Fig. ). Then, we examine the effect of the environmental transition probability p tr on the evolved learning rate η p . In order for an agent to get sufficient exposure to each environment, we scale down the probability p tr from the equivalent experiment for the static agents.\nWe find that as the probability of transition increases, the evolved learning rate η p decreases (Fig. ). 
This fits with the larger trend for the static agent, although there is a clear difference when it comes to the increase for very small transition probabil-ities that were clearly identifiable in the static but not the moving agents.\nThis could be due to much sparser data and possibly the insufficiently long lifetime of the moving agent (the necessity of scaling makes direct comparisons difficult). Nevertheless, overall we see that the associations observed in the static agents between environmental distance d e and transition probability p tr and the evolved learning rate η p are largely maintained in the moving agents.\nStill, more data would be needed to make any conclusive assertions about the exact effect of these environmental parameters on the emerging plasticity mechanisms. A crucial difference between the static and the moving agents is the function the plasticity has to perform. While in the static agents, the plasticity has to effectively identify the exact value distribution of the environment in order to produce accurate predictions, in the embodied agents, the plasticity has to merely produce a representation of the environment that the motor network can evolve to interpret adequately enough to make decisions about which food to consume.\nTo illustrate the difference, we plot the Pearson correlation coefficient between an agent's weights and the ingredient values of the environment it is moving in (Fig. ). We use the correlation instead of the MSE loss (which we used for the static agents in Fig. ) because the amplitude of the vector varies a lot for different agents and meaningful The evolved parameters of moving agents' plasticity rule for the g(s) = x, identity (a.) and the step function (Eq.\n4) (b.) sensory networks (the environmental parameters here are d e ∈ [0, 1], σ = 0 and p tr = 0.001). The step function (binary output) network evolved a more structured plasticity rule (e.g., θ 3 > 0 for all realizations) than the linear network. Moreover, the learned weights for the identity network (c.) have higher variance and correlate significantly less with the environment's ingredient distribution compared to the learned weights for the thresholded network (d.)\nconclusions cannot be drawn from the MSE loss. For many agents, the learned weights are consistently anti-correlated with the actual ingredient values (an example of such an agent is shown in Fig. ). This means that the output of the sensory network will have the opposite sign from the actual food value.\nWhile in the static network, this would lead to very bad predictions and high loss, in the foraging task, these agents perform exactly as well as the ones where the weights and ingredients values are positively correlated, since the motor network can simply learn to move towards food for which it gets a negative instead of a positive sensory input.\nThis additional step of the output of the plastic network going through the motor network before producing any behavior has a strong effect on the plasticity rules that the embodied agents evolve. Specifically, if we look at the emerging rules the top performing agents have evolved (Fig. ), it becomes clear that, unlike the very well-structured rules of the static agents (Fig. ), there is now virtually no discernible pattern or structure.\nThe difference becomes even clearer if we look at the learned weights (at the end of a simulation) of the best-performing agents (Fig. ). 
While there is some correlation with the environment's ingredient value distribution, the variance is very large, and they do not seem to converge on the \"correct\" values in any way.\nThis is to some extent expected since, unlike the static agents where the network's output has to be exactly correct, driving the evolution of rules that converge to the precise environmental distribution, in the embodied networks, the bulk of the processing is done by the motor network which can evolve to interpret the scalar value of the sensory network's output in a variety of ways.\nThus, as long as the sensory network's plasticity rule co-evolves with the motor network, any plasticity rule that learns to produce consistent information about the value of encountered food can potentially be selected. To further test this assumption, we introduce a bottleneck of information propagation between the sensory and motor networks by using a step-function nonlinearity on the output of the sensory network (Eq.\n4). Similarly to the decision task of the static network, the output of the sensory network now becomes binary. This effectively reduces the flow of information from the sensory to the motor network, forcing the sensory network to consistently decide whether food should be consumed (with the caveat that the motor network can still interpret the binary sign in either of two ways, either consuming food marked with 1 or the ones marked with 0 by the sensory network).\nThe agents perform equally well in this variation of the task as before (Fig. ), but now, the evolved plasticity rules seem to be more structured (Fig. ). Moreover, the variance of the learned weights in the bestperforming agents is significantly reduced (Fig. ), which indicates that the bottleneck in the sensory network is in-creasing selection pressure for rules that learn the environment's food distribution accurately.\nWe find that different sources of variability have a strong impact on the extent to which evolving agents will develop neuronal plasticity mechanisms for adapting to their environment. A diverse environment, a reliable sensory system, and a rate of environmental change that is neither too large nor too small are necessary conditions for an agent to be able to effectively adapt via synaptic plasticity.\nAdditionally, we find that minor variations of the task an agent has to solve or the parametrization of the network can give rise to significantly different plasticity rules. Our results partially extend to embodied artificial agents performing a foraging task. We show that environmental variability also pushes the development of plasticity in such agents.\nStill, in contrast to the static agents, we find that the interaction of a static motor network with a plastic sensory network gives rise to a much greater variety of wellfunctioning learning rules. We propose a potential cause of this degeneracy; as the relatively complex motor network is allowed to read out and process the outputs from the plastic network, any consistent information coming out of these outputs can be potentially interpreted in a behaviorally useful way.\nReducing the information the motor network can extract from the sensory system significantly limits learning rule variability. 
Our findings on the effect of environmental variability concur with the findings of previous studies that have identified the constraints that environmental variability places on the evolutionary viability of learning behaviors.\nWe extend these findings in a mechanistic model which uses a biologically plausible learning mechanism (synaptic plasticity). We show how a simple evolutionary algorithm can optimize the different parameters of a simple reward-modulated plasticity rule for solving simple prediction and decision tasks.\nReward-modulated plasticity has been extensively studied as a plausible mechanism for credit assignment in the brain ; ; and has found several applications in artificial intelligence and robotics tasks ; . Here, we demonstrate how such rules can be very well-tuned to take into account different environmental parameters and produce optimal behavior in simple systems.\nAdditionally, we demonstrate how the co-evolution of plasticity and static functional connectivity in different subnetworks fundamentally changes the evolutionary pressures on the resulting plasticity rules, allowing for greater diversity in the form of the learning rule and the resulting learned connectivity.\nSeveral studies have demonstrated how, in biological networks, synaptic plasticity heavily interacts with and is driven by network topology . Moreover, it has been recently demonstrated that biological plasticity mechanisms are highly redundant in the sense that any observed neural connectivity or recorded activity can be achieved with a variety of distinct, unrelated learning rules .\nThis observed redundancy of learning rules in biological settings complements our results and suggests that the function of plasticity rules cannot be studied independently of the connectivity and topology of the networks they are acting on. The optimization of functional plasticity in neural networks is a promising research direction both as a means to understand biological learning processes and as a tool for building more autonomous artificial systems.\nOur results suggest that reward-modulated plasticity is highly adaptable to different environments and can be incorporated into larger systems that solve complex tasks. This work studies a simplified toy model of neural network learning in stochastic environments. Future work could be built on this basic framework to examine more complex reward distributions and sources of environmental variability.\nMoreover, a greater degree of biological realism could be added by studying more plausible network architectures (multiple plastic layers, recurrent and feedback connections) and more sophisticated plasticity rule parametrizations. 
Additionally, our foraging simulations were constrained by limited computational resources and were far from exhaustive.\nFurther experiments can investigate environments with different constraints, food distributions, multiple seasons, more complex motor control systems and interactions of those systems with different sensory networks as well as the inclusion of plasticity on the motor parts of the artificial organisms.", "answers": ["Environmental fluctuation and uncertainty control the reliance of artificial organisms on plasticity."], "length": 5339, "dataset": "multifieldqa_en_e", "language": "en", "all_classes": null, "_id": "79ffdceb9859803e365c3de5d24c187ed06b15f04d04ae6a"} {"input": "What kind of ultracold neutral plasmas does this study focus on?", "context": "\\section{Introduction}\n\nUltracold neutral plasmas studied in the laboratory offer access to a regime of plasma physics that scales to describe thermodynamic aspects of important high-energy-density systems, including strongly coupled astrophysical plasmas \\cite{VanHorn,Burrows}, as well as terrestrial sources of neutrons \\cite{Hinton,Ichimaru_fusion,Atzeni,Boozer} and x-ray radiation \\cite{Rousse,Esarey}. Yet, under certain conditions, low-temperature laboratory plasmas evolve with dynamics that are governed by the quantum mechanical properties of their constituent particles, and in some cases by coherence with an external electromagnetic field. \n\nThe relevance of ultracold plasmas to such a broad scope of problems in classical and quantum many-body physics has given rise to a great deal of experimental and theoretical research on these systems since their discovery in the late 90s. A series of reviews affords a good overview of progress in the last twenty years \\cite{Gallagher,Killian_Science,PhysRept,Lyon}. Here, we focus on the subset of ultracold neutral plasmas that form via kinetic rate processes from state-selected Rydberg gases, and emphasize in particular the distinctive dynamics found in the evolution of molecular ultracold plasmas. \n\nWhile molecular beam investigations of threshold photoionization spectroscopy had uncovered relevant effects a few years earlier \\cite{Scherzer,Alt}, the field of ultracold plasma physics began in earnest with the 1999 experiment of Rolston and coworkers on metastable xenon atoms cooled in a magneto optical trap (MOT) \\cite{Killian}. \n\nThis work and many subsequent efforts tuned the photoionization energy as a means to form a plasma of very low electron temperature built on a strongly coupled cloud of ultracold ions. Experiment and theory soon established that fast processes associated with disorder-induced heating and longer-time electron-ion collisional rate processes act to elevate the ion temperatures to around one degree Kelvin, and constrain the effective initial electron temperature to a range above 30 K \\cite{Kuzmin,Hanson,Laha}. \n\nThis apparent limit on the thermal energy of the electrons can be more universally expressed for an expanding plasma by saying that the electron correlation parameter, $\\Gamma_e$, does not exceed 0.25, where, \n\\begin{equation}\n\\Gamma_e = \\frac{e^2}{4\\pi \\epsilon_0 a_{ws}}\\frac{1}{k_B T_e}\n\\label{eqn:gamma_e}\n\\end{equation}\ndefines the ratio of the average unscreened electron-electron potential energy to the electron kinetic energy. $a_{ws}$ is the Wigner-Seitz radius, related to the electron density by, $\\rho_e = 1/(\\frac{4}{3} \\pi a_{ws}^3)$. 
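As a worked example of these definitions, the electron coupling parameter can be evaluated directly from a density and temperature. The sketch below uses SI constants from SciPy; the example numbers are illustrative rather than taken from any specific experiment.

```python
import numpy as np
from scipy import constants as c

def wigner_seitz_radius(density_m3):
    """a_ws defined by density = 1 / ((4/3) * pi * a_ws**3)."""
    return (3.0 / (4.0 * np.pi * density_m3)) ** (1.0 / 3.0)

def gamma_e(density_m3, T_e):
    """Electron coupling parameter: unscreened e-e potential energy at a_ws
    divided by the electron kinetic energy scale k_B * T_e."""
    a_ws = wigner_seitz_radius(density_m3)
    return c.e**2 / (4.0 * np.pi * c.epsilon_0 * a_ws) / (c.k * T_e)

# e.g. an electron density of 1e10 cm^-3 (1e16 m^-3) at T_e = 30 K
print(gamma_e(1e16, 30.0))   # roughly 0.2, i.e. weakly coupled electrons
```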
These plasmas of weakly coupled electrons and strongly coupled ions have provided an important testing ground for ion transport theory and the study of electron-ion collision physics \\cite{Strickler}.\n\nSoon after the initial reports of ultracold plasmas formed by direct photoionization, a parallel effort began with emphasis on the plasma that forms spontaneously by Penning ionization and electron-impact avalanche in a dense ultracold Rydberg gas \\cite{Mourachko}. This process affords less apparent control of the initial electron temperature. But, pulsed field-ionization measurements soon established that the photoionized plasma and that formed by the avalanche of a Rydberg gas both evolve to quasi-equilibria of electrons, ions and high-Rydberg neutrals \\cite{Rolston_expand,Gallagher}. \n\nEarly efforts to understand plasmas formed by Rydberg gas avalanche paid particular attention to the process of initiation. Evolution to plasma in effusive atomic beams was long known for high-Rydberg gases of caesium and well explained by coupled rate equations \\cite{Vitrant}. But, low densities and ultracold velocity distributions were thought to exclude Rydberg-Rydberg collisional mechanisms in a MOT. \n\nIn work on ultracold Rydberg gases of Rb and Cs, Gallagher, Pillet and coworkers describe the initial growth of electron signal by a model that includes ionization by blackbody radiation and collisions with a background of uncooled Rydberg atoms \\cite{Mourachko,Gallagher,Li,Comparat,Tanner}. This picture was subsequently refined to include many-body excitation and autoionization, as well as attractive dipole-dipole interactions \\cite{Viteau,Pillet}, later confirmed by experiments at Rice \\cite{Mcquillen}. \n\nThe Orsay group also studied the effect of adding Rydberg atoms to an established ultracold plasma. They found that electron collisions in this environment completely ionize added atoms, even when selected to have deep binding energies \\cite{Vanhaecke}. They concluded from estimates of electron trapping efficiency that the addition of Rydberg atoms does not significantly alter the electron temperature of the plasma. \n\nTuning pair distributions by varying the wavelength of the excitation laser, Weidem\\\"uller and coworkers confirmed the mechanical effects of van der Waals interactions on the rates of Penning ionization in ultracold $^{87}$Rb Rydberg gases \\cite{Amthor_mech}. They recognized blackbody radiation as a possible means of final-state redistribution, and extended this mechanical picture to include long-range repulsive interactions \\cite{Amthor_model}. This group later studied the effects of spatial correlations in the spontaneous avalanche of Rydberg gases in a regime of strong blockade, suggesting a persistence of initial spatial correlations \\cite{RobertdeSaintVincent}. \n\nRobicheaux and coworkers have recently investigated the question of prompt many-body ionization from the point of view of Monte Carlo classical trajectory calculations \\cite{Goforth}. For atoms on a regular or random grid driven classically by an electromagnetic field, they find that many-body excitation enhances prompt ionization by about twenty percent for densities greater than $5.6 \\times 10^{-3}/(n_0^2 a_0)^3$, where $n_0$ is the principal quantum number of the Rydberg gas and $a_0$ is the Bohr radius. 
They observed that density fluctuations (sampled from the distribution of nearest neighbour distances) have a greater effect, and point to the possible additional influence of secondary electron-Rydberg collisions and the Penning production of fast atoms not considered by the model, but already observed by Raithel and coworkers \\cite{Knuffman}. \n\nThe Raithel group also found direct evidence for electron collisional $\\ell$-mixing in a Rb MOT \\cite{Dutta}, and used selective field ionization to monitor evolution to plasma on a microsecond timescale in ultracold $^{85}$Rb $65d$ Rydberg gases with densities as low as $10^8$ cm$^{-3}$ \\cite{WalzFlannigan}. Research by our group at UBC has observed very much the same dynamics in the relaxation of Xe Rydberg gases of similar density prepared in a molecular beam \\cite{Hung2014}. In both cases, the time evolution to avalanche is well-described by coupled rate equations (see below), assuming an initializing density of Penning electrons determined by Robicheaux's criterion \\cite{Robicheaux05}, applied to an Erlang distribution of Rydberg-Rydberg nearest neighbours. \n\nTheoretical investigations of ultracold plasma physics have focused for the most part on the long- and short-time dynamics of plasmas formed by direct photoionization \\cite{PhysRept,Lyon}. In addition to studies mentioned above, key insights on the evolution dynamics of Rydberg gases have been provided by studies of Pohl and coworkers exploring the effects of ion correlations and recombination-reionization on the hydrodynamics of plasma expansion \\cite{Pohl:2003,PPR}. Further research has drawn upon molecular dynamics (MD) simulations to reformulate rate coefficients for the transitions driven by electron impact between highly excited Rydberg states \\cite{PVS}, and describe an effect of strong coupling as it suppresses three-body recombination \\cite{Bannasch:2011}. MD simulations confirm the accuracy of coupled rate equation descriptions for systems with $\\Gamma$ as large as 0.3. Newer calculations suggest a strong connection between the order created by dipole blockade in Rydberg gases and the most favourable correlated distribution of ions in a corresponding strongly coupled ultracold plasma \\cite{Bannasch:2013}. \n\nTate and coworkers have studied ultracold plasma avalanche and expansion theoretically as well as experimentally. Modelling observed expansion rates, they recently found that $^{85}$Rb atoms in a MOT form plasmas with effective initial electron temperatures determined by initial Rydberg density and the selected initial binding energy, to the extent that these parameters determine the fraction of the excited atoms that ionize by electron impact in the avalanche to plasma \\cite{Forest}. This group also returned to the question of added Rydberg atoms, and managed to identify a crossover in $n_0$, depending on the initial electron temperature, that determines whether added Rydberg atoms of a particular initial binding energy act to heat or cool the electron temperature \\cite{Crockett}. \n\nOur group has focused on the plasma that evolves from a Rydberg gas under the low-temperature conditions of a skimmed, seeded supersonic molecular beam. 
In work on nitric oxide starting in 2008 \\cite{Morrison2008,Plasma_expan,Morrison_shock,PCCP}, we established an initial kinetics of electron impact avalanche ionization that conforms with coupled rate equation models \\cite{Saquet2011,Saquet2012,Scaling,haenelCP} and agrees at early times with the properties of ultracold plasmas that evolve from ultracold atoms in a MOT. We have also observed unique properties of the NO ultracold plasma owing to the fact that its Rydberg states dissociate \\cite{Haenel2017}, and identified relaxation pathways that may give rise to quantum effects \\cite{SousMBL,SousNJP}. The remainder of this review focuses on the nitric oxide ultracold plasma and the unique characteristics conferred by its evolution from a Rydberg gas in a laser-crossed molecular beam. \n\n\n\\section{Avalanche to strong coupling in a molecular Rydberg gas}\n\n\\subsection{The molecular beam ultracold plasma compared with a MOT}\n\nWhen formed with sufficient density, a Rydberg gas of principal quantum number $n_0>30$ undergoes a spontaneous avalanche to form an ultracold plasma \\cite{Li,Morrison2008,RobertdeSaintVincent}. Collisional rate processes combine with ambipolar hydrodynamics to govern the properties of the evolving plasma. For a molecular Rydberg gas, neutral fragmentation, occurs in concert with electron-impact ionization, three-body recombination and electron-Rydberg inelastic scattering. Neutral dissociation combined with radial expansion in a shaped distribution of charged particles, can give rise to striking effects of self-assembly and spatial correlation \\cite{Schulz-Weiling2016,Haenel2017}. \n\nThe formation of a molecular ultracold plasma requires the conditions of local temperature and density afforded by a high mach-number skimmed supersonic molecular beam. Such a beam propagates at high velocity in the laboratory, with exceedingly well-defined hydrodynamic properties, including a propagation-distance-dependent density and sub-Kelvin temperature in the moving frame \\cite{MSW_tutorial}. The low-temperature gas in a supersonic molecular beam differs in three important ways from the atomic gas laser-cooled in a magneto-optical trap (MOT).\n\nThe milli-Kelvin temperature of the gas of ground-state NO molecules entrained in a beam substantially exceeds the sub-100 micro-Kelvin temperature of laser-cooled atoms in a MOT. However, the evolution to plasma tends to erase this distinction, and the two further characteristics that distinguish a beam offer important advantages for ultracold plasma physics: Charged-particle densities in a molecular beam can exceed those attainable in a MOT by orders of magnitude. A great many different chemical substances can be seeded in a free-jet expansion, and the possibility this affords to form other molecular ultracold plasmas, introduces interesting and potentially important new degrees of freedom governing the dynamics of their evolution.\n\n\n\\subsection{Supersonic molecular beam temperature and particle density}\n\nSeeded in a skimmed supersonic molecular beam, nitric oxide forms different phase-space distributions in the longitudinal (propagation) and transverse coordinate dimensions. As it propagates in $z$, the NO molecules reach a terminal laboratory velocity, $u_{\\parallel}$, of about 1400 ${\\rm ms^{-1}}$, which varies with the precise seeding ratio. \n\nThe distribution of $v_{\\parallel}$, narrows to define a local temperature, $T_{\\parallel}$, of approximately 0.5 K. 
The beam forms a Gaussian spatial distribution in the transverse coordinates, $x$ and $y$. In this plane, the local velocity, $v_{\perp}(r)$, is defined for any radial distance almost entirely by the divergence velocity of the beam, $u_{\perp}(r)$. Phase-space sorting cools the temperature in the transverse coordinates, $T_{\perp}$, to a value as low as $\sim 5$ mK \cite{MSW_tutorial}.

The stagnation pressure and seeding ratio determine the local density distribution as a function of $z$. For example, expanding from a stagnation pressure of 500 kPa with a 1:10 seeding ratio, a molecular beam propagates 2.5 cm to a skimmer and then 7.5 cm to a point of laser interaction, where it contains NO at a peak density of $1.6 \times 10^{14}$ cm$^{-3}$.

Here, crossing the molecular beam with a laser beam tuned to the transition sequence, ${\rm X} ~^2 \Pi_{1/2} ~N'' = 1 \xrightarrow{\omega_1} {\rm A} ~^2\Sigma^+ ~N'=0 \xrightarrow{\omega_2} n_0 f(2)$, forms a Gaussian ellipsoidal volume of Rydberg gas in a single selected principal quantum number, $n_0$, orbital angular momentum, $\ell = 3$, NO$^+$ core rotational quantum number, $N^+ = 2$, and total angular momentum neglecting spin, $N=1$.

A typical $\omega_1$ pulse energy of 2 $\mu$J and a Gaussian width of 0.2 mm serve to drive the first step of this sequence in a regime of linear absorption. Overlapping this volume with an $\omega_2$ pulse of sufficient fluence to saturate the second step forms a Rydberg gas ellipsoid with a nominal peak density of $5 \times 10^{12}$ cm$^{-3}$ \cite{Morrison2008,MSW_tutorial}. Fluctuations in the pulse energy and longitudinal mode of $\omega_1$ cause the actual density to vary. For certain experiments, we find it convenient to saturate the $\omega_1$ transition and vary the density of the Rydberg gas by delaying $\omega_2$. An $\omega_1$-$\omega_2$ delay, $\Delta t$, reduces the Rydberg gas density by a precise factor, $e^{-\Delta t/\tau}$, where $\tau$ is the 200 ns radiative lifetime of NO ${\rm A} ~^2\Sigma^+ ~N'=0$ \cite{Carter,Hancock}.


\subsection{Penning ionization}

The density distribution of a Rydberg gas defines a local mean nearest-neighbour distance, or Wigner-Seitz radius, $a_{ws} = \left( \frac{3}{4 \pi \rho} \right)^{1/3}$, where $\rho$ refers to the local Rydberg gas density. For example, a Rydberg gas with a density of $\rho_0 = 0.5 \times 10^{12}$ cm$^{-3}$ forms an Erlang distribution \cite{Torquato.1990} of nearest-neighbour separations with a mean value of $2 a_{ws} = 1.6$ $\mu$m.

A semi-classical model \cite{Robicheaux05} suggests that 90 percent of Rydberg molecule pairs separated by a critical distance, $r_c = 1.8 \cdot 2 n_0^2 a_0$, or less undergo Penning ionization within 800 Rydberg periods.
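To put these length scales in concrete numbers, the short Python sketch below evaluates the Wigner-Seitz radius for the example density quoted above, together with the critical Penning distance $r_c$ for a representative principal quantum number ($n_0 = 50$ here is an illustrative choice, not a value taken from the text).
\begin{verbatim}
import math

A0 = 5.29177e-11    # Bohr radius (m)

def wigner_seitz(rho_cm3):
    # Wigner-Seitz radius a_ws = (3 / (4 pi rho))^(1/3); density given in cm^-3
    rho_m3 = rho_cm3 * 1e6
    return (3.0 / (4.0 * math.pi * rho_m3)) ** (1.0 / 3.0)

def penning_radius(n0):
    # Critical distance r_c = 1.8 * 2 n0^2 a0 for prompt Penning ionization
    return 1.8 * 2.0 * n0 ** 2 * A0

a_ws = wigner_seitz(0.5e12)
print(2.0 * a_ws * 1e6)            # mean nearest-neighbour separation: ~1.6 um
print(penning_radius(50) * 1e6)    # r_c for n0 = 50: ~0.5 um
\end{verbatim}
For this density, $2a_{ws} \approx 1.6$ $\mu$m, consistent with the value quoted above, while $r_c \approx 0.5$ $\mu$m at $n_0 = 50$, so only the close-in tail of the nearest-neighbour distribution undergoes prompt Penning ionization.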
We can integrate the Erlang distribution from $r=0$ to the critical distance $r = r_c$ for a Rydberg gas of given $n_0$, to define the local density of Penning electrons ($\rho_e$ at $t=0$) produced by this prompt interaction, for any given initial local density, $\rho_0$, by the expression:
\begin{equation}
\rho_e(\rho_0,n_0) = \frac{0.9}{2} \cdot 4 \pi \rho_0^2 \int_0^{r_{c}} r^2 \mathrm{e}^{-\frac{4\pi}{3}\rho_0 r^3}\mathrm{d}r \quad.
\label{eqn:Erlang}
\end{equation}

Evaluating this definite integral yields a closed-form expression that predicts the Penning electron density for any particular initial Rydberg density and principal quantum number:
\begin{equation}
\rho_e(\rho_0,n_0) = \frac{0.9 \rho_0}{2}\left(1-\mathrm{e}^{-\frac{4\pi}{3}\rho_0 r_c^3}\right) \quad.
\label{Eq:PenDens}
\end{equation}

\begin{figure}[h!]
\centering
\includegraphics[scale=0.33]{Penning_Latice.pdf}
\caption{Distributions of ion-ion nearest neighbours following Penning ionization and electron-impact avalanche, simulated for a predissociating molecular Rydberg gas of initial principal quantum number, $n_0$, from 30 to 80, and a density of 10$^{12}$ cm$^{-3}$. Dashed lines mark corresponding values of $a_{ws}$. Calculated by counting ion distances after relaxation to plasma in 10$^6$-particle stochastic simulations. Integrated areas are proportional to the populations surviving neutral dissociation.}
\label{fig:PL}
\end{figure}

Prompt Penning ionization acts on the portion of the initial nearest-neighbour distribution in the Rydberg gas that lies within $r_c$. When a molecule ionizes, its collision partner relaxes to a lower principal quantum number, $n'