Linux range of use Besides the Linux distributions designed for general-purpose use on desktops and servers, distributions may be specialized for different purposes including computer architecture support, embedded systems, stability, security, localization to a specific region or language, targeting of specific user groups, support for real-time applications, or commitment to a given desktop environment. Furthermore, some distributions deliberately include only free software. Over four hundred Linux distributions are actively developed, with about a dozen distributions being most popular for general-purpose use. Desktop The popularity of Linux on standard desktop computers and laptops has been increasing over the years. Most modern distributions include a graphical user environment, with the three most popular environments being the KDE Plasma Desktop, Xfce and GNOME. No single official Linux desktop exists: rather, desktop environments and Linux distributions select components from a pool of free and open-source software with which they construct a GUI implementing some more or less strict design guide. GNOME, for example, has its human interface guidelines as a design guide, which give the human–machine interface an important role, not just in graphical design but also in accommodating people with disabilities and in addressing security. The collaborative nature of free software development allows distributed teams to perform language localization of some Linux distributions for use in locales where localizing proprietary systems would not be cost-effective. For example, the Sinhalese language version of the Knoppix distribution became available significantly before Microsoft translated Windows XP into Sinhalese. In this case the Lanka Linux User Group played a major part in developing the localized system by combining the knowledge of university professors, linguists, and local developers. Performance and applications The performance of Linux on the desktop has been a controversial topic; for example, in 2007 Con Kolivas accused the Linux community of favoring performance on servers. He quit Linux kernel development out of frustration with this lack of focus on the desktop, and then gave a "tell all" interview on the topic. Since then a significant amount of development has focused on improving the desktop experience. Projects such as systemd and Upstart (deprecated in 2014) aim for a faster boot time; the Wayland and Mir projects aim at replacing X11 while enhancing desktop performance, security and appearance. Many popular applications are available for a wide variety of operating systems. For example, Mozilla Firefox, OpenOffice.org/LibreOffice and Blender have downloadable versions for all major operating systems. Furthermore, some applications initially developed for Linux, such as Pidgin and GIMP, were ported to other operating systems (including Windows and macOS) due to their popularity. In addition, a growing number of proprietary desktop applications are also supported on Linux, such as Autodesk Maya and The Foundry's Nuke in the high-end field of animation and visual effects; see the list of proprietary software for Linux for more details. There are also several companies that have ported their own or other companies' games to Linux, with Linux also being a supported platform on both the Steam and Desura digital-distribution services. Many other types of applications available for Microsoft Windows and macOS also run on Linux.
Commonly, either a free software application will exist which performs the functions of an application found on another operating system, or that application will have a version that works on Linux, as with Skype and some video games like Dota 2 and Team Fortress 2. Furthermore, the Wine project provides a Windows compatibility layer to run unmodified Windows applications on Linux. It is sponsored by commercial interests including CodeWeavers, which produces a commercial version of the software. Since 2009, Google has also provided funding to the Wine project. CrossOver, a proprietary solution based on the open-source Wine project, supports running Windows versions of Microsoft Office, Intuit applications such as Quicken and QuickBooks, Adobe Photoshop versions through CS2, and many games such as World of Warcraft. In other cases, where there is no Linux port of some software in areas such as desktop publishing and professional audio, there is equivalent software available on Linux. It is also possible to run applications written for Android on other versions of Linux using Anbox. Components and installation Besides the externally visible components, such as X window managers, a non-obvious but quite central role is played by the programs hosted by freedesktop.org, such as D-Bus or PulseAudio; both major desktop environments (GNOME and KDE) include them, each offering graphical front-ends written using the corresponding toolkit (GTK or Qt). A display server is another component, which for a long time has communicated with its clients using the X11 display server protocol; prominent software speaking X11 includes the X.Org Server and Xlib. Frustration over the cumbersome X11 core protocol, and especially over its numerous extensions, has led to the creation of a new display server protocol, Wayland. Installing, updating and removing software on Linux is typically done through the use of package managers such as the Synaptic Package Manager, PackageKit, and Yum Extender. While most major Linux distributions have extensive repositories, often containing tens of thousands of packages, not all the software that can run on Linux is available from the official repositories. Alternatively, users can install packages from unofficial repositories, download pre-compiled packages directly from websites, or compile the source code themselves. All these methods come with different degrees of difficulty; compiling the source code is in general considered a challenging process for new Linux users, but it is hardly needed in modern distributions and is not a method specific to Linux. Netbooks Linux distributions have also become popular in the netbook market, with many devices such as the Asus Eee PC and Acer Aspire One shipping with customized Linux distributions installed. In 2009, Google announced its Chrome OS as a minimal Linux-based operating system, using the Chrome browser as the main user interface. Chrome OS initially did not run any non-web applications, except for the bundled file manager and media player. A certain level of support for Android applications was added in later versions. In 2018, Google added the ability to install any Linux software in a container, enabling Chrome OS to be used like any other Linux distribution. Netbooks that shipped with the operating system, termed Chromebooks, started appearing on the market in June 2011.
Servers, mainframes and supercomputers Linux distributions have long been used as server operating systems, and have risen to prominence in that area; Netcraft reported in September 2006 that eight of the ten most reliable internet hosting companies ran Linux distributions on their web servers (the other two ran an unknown OS), with Linux in the top position. In June 2008, Linux distributions represented five of the top ten, FreeBSD three of ten, and Microsoft two of ten; since February 2010, Linux distributions represented six of the top ten, FreeBSD three of ten, and Microsoft one of ten, with Linux in the top position. Linux distributions are the cornerstone of the LAMP server-software combination (Linux, Apache, MariaDB/MySQL, Perl/PHP/Python), which is one of the more common platforms for website hosting. Linux distributions have become increasingly common on mainframes, partly due to pricing and the open-source model. In December 2009, computer giant IBM reported that it would predominantly market and sell the mainframe-based Enterprise Linux Server. At LinuxCon North America 2015, IBM announced LinuxONE, a series of mainframes specifically designed to run Linux and open-source software. Linux distributions are also dominant as operating systems for supercomputers. As of November 2017, all supercomputers on the TOP500 list run some variant of Linux. Smart devices Several operating systems for smart devices, such as smartphones, tablet computers, home automation, smart TVs (Samsung and LG smart TVs use Tizen and webOS, respectively), and in-vehicle infotainment (IVI) systems (for example, Automotive Grade Linux), are based on Linux. Major platforms for such systems include Android, Firefox OS, Mer and Tizen. Android has become the dominant mobile operating system for smartphones, running on 79.3% of units sold worldwide during the second quarter of 2013. Android is also used on tablets, smart TVs, and in-vehicle navigation systems. Although Android is based on a modified version of the Linux kernel, commentators disagree on whether the term "Linux distribution" applies to it, and whether it is "Linux" according to the common usage of the term. Android is a Linux distribution according to the Linux Foundation, Google's open-source chief Chris DiBona, and several journalists. Others, such as Google engineer Patrick Brady, say that Android is not Linux in the traditional Unix-like Linux distribution sense; Android does not include the GNU C Library (it uses Bionic as an alternative C library) and some other components typically found in Linux distributions. Ars Technica wrote that "Although Android is built on top of the Linux kernel, the platform has very little in common with the conventional desktop Linux stack". Cellphones and PDAs running Linux on open-source platforms became more common from 2007; examples include the Nokia N810, Openmoko's Neo1973, and the Motorola ROKR E8. Continuing the trend, Palm (later acquired by HP) produced a new Linux-derived operating system, webOS, which is built into its line of Palm Pre smartphones. Nokia's Maemo, one of the earliest mobile operating systems, was based on Debian. It was later merged with Intel's Moblin, another Linux-based operating system, to form MeeGo. The project was later terminated in favor of Tizen, an operating system targeted at mobile devices as well as IVI. Tizen is a project within the Linux Foundation. Several Samsung products already run Tizen, Samsung Gear 2 being the most significant example.
Samsung Z smartphones will use Tizen instead of Android. As a result of MeeGo's termination, the Mer project forked the MeeGo codebase to create a basis for mobile-oriented operating systems. In July 2012, Jolla announced Sailfish OS, their own mobile operating system built upon Mer technology. Mozilla's Firefox OS consists of the Linux kernel, a hardware abstraction layer, a web-standards-based runtime environment and user interface, and an integrated web browser. Canonical has released Ubuntu Touch, aiming to bring convergence to the user experience on this mobile operating system and its desktop counterpart, Ubuntu. The operating system also provides a full Ubuntu desktop when connected to an external monitor. The Librem 5 is a smartphone developed by Purism. By default, it runs the company-made Linux-based PureOS, but it can also run other Linux distributions. Like Ubuntu Touch, PureOS is designed with convergence in mind, allowing desktop programs to run on the smartphone; an example is the desktop version of Mozilla Firefox. Another smartphone is the PinePhone, made by the computer manufacturer Pine64. The PinePhone can run a variety of Linux-based operating systems such as Ubuntu Touch and postmarketOS. Embedded devices Due to its low cost and ease of customization, Linux is often used in embedded systems. In the non-mobile telecommunications equipment sector, the majority of customer-premises equipment (CPE) hardware runs some Linux-based operating system. OpenWrt is a community-driven example upon which many of the OEM firmware releases are based. For example, the TiVo digital video recorder uses a customized Linux, as do several network firewalls and routers from such makers as Cisco/Linksys. The Korg OASYS, the Korg KRONOS, the Yamaha Motif XS/Motif XF music workstations, Yamaha S90XS/S70XS, Yamaha MOX6/MOX8 synthesizers, Yamaha Motif-Rack XS tone generator module, and Roland RD-700GX digital piano also run Linux. Linux is also used in stage lighting control systems, such as the WholeHogIII console. Gaming In the past, few games were available for Linux. In recent years, more games have been released with support for Linux (especially indie games), though few AAA titles support the platform. Android, a mobile platform which uses the Linux kernel, has gained much developer interest and is one of the main platforms for mobile game development, along with iOS, Apple's operating system for iPhone and iPad devices. On February 14, 2013, Valve released a Linux version of Steam, a popular game distribution platform for PCs. Many Steam games were ported to Linux. On December 13, 2013, Valve released SteamOS, a gaming-oriented OS based on Debian, for beta testing, and had plans to ship Steam Machines as a gaming and entertainment platform. Valve has also developed VOGL, an OpenGL debugger intended to aid video game development, and has ported its Source game engine to desktop Linux. As a result of Valve's efforts, several prominent games such as Dota 2, Team Fortress 2, Portal, Portal 2 and Left 4 Dead 2 are now natively available on desktop Linux. On July 31, 2013, Nvidia released Shield as an attempt to use Android as a specialized gaming platform. Some Linux users play Windows-based games using Wine or CrossOver Linux. On August 22, 2018, Valve released their own fork of Wine called Proton, aimed at gaming.
It features some improvements over vanilla Wine, such as Vulkan-based DirectX 11 and 12 implementations, Steam integration, better full-screen and game-controller support, and improved performance for multi-threaded games. In 2021, ProtonDB, an online aggregator of games supporting Linux, stated that 78% of the top thousand games on Steam were able to run on Linux using either Proton or a native port. On February 25, 2022, Valve released the Steam Deck, a handheld gaming console running the Arch Linux-based operating system SteamOS 3.0. Specialized uses Due to its flexibility, customizability and free and open-source nature, Linux can be highly tailored towards a specific purpose. There are two main methods for assembling a specialized Linux distribution: building from scratch or building on a general-purpose distribution as a base. The distributions often used as a base for this purpose include Debian, Fedora, Ubuntu (which is itself based on Debian), Arch Linux, Gentoo, and Slackware. In contrast, Linux distributions built from scratch do not have general-purpose bases; instead, they follow the JeOS philosophy by including only necessary components and avoiding the resource overhead caused by components considered redundant in the distribution's use cases. Home theater PC A home theater PC (HTPC) is a PC that is mainly used as an entertainment system, especially a home theater system. It is normally connected to a television, and often to an additional audio system. OpenELEC, a Linux distribution that incorporates the media center software Kodi, is an OS tuned specifically for an HTPC. Having been built from the ground up adhering to the JeOS principle, the OS is very lightweight and very suitable for the confined usage range of an HTPC. There are also special editions of Linux distributions that include the MythTV media center software, such as Mythbuntu, a special edition of Ubuntu. Digital security Kali Linux is a Debian-based Linux distribution designed for digital forensics and penetration testing. It comes preinstalled with several software applications for penetration testing and identifying security exploits. The Ubuntu derivative BackBox provides pre-installed security and network analysis tools for ethical hacking. The Arch-based BlackArch includes over 2100 tools for penetration testing and security research. There are many Linux distributions created with privacy, secrecy, network anonymity and information security in mind, including Tails, Tin Hat Linux and Tinfoil Hat Linux. Lightweight Portable Security is a distribution based on Arch Linux and developed by the United States Department of Defense. Tor-ramdisk is a minimal distribution created solely to host the network anonymity software Tor. System rescue Linux Live CD sessions have long been used as a tool for recovering data from a broken computer system and for repairing the system. Building upon that idea, several Linux distributions tailored for this purpose have emerged, most of which use GParted as a partition editor, with additional data recovery and system repair software: GParted Live, a Debian-based distribution developed by the GParted project; Parted Magic, a commercial Linux distribution; and SystemRescueCD, an Arch-based distribution with support for editing the Windows registry. In space SpaceX uses multiple redundant flight computers in a fault-tolerant design in its Falcon 9 rocket. Each Merlin engine is controlled by three voting computers, with two physical processors per computer that constantly check each other's operation.
Linux is not inherently fault-tolerant (no operating system is, as fault tolerance is a function of the whole system including the hardware), but the flight computer software makes it so for its purpose. For flexibility, commercial off-the-shelf parts and a system-wide "radiation-tolerant" design are used instead of radiation-hardened parts. SpaceX has conducted over 76 launches of the Falcon 9 since 2010, out of which all but one have successfully delivered their primary payloads to the intended orbit, and has used it to transport astronauts to the International Space Station. The Dragon 2 crew capsule also uses Linux. Windows was deployed as the operating system on non-mission-critical laptops used on the space station, but it was later replaced with Linux. Robonaut 2, the first humanoid robot in space, is also Linux-based. The Jet Propulsion Laboratory has used Linux for a number of years "to help with projects relating to the construction of unmanned space flight and deep space exploration"; NASA uses Linux in robotics in the Mars rover, and Ubuntu Linux to "save data from satellites". Education Linux distributions have been created to provide hands-on experience with coding and source code to students, on devices such as the Raspberry Pi. In addition to producing a practical device, the intention is to show students "how things work under the hood". The Ubuntu derivatives Edubuntu and The Linux Schools Project, as well as the Debian derivative Skolelinux, provide education-oriented software packages. They also include tools for administering and building school computer labs and computer-based classrooms, such as the Linux Terminal Server Project (LTSP). Others Instant WebKiosk and Webconverger are browser-based Linux distributions often used in web kiosks and digital signage. Thinstation is a minimalist distribution designed for thin clients. Rocks Cluster Distribution is tailored for high-performance computing clusters. There are general-purpose Linux distributions that target a specific audience, such as users of a specific language or geographical area. Such examples include Ubuntu Kylin for Chinese-language users and BlankOn, targeted at Indonesians. Profession-specific distributions include Ubuntu Studio for media creation and DNALinux for bioinformatics. There is also Sabily, a Muslim-oriented distribution that provides some Islamic tools. Certain organizations use slightly specialized Linux distributions internally, including GendBuntu used by the French National Gendarmerie, Goobuntu used internally by Google, and Astra Linux developed specifically for the Russian army. References Linux Computing platforms
CSI-DOS CSI-DOS is an operating system, created in Samara, for the Soviet Elektronika BK-0011M and Elektronika BK-0011 microcomputers. CSI-DOS did not support the earlier BK-0010. CSI-DOS used its own unique file system and only supported a color graphics video mode. The system supported both hard and floppy drives as well as RAM disks in the computer's memory. It also included software to work with the AY-3-8910 and AY-3-8912 music co-processors, and the Covox Speech Thing. There are a number of games and demos designed specially for the system. The system also included a Turbo Vision-like application programming interface (API) allowing simpler design of user applications, and a graphical file manager called X-Shell. External links Article, contains description of some advantages of CSI-DOS for gaming over other OSs (Russian) Elektronika BK operating systems
Windows Open Services Architecture Windows Open Services Architecture (WOSA) is a set of proprietary Microsoft technologies intended to "...provide a single, open-ended interface to enterprise computing environments". WOSA was announced by Microsoft in 1992. It was pitched as a set of programming interfaces designed to provide application interoperability across the Windows environment. The set of technologies that were part of the WOSA initiative include: LSAPI (Software Licensing API), MAPI (Mail Application Programming Interface), ODBC (Open Database Connectivity), OLE for Process Control, SAPI (Speech Application Programming Interface), TAPI (Telephony Application Programming Interface), Windows SNA (IBM SNA Networks), WOSA/XFS (WOSA for Financial Services), and WOSA/XRT (WOSA for Real-time Market Data). See also Component Object Model Object Linking and Embedding References External links Inter-process communication Windows communication and services Architectural pattern (computer science) Enterprise application integration Service-oriented (business computing) Web services Component-based software engineering
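The "single, open-ended interface" idea is easiest to see in ODBC, listed above: one C API, with the concrete database supplied by whatever driver the data source name points at. Below is a minimal, hedged sketch of that pattern; "SampleDSN" is a hypothetical data source name configured in the ODBC driver manager, and error handling is omitted for brevity.

```c
/* Sketch of the ODBC pattern: the same C code works against any database
   for which a driver is installed. "SampleDSN" is a hypothetical DSN. */
#include <windows.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV env;
    SQLHDBC dbc;
    SQLHSTMT stmt;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);

    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    /* The driver manager loads the driver named by the DSN. */
    SQLDriverConnect(dbc, NULL, (SQLCHAR *)"DSN=SampleDSN;", SQL_NTS,
                     NULL, 0, NULL, SQL_DRIVER_NOPROMPT);

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT 1", SQL_NTS);

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}
```

Switching from one database to another means reconfiguring the DSN, not recompiling the application — which is the interoperability WOSA was pitched to provide.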
MorphOS MorphOS is an AmigaOS-like computer operating system (OS). It is a mixed proprietary and open source OS produced for Pegasos PowerPC (PPC) processor based computers, PowerUP accelerator equipped Amiga computers, and a series of Freescale development boards that use the Genesi firmware, including the Efika and mobileGT. Since MorphOS 2.4, Apple's Mac mini G4 is supported as well, and with the releases of MorphOS 2.5 and MorphOS 2.6 the eMac and Power Mac G4 models are respectively supported. The release of MorphOS 3.2 added limited support for the Power Mac G5. The core, based on the Quark microkernel, is proprietary, although several libraries and other parts are open source, such as the Ambient desktop. Characteristics and versions MorphOS is developed for PowerPC central processing units (CPUs) from Freescale and IBM, while supporting original AmigaOS Motorola 68000 series (68k, MC680x0) applications via proprietary task-based emulation, and most AmigaOS PPC applications via API wrappers. It is application programming interface (API) compatible with AmigaOS 3.1 and has a graphical user interface (GUI) based on Magic User Interface (MUI). Besides the Pegasos version of MorphOS, there is a version for Amiga computers equipped with PowerUP accelerator cards produced by Phase5. This version is free, as is registration. If unregistered, it slows down after each two-hour session. PowerUP MorphOS was most recently updated on 23 February 2006; however, it does not exceed the feature set or advancement of the Pegasos release. A version of MorphOS for the Efika, a very small mainboard based on the ultra-low-power MPC5200B processor from Freescale, has been shown at exhibitions and user gatherings in Germany. The current (since 2.0) release of MorphOS supports the Efika. Components ABox ABox is an emulation sandbox featuring a PPC-native AmigaOS API clone that is binary compatible with 68k Amiga applications as well as both the PowerUP and WarpOS formats of Amiga PPC executables. ABox is based in part on the AROS Research Operating System. ABox includes the Trance JIT code translator for 68k native Amiga applications. Other AHI – audio interface: 6.7 Ambient – the default MorphOS desktop, inspired by Workbench and Directory Opus 5 CyberGraphX – graphics interface originally developed for Amiga computers: 5.1 Magic User Interface – primary graphical user interface (GUI) toolkit: 4.2 Poseidon – the Amiga USB stack developed by Chris Hodges TurboPrint – the printing system TinyGL – OpenGL implementation; Warp3D compatibility is featured via the Rendering Acceleration Virtual Engine (RAVE) low-level API: V 51 Quark – manages the low-level systems and currently hosts the ABox MorphOS software MorphOS can run any system-friendly Amiga software written for 68k processors. It is also possible to use 68k libraries or datatypes in PPC applications and vice versa. It also provides a compatibility layer for PowerUP and WarpUP software written for PowerUP accelerator cards. The largest repository is Aminet, with over 75,000 packages online from all Amiga flavors, including music, sound, and artwork. MorphOS-only software repositories are hosted at MorphOS software, MorphOS files and MorphOS Storage. Bundled applications MorphOS is delivered with several desktop applications in the form of pre-installed software. Supported hardware Max. 1.72 GB RAM; virtual memory is not supported. Only Radeon graphics cards are supported; Nvidia cards are not.
Amiga Amiga 1200 with Blizzard PPC accelerator card Amiga 3000 with CyberStorm PPC accelerator card Amiga 4000 with CyberStorm PPC accelerator card Apple Mac mini G4 eMac Power Mac G4 PowerBook G4 (except for 12" aluminum models) iBook G4 Power Mac G5 Power Mac G4 Cube iMac G5 (only model A1145 – G5 2.1 20" (iSight)) Genesi/bPlan GmbH Efika 5200B Pegasos I G3, II G3/G4 ACube ACube Systems Srl company and their Sam460 series mainboards A-Eon Technology AmigaOne X5000 mainboard History The project began in 1999, based on the Quark microkernel. The earliest versions of MorphOS ran only via PPC accelerator cards on Amiga computers, and required portions of AmigaOS to fully function. A collaborative effort between the companies bPlan (of which the lead MorphOS developer is a partner) and Thendic-France in 2002 resulted in the first regular, non-prototype production of bPlan-engineered Pegasos computers capable of running MorphOS or Linux. Thendic-France had financial problems and folded; however, the collaboration continued under the new banner of "Genesi". A busy promotional year followed in 2003, with appearances at conventions and exhibitions in several places around the world, including the Consumer Electronics Show (CES) in Las Vegas. After some bitter disagreements within the MorphOS development team in 2003 and 2004, culminating in accusations by a MorphOS developer that he and others had not been paid, the Ambient desktop interface was released under the GPL and is now actively developed by the Ambient development team. Subject to GPL rules, Ambient continues to be included in the commercial MorphOS product. An alternative MorphOS desktop system is Scalos. On April 1, 2008, the MorphOS team announced that MorphOS 2.0 would be released within Q2/2008. This promise was only just kept, with the release of MorphOS 2.0 occurring on June 30, 2008 at 23:59 CET. MorphOS 3.11 is commercially available at a price of €79 per machine (€49 for the Efika PPC or Sam460 boards). A fully functional demo of MorphOS is available, but without a keyfile its speed is decreased significantly after 30 minutes of use per session; rebooting the system allows for another 30 minutes of use. MorphOS 2 includes a native TCP/IP stack ("Netstack") and a Web browser, Sputnik or Origyn Web Browser. Sputnik was begun under a user community bounty system that also resulted in MOSNet, a free, separate TCP/IP stack for MorphOS 1 users. Sputnik is a port of the KHTML rendering engine, on which WebKit is also based. Sputnik is no longer being developed and was removed from later MorphOS 2 releases. See also Ambient (desktop environment) Amiga APUS (computer) AROS Research Operating System (AROS) Magic User Interface (MUI) References External links Aminet Amiga/MorphOS software repository MorphZone, Supported Computers MorphOS Software Database MorphOS software repository MorphOS: The Lightning OS Obligement – Magazine about AmigaOS and MorphOS www.warmup-asso.org – Portal dedicated to MorphOS users MorphOS Storage – MorphOS Software Storage 2000 software Amiga software MorphOS software Operating system distributions bootable from read-only media PowerPC operating systems Microkernel-based operating systems Microkernels
Hobbyist operating system The development of a hobbyist operating system is one of the more involved and technical options for a computer hobbyist. The definition of a hobby operating system can sometimes be vague. It can be defined from the developer's view, where the developers do it just for fun or learning; it can also be seen from the user's view, where the users only use it as a toy; or it can be defined as an operating system without a very big user base. Development can begin from existing resources like a kernel, an operating system, or a bootloader, or it can be made completely from scratch. The development platform could be a bare hardware machine, which is in the nature of an operating system, but it could also be developed and tested on a virtual machine. Because the hobbyist must take more ownership of adapting a complex system to the ever-changing needs of the technical terrain, much enthusiasm is common amongst the many different groups attracted to operating system development. Development Elements of operating system development include: Kernel: Bootstrapping Memory management Process management and scheduling Device driver management Program API External programs User interface The C programming language is frequently used for hobby operating system programming, as is assembly language, though other languages can be used as well. The use of assembly language is common with small systems, especially those based on eight-bit microprocessors such as the MOS Technology 6502 family or the Zilog Z80, or in systems with a lack of available resources, because of its small output size and low-level efficiency. User interface Most hobby operating systems use a command-line interface or a simple text user interface due to ease of development. More advanced hobby operating systems may have a graphical user interface. For example, AtheOS was a hobby operating system with a graphical interface written entirely by one programmer. Examples Use of BIOS This section is predominantly x86 oriented. The term BIOS (Basic Input/Output System) refers to firmware that initialises computer hardware and has provisions to load an operating system. The BIOS also sets up a standard interface for several low-level device drivers at boot time. BIOS resources are often used by hobbyist operating systems, especially those written for 16-bit x86 machines, as many hobby operating system developers lack the time to write complex low-level drivers themselves or simply want to get into writing software for the system as soon as possible. The most commonly used BIOS functions are the video BIOS and disk services. These are used because video cards and disk drives vary significantly between machines and specialised drivers are often difficult to write. The use of the BIOS is uncommon in operating systems that operate in protected mode or long mode, because the system must switch back to the real mode in which BIOS drivers run.
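To make the BIOS discussion concrete, below is a minimal, hedged sketch of the video BIOS "teletype output" service (INT 0x10 with AH=0x0E) called from 16-bit real-mode C. It assumes a real-mode compiler with GCC-style inline assembly (for example gcc-ia16); the helper names are hypothetical, and many hobby systems issue the same interrupt directly from assembly instead.

```c
/* Sketch: console output through the video BIOS in a real-mode hobby OS.
   Assumes a 16-bit x86 C compiler with GCC-style inline assembly.
   INT 0x10 with AH=0x0E prints the character in AL and advances the cursor. */

static void bios_putchar(char c)
{
    unsigned short ax = 0x0E00 | (unsigned char)c;  /* AH = 0x0E, AL = c */
    unsigned short bx = 0;                          /* BH = page 0       */
    __asm__ __volatile__("int $0x10" : "+a"(ax), "+b"(bx) : : "cc");
}

static void bios_print(const char *s)
{
    while (*s)
        bios_putchar(*s++);        /* one BIOS call per character */
}

void kmain(void)                   /* hypothetical kernel entry point */
{
    bios_print("Hello from a hobby OS\r\n");
}
```

Disk access works the same way through the INT 0x13 disk services, which is why a minimal boot loader can read its kernel off the disk before any driver of its own exists.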
See also List of hobbyist operating systems Computer architecture References External links OSDev.org - A hobby OSDev community Independent Software - Set of tutorials on boot loader development and entering protected mode The little book about OS development - This book is a practical guide to writing your own x86 operating system Kernel 101 – Let’s write a Kernel aodfaq - OS development FAQ Bona Fide OS Development - Store of OS development tutorials and other documents A step by step tutorial Operating System Resource Center - Information and resources on various OSDev topics (both software and hardware) Operating system technology
Runtime system In computer programming, a runtime system, also called runtime environment, primarily implements portions of an execution model. This is not to be confused with the runtime lifecycle phase of a program, during which the runtime system is in operation. When treating the runtime system as distinct from the runtime environment (RTE), the first may be defined as a specific part of the application software used for programming (for example, an IDE), a piece of software that provides the programmer a more convenient environment for running programs during their production (testing and similar), while the second (RTE) would be the very instance of an execution model being applied to the developed program, which is itself then run in the aforementioned runtime system. Most programming languages have some form of runtime system that provides an environment in which programs run. This environment may address a number of issues, including the management of application memory, how the program accesses variables, mechanisms for passing parameters between procedures, and interfacing with the operating system. The compiler makes assumptions depending on the specific runtime system to generate correct code. Typically the runtime system will have some responsibility for setting up and managing the stack and heap, and may include features such as garbage collection, threads or other dynamic features built into the language. Overview Every programming language specifies an execution model, and many implement at least part of that model in a runtime system. One possible definition of runtime system behavior, among others, is "any behavior not directly attributable to the program itself". This definition includes putting parameters onto the stack before function calls, parallel execution of related behaviors, and disk I/O. By this definition, essentially every language has a runtime system, including compiled languages, interpreted languages, and embedded domain-specific languages. Even API-invoked standalone execution models, such as Pthreads (POSIX threads), have a runtime system that implements the execution model's behavior. Most scholarly papers on runtime systems focus on the implementation details of parallel runtime systems. A notable example of a parallel runtime system is Cilk, a popular parallel programming model. The proto-runtime toolkit was created to simplify the creation of parallel runtime systems. In addition to execution model behavior, a runtime system may also perform support services such as type checking, debugging, or code generation and optimization. Relation to runtime environments The runtime system is also the gateway through which a running program interacts with the runtime environment. The runtime environment includes not only accessible state values, but also active entities with which the program can interact during execution. For example, environment variables are features of many operating systems, and are part of the runtime environment; a running program can access them via the runtime system. Likewise, hardware devices such as disks or DVD drives are active entities that a program can interact with via a runtime system. One unique application of a runtime environment is its use within an operating system that only allows it to run. In other words, from boot until power-down, the entire OS is dedicated to only the application(s) running within that runtime environment.
Any other code that tries to run, or any failures in the application(s), will break the runtime environment. Breaking the runtime environment in turn breaks the OS, stopping all processing and requiring a reboot. If the boot is from read-only memory, an extremely secure, simple, single-mission system is created. Examples of such directly bundled runtime systems include: Between 1983 and 1984, Digital Research offered several of their business and education applications for the IBM PC on bootable floppy diskettes bundled with SpeedStart CP/M-86, a reduced version of CP/M-86, as the runtime environment. Some stand-alone versions of Ventura Publisher (1986–1993), Artline (1988–1991), Timeworks Publisher (1988–1991) and ViewMAX (1990–1992) contained special runtime versions of Digital Research's GEM as their runtime environment. In the late 1990s, JP Software's command line processor 4DOS was optionally available in a special runtime version to be linked with BATCOMP pre-compiled and encrypted batch jobs in order to create unmodifiable executables from batch scripts and run them on systems without 4DOS installed. Examples The runtime system of the C language is a particular set of instructions inserted by the compiler into the executable image. Among other things, these instructions manage the process stack, create space for local variables, and copy function call parameters onto the top of the stack. There are often no clear criteria for determining which language behaviors are part of the runtime system itself and which can be determined by any particular source program. For example, in C, the setup of the stack is part of the runtime system. It is not determined by the semantics of an individual program because the behavior is globally invariant: it holds over all executions. This systematic behavior implements the execution model of the language, as opposed to implementing semantics of the particular program (in which text is directly translated into code that computes results). This separation between the semantics of a particular program and the runtime environment is reflected by the different ways of compiling a program: compiling source code to an object file that contains all the functions versus compiling an entire program to an executable binary. The object file will only contain assembly code relevant to the included functions, while the executable binary will contain additional code that implements the runtime environment. The object file, on one hand, may be missing information from the runtime environment that will be resolved by linking. On the other hand, the code in the object file still depends on assumptions in the runtime system; for example, a function may read parameters from a particular register or stack location, depending on the calling convention used by the runtime environment. Another example is the case of using an application programming interface (API) to interact with a runtime system. The calls to that API look the same as calls to a regular software library; however, at some point during the call, the execution model changes. The runtime system implements an execution model different from that of the language the library is written in. A person reading the code of a normal library would be able to understand the library's behavior by just knowing the language the library was written in.
However, a person reading the code of the API that invokes a runtime system would not be able to understand the behavior of the API call just by knowing the language the call was written in. At some point, via some mechanism, the execution model stops being that of the language the call is written in and switches over to being the execution model implemented by the runtime system. For example, the trap instruction is one method of switching execution models. This difference is what distinguishes an API-invoked execution model, such as Pthreads, from a usual software library. Both Pthreads calls and software library calls are invoked via an API, but Pthreads behavior cannot be understood in terms of the language of the call. Rather, Pthreads calls bring into play an outside execution model, which is implemented by the Pthreads runtime system (this runtime system is often the OS kernel). As an extreme example, the physical CPU itself can be viewed as an implementation of the runtime system of a specific assembly language. In this view, the execution model is implemented by the physical CPU and memory systems. As an analogy, runtime systems for higher-level languages are themselves implemented using some other languages. This creates a hierarchy of runtime systems, with the CPU itself—or actually its logic at the microcode layer or below—acting as the lowest-level runtime system. Advanced features Some compiled or interpreted languages provide an interface that allows application code to interact directly with the runtime system. An example is the Thread class in the Java language. The class allows code (that is animated by one thread) to do things such as start and stop other threads. Normally, core aspects of a language's behavior such as task scheduling and resource management are not accessible in this fashion. Higher-level behaviors implemented by a runtime system may include tasks such as drawing text on the screen or making an Internet connection. It is often the case that operating systems provide these kinds of behaviors as well, and when available, the runtime system is implemented as an abstraction layer that translates the invocation of the runtime system into an invocation of the operating system. This hides the complexity or variations in the services offered by different operating systems. This also implies that the OS kernel can itself be viewed as a runtime system, and that the set of OS calls that invoke OS behaviors may be viewed as interactions with a runtime system. In the limit, the runtime system may provide services such as a P-code machine or virtual machine that hide even the processor's instruction set. This is the approach followed by many interpreted languages such as AWK, and some languages like Java, which are meant to be compiled into some machine-independent intermediate representation code (such as bytecode). This arrangement simplifies the task of language implementation and its adaptation to different machines, and improves the efficiency of sophisticated language features such as reflection. It also allows the same program to be executed on any machine without an explicit recompiling step, a feature that has become very important since the proliferation of the World Wide Web. To speed up execution, some runtime systems feature just-in-time compilation to machine code. A modern aspect of runtime systems is parallel execution behaviors, such as the behaviors exhibited by mutex constructs in Pthreads and parallel section constructs in OpenMP.
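To make the Pthreads discussion concrete, here is a minimal sketch of the mutex behavior just mentioned: the pthread_* calls look like ordinary C library calls, but their blocking and wakeup behavior is carried out by the Pthreads runtime system (often the OS kernel), not by the semantics of the C program itself.

```c
/* Minimal sketch: a Pthreads mutex protecting a shared counter.
   Compile with -pthread. The calls look like ordinary library calls,
   but their blocking behavior belongs to the runtime system. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);   /* may block: the execution model changes here */
        counter++;
        pthread_mutex_unlock(&lock); /* may wake another waiting thread */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld\n", counter); /* always 200000 thanks to the mutex */
    return 0;
}
```

Nothing in the C language itself specifies what happens inside pthread_mutex_lock; that behavior is defined by the outside execution model the runtime system implements.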
A runtime system with such parallel execution behaviors may be modularized according to the proto-runtime approach. History Notable early examples of runtime systems are the interpreters for BASIC and Lisp. These environments also included a garbage collector. Forth is an early example of a language designed to be compiled into intermediate representation code; its runtime system was a virtual machine that interpreted that code. Another popular, if theoretical, example is Donald Knuth's MIX computer. In C and later languages that supported dynamic memory allocation, the runtime system also included a library that managed the program's memory pool. In object-oriented programming languages, the runtime system was often also responsible for dynamic type checking and resolving method references. See also Run time (program lifecycle phase) Execution model Programming model Self-booter Static build References Further reading Computing platforms
Systemd systemd is a software suite that provides an array of system components for Linux operating systems. Its main aim is to unify service configuration and behavior across Linux distributions; systemd's primary component is a "system and service manager"—an init system used to bootstrap user space and manage user processes. It also provides replacements for various daemons and utilities, including device management, login management, network connection management, and event logging. The name systemd adheres to the Unix convention of naming daemons by appending the letter d. It also plays on the term "System D", which refers to a person's ability to adapt quickly and improvise to solve problems. Since 2015, the majority of Linux distributions have adopted systemd, having replaced other init systems such as SysVinit. The most common arguments against systemd are that it suffers from mission creep and bloat. Subsequent criticism also affects other software (such as the GNOME desktop) adding dependencies on systemd—complicating compatibility with other Unix-like operating systems, and making it hard to move away from systemd. Concerns have also been raised about Red Hat and its parent company IBM controlling the scene of init systems on Linux. Some also question systemd's robustness against attackers, claiming that the complexity of systemd results in a greatly enlarged attack surface, reducing the overall security of the platform. On the other hand, systemd has been praised by developers and users of distributions that adopted it for providing a stable, fast out-of-the-box solution for issues that had existed in the Linux space for years. At the time of systemd's adoption by most Linux distributions, it was the only software suite that offered reliable parallelism during boot as well as centralized management of processes, daemons, services and mount points. History Lennart Poettering and Kay Sievers, the software engineers working for Red Hat who initially developed systemd, started a project to replace Linux's conventional System V init in 2010. An April 2010 blog post from Poettering, titled "Rethinking PID 1", introduced an experimental version of what would later become systemd. They sought to surpass the efficiency of the init daemon in several ways. They wanted to improve the software framework for expressing dependencies, to allow more processing to be done concurrently or in parallel during system booting, and to reduce the computational overhead of the shell. In May 2011 Fedora became the first major Linux distribution to enable systemd by default, replacing SysVinit. The reasoning at the time was that systemd provided extensive parallelization during startup, better management of processes and, overall, a saner, dependency-based approach to control of the system. In October 2012, Arch Linux made systemd the default, also switching from SysVinit. Developers had debated since August 2012 and came to the conclusion that it was faster and had more features than SysVinit, and that maintaining the latter was not worth the effort in patches. Some of them thought that the criticism towards the implementation of systemd was based not on actual shortcomings of the software, but rather on dislike of Lennart Poettering among a part of the Linux community and a general hesitation about change.
Specifically, some of the points of criticism — that systemd was not programmed in bash, that it was bigger and more extensive than SysVinit, its use of D-Bus, and the optional on-disk binary format of the journal — were regarded as advantages by these programmers. Between October 2013 and February 2014, a long debate among the Debian Technical Committee occurred on the Debian mailing list, discussing which init system to use as the default in Debian 8 "jessie", and culminating in a decision in favor of systemd. The debate was widely publicized and in the wake of the decision the debate continues on the Debian mailing list. In February 2014, after Debian's decision was made, Mark Shuttleworth announced on his blog that Ubuntu would follow in implementing systemd, discarding its own Upstart. In November 2014 Debian Developer Joey Hess, Debian Technical Committee members Russ Allbery and Ian Jackson, and systemd package-maintainer Tollef Fog Heen resigned from their positions. All four justified their decision on the public Debian mailing list and in personal blogs with their exposure to extraordinary stress levels related to ongoing disputes on systemd integration within the Debian and FOSS community that rendered regular maintenance virtually impossible. In August 2015 systemd started providing a login shell, callable via machinectl shell. In September 2016, a security bug was discovered that allowed any unprivileged user to perform a denial-of-service attack against systemd. Rich Felker, developer of musl, stated that this bug reveals a major "system development design flaw". In 2017 another security bug was discovered in systemd which "allows disruption of service" by a "malicious DNS server". Later in 2017, the Pwnie Awards gave author Lennart Poettering a "lamest vendor response" award due to his handling of the vulnerabilities. Design Poettering describes systemd development as "never finished, never complete, but tracking progress of technology". In May 2014, Poettering further described systemd as unifying "pointless differences between distributions", by providing the following three general functions: a system and service manager (manages both the system, by applying various configurations, and its services); a software platform (serves as a basis for developing other software); and the glue between applications and the kernel (provides various interfaces that expose functionalities provided by the kernel). Systemd includes features like on-demand starting of daemons, snapshot support, process tracking and Inhibitor Locks. It is not just the name of the init daemon but also refers to the entire software bundle around it, which, in addition to the init daemon, includes the daemons journald, logind and networkd, and many other low-level components. In January 2013, Poettering described systemd not as one program, but rather a large software suite that includes 69 individual binaries. As an integrated software suite, systemd replaces the startup sequences and runlevels controlled by the traditional init daemon, along with the shell scripts executed under its control. systemd also integrates many other services that are common on Linux systems by handling user logins, the system console, device hotplugging (see udev), scheduled execution (replacing cron), logging, hostnames and locales. Like the init daemon, systemd is a daemon that manages other daemons, which, including systemd itself, are background processes. systemd is the first daemon to start during booting and the last daemon to terminate during shutdown.
The systemd daemon serves as the root of the user space's process tree; the first process (PID 1) has a special role on Unix systems, as it replaces the parent of a process when the original parent terminates. Therefore, the first process is particularly well suited for the purpose of monitoring daemons. systemd executes elements of its startup sequence in parallel, which is theoretically faster than the traditional sequential startup approach. For inter-process communication (IPC), systemd makes Unix domain sockets and D-Bus available to the running daemons. The state of systemd itself can also be preserved in a snapshot for future recall. Core components and libraries Following its integrated approach, systemd also provides replacements for various daemons and utilities, including the startup shell scripts, pm-utils, inetd, acpid, syslog, watchdog, cron and atd. systemd's core components include the following: systemd is a system and service manager for Linux operating systems. systemctl is a command to introspect and control the state of the systemd system and service manager. Not to be confused with sysctl. systemd-analyze may be used to determine system boot-up performance statistics and retrieve other state and tracing information from the system and service manager. systemd tracks processes using the Linux kernel's cgroups subsystem instead of using process identifiers (PIDs); thus, daemons cannot "escape" systemd, not even by double-forking. systemd not only uses cgroups, but also augments them with systemd-nspawn and machinectl, two utility programs that facilitate the creation and management of Linux containers. Since version 205, systemd also offers ControlGroupInterface, which is an API to the Linux kernel cgroups. The Linux kernel cgroups are adapted to support kernfs, and are being modified to support a unified hierarchy. Ancillary components Beside its primary purpose of providing a Linux init system, the systemd suite can provide additional functionality, including the following components: systemd-journald is a daemon responsible for event logging, with append-only binary files serving as its logfiles. The system administrator may choose whether to log system events with systemd-journald, with syslog, or with both. The potential for corruption of the binary format has led to much heated debate. libudev is the standard library for utilizing udev, which allows third-party applications to query udev resources. systemd-logind is a daemon that manages user logins and seats in various ways. It is an integrated login manager that offers multiseat improvements and replaces ConsoleKit, which is no longer maintained. For X11 display managers the switch to systemd-logind requires a minimal amount of porting. It was integrated in systemd version 30. systemd-networkd is a daemon to handle the configuration of the network interfaces; in version 209, when it was first integrated, support was limited to statically assigned addresses and basic support for bridging configuration. In July 2014, systemd version 215 was released, adding new features such as a DHCP server for IPv4 hosts, and VXLAN support. networkctl may be used to review the state of the network links as seen by systemd-networkd. New interfaces are configured by adding files with the .network extension under /lib/systemd/network/. systemd-boot is a boot manager, formerly known as gummiboot. Kay Sievers merged it into systemd with rev 220. systemd-timedated is a daemon that can be used to control time-related settings, such as the system time, system time zone, or selection between UTC and local time-zone system clock. It is accessible through D-Bus. It was integrated in systemd version 30.
systemd-tmpfiles is a utility that takes care of creation and clean-up of temporary files and directories. It is normally run once at startup and then in specified intervals. udev is a device manager for the Linux kernel, which handles the /dev directory and all user space actions when adding/removing devices, including firmware loading. In April 2012, the source tree for udev was merged into the systemd source tree. On 29 May 2014, support for firmware loading through udev was dropped from systemd, as it was decided that the kernel should be responsible for loading firmware. Configuration of systemd systemd is configured exclusively via plain-text files. It records initialization instructions for each daemon in a configuration file (referred to as a "unit file") that uses a declarative language, replacing the traditionally used per-daemon startup shell scripts. The syntax of the language is inspired by INI files (see the sketch below). Unit-file types include: service, socket, device (automatically initiated by systemd), mount, automount, swap, target, path, timer (which can be used as a cron-like job scheduler), snapshot, slice (used to group and manage processes and resources), and scope (used to group worker processes; it isn't intended to be configured via unit files). Adoption While many distributions boot systemd by default, some allow other init systems to be used; in this case switching the init system is possible by installing the appropriate packages. A fork of Debian called Devuan was developed to avoid systemd and has reached version 4.0 for stable usage. In December 2019, the Debian project voted in favour of retaining systemd as the default init system for the distribution, but with support for "exploring alternatives". Integration with other software In the interest of enhancing the interoperability between systemd and the GNOME desktop environment, systemd coauthor Lennart Poettering asked the GNOME Project to consider making systemd an external dependency of GNOME 3.2. In November 2012, the GNOME Project concluded that basic GNOME functionality should not rely on systemd. However, GNOME 3.8 introduced a compile-time choice between the logind and ConsoleKit APIs, the former being provided at the time only by systemd. Ubuntu provided a separate logind binary, but systemd became a de facto dependency of GNOME for most Linux distributions, in particular since ConsoleKit is no longer actively maintained and upstream recommends the use of systemd-logind instead. The developers of Gentoo Linux also attempted to adapt these changes in OpenRC, but the implementation contained too many bugs, causing the distribution to mark systemd as a dependency of GNOME. GNOME has further integrated logind. As of Mutter version 3.13.2, logind is a dependency for Wayland sessions. Reception The design of systemd has ignited controversy within the free-software community. Critics regard systemd as overly complex and suffering from continued feature creep, arguing that its architecture violates the Unix philosophy. There is also concern that it forms a system of interlocked dependencies, thereby giving distribution maintainers little choice but to adopt systemd as more user-space software comes to depend on its components, similar to the problems created by PulseAudio, another project which was also developed by Lennart Poettering. In a 2012 interview, Slackware's lead Patrick Volkerding expressed reservations about the systemd architecture, stating his belief that its design was contrary to the Unix philosophy of interconnected utilities with narrowly defined functionalities. Slackware does not support or use systemd, but Volkerding has not ruled out the possibility of switching to it.
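To illustrate the declarative unit-file syntax described under Configuration above, here is a minimal sketch of a service unit. The directive names are real systemd directives, but the unit itself ("myapp") and its paths are hypothetical placeholders, not taken from any actual package:

```ini
# Sketch of /etc/systemd/system/myapp.service — a hypothetical service unit.
[Unit]
Description=Example application daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Such a file replaces what would traditionally have been a per-daemon startup shell script; it is enabled and started with systemctl (for example, systemctl enable --now myapp.service).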
In January 2013, Lennart Poettering attempted to address concerns about systemd in a blog post called The Biggest Myths. In February 2014, musl's Rich Felker opined that PID 1 is too special to be saddled with additional responsibilities: in his view, PID 1 should only be responsible for starting the rest of the init system and reaping zombie processes, and the additional functionality added by systemd can be provided elsewhere while unnecessarily increasing the complexity and attack surface of PID 1. In March 2014, Eric S. Raymond opined that systemd's design goals were prone to mission creep and software bloat. In April 2014, Linus Torvalds expressed reservations about the attitude of Kay Sievers, a key systemd developer, toward users and bug reports in regard to modifications to the Linux kernel submitted by Sievers. In late April 2014, a campaign to boycott systemd was launched, with a website listing various reasons against its adoption. In an August 2014 article published in InfoWorld, Paul Venezia wrote about the systemd controversy and attributed it to violation of the Unix philosophy and to "enormous egos who firmly believe they can do no wrong". The article also characterizes the architecture of systemd as similar to that of svchost.exe, a critical system component in Microsoft Windows with a broad functional scope. In a September 2014 ZDNet interview, prominent Linux kernel developer Theodore Ts'o expressed his opinion that the dispute over systemd's centralized design philosophy, more than technical concerns, indicates a dangerous general trend toward uniformizing the Linux ecosystem, alienating and marginalizing parts of the open-source community, and leaving little room for alternative projects. He cited similarities with the attitude he found in the GNOME project toward non-standard configurations. On social media, Ts'o also later compared the attitudes of Sievers and his co-developer, Lennart Poettering, to that of GNOME's developers. Forks and alternative implementations Forks of systemd are closely tied to the critiques of it outlined in the section above. Forks generally try to improve on at least one of portability (to other libcs and Unix-like systems), modularity, or size. A few forks have collaborated under the FreeInit banner. Fork of components eudev In 2012, the Gentoo Linux project created a fork of udev in order to avoid dependency on the systemd architecture. The resulting fork is called eudev and it makes udev functionality available without systemd. A stated goal of the project is to keep eudev independent of any Linux distribution or init system. elogind Elogind is the systemd project's "logind", extracted out to be a standalone daemon. It integrates with PAM to know the set of users that are logged into a system and whether they are logged in graphically, on the console, or remotely. Elogind exposes this information via the standard org.freedesktop.login1 D-Bus interface, as well as through the file system using systemd's standard layout. Elogind also provides "libelogind", which is a subset of the facilities offered by "libsystemd". There is a "libelogind.pc" pkg-config file as well. consolekit2 ConsoleKit was forked in October 2014 by Xfce developers wanting its features to still be maintained and available on operating systems other than Linux. While not ruling out the possibility of reviving the original repository in the long term, the main developer considers ConsoleKit2 a temporary necessity until systembsd matures.
Development ceased in December 2017 and the project may be defunct. LoginKit LoginKit was an attempt to implement a logind (systemd-logind) shim, which would allow packages that depend on systemd-logind to work without dependency on a specific init system. The project has been defunct since February 2015. systembsd In 2014, a Google Summer of Code project named "systembsd" was started in order to provide alternative implementations of systemd's APIs for OpenBSD. The original project developer began it in order to ease his transition from Linux to OpenBSD. Project development finished in July 2016. The systembsd project did not provide an init replacement, but aimed to provide OpenBSD with compatible daemons for hostnamed, timedated, localed, and logind. The project did not create new systemd-like functionality, and was only meant to act as a wrapper over the native OpenBSD system. The developer aimed for systembsd to be installable as part of the ports collection, not as part of a base system, stating that "systemd and *BSD differ fundamentally in terms of philosophy and development practices." notsystemd Notsystemd intends to implement all of systemd's features in a way that works with any init system. It was forked by the Parabola GNU/Linux-libre developers to build packages with their development tools without the necessity of having systemd installed to run systemd-nspawn. Fork including init system uselessd In 2014, uselessd was created as a lightweight fork of systemd. The project sought to remove features and programs deemed unnecessary for an init system, as well as address other perceived faults. Project development halted in January 2015. uselessd supported the musl and µClibc libraries, so it may have been used on embedded systems, whereas systemd only supports glibc. The uselessd project had planned further improvements on cross-platform compatibility, as well as architectural overhauls and refactoring for the Linux build in the future. InitWare InitWare is a modular refactor of systemd, porting the system to BSD platforms without glibc or Linux-specific system calls. It is known to work on DragonFly BSD, FreeBSD, NetBSD, and GNU/Linux. Components considered unnecessary are dropped. See also BusyBox launchd Linux distributions without systemd Operating system service management readahead runit Service Management Facility GNU Daemon Shepherd Upstart Notes References External links Rethinking PID 1 Freedesktop.org Linux kernel-related software Linux-only free software Software that uses Meson Unix process- and task-management-related software
Operating System (OS)
406
Cisco IOS Cisco Internetwork Operating System (IOS) is a family of network operating systems used on many Cisco Systems routers and current Cisco network switches. Earlier, Cisco switches ran CatOS. IOS is a package of routing, switching, internetworking and telecommunications functions integrated into a multitasking operating system. Although the IOS code base includes a cooperative multitasking kernel, most IOS features have been ported to other kernels, such as QNX and Linux, for use in Cisco products. Not all Cisco products run IOS. Notable exceptions include ASA security products, which run a Linux-derived operating system; carrier routers, which run IOS-XR; and Cisco's Nexus switch and FC switch products, which run Cisco NX-OS. History The IOS network operating system was developed in the 1980s for routers that had only 256 kB of memory and low CPU processing power. Through modular extensions, IOS has been adapted to increasing hardware capabilities and new networking protocols. When IOS was developed, Cisco Systems' main product line was routers. The company acquired a number of young companies that focused on network switches, such as Kalpana, the inventor of the first Ethernet switch, and as a result Cisco switches did not run IOS. The Cisco Catalyst series would for some time run CatOS. In early modular chassis network switches from Cisco, modules with layer 3 routing functionality were separate devices that ran IOS, while the layer 2 switch modules ran CatOS. Cisco eventually introduced the native mode for chassis, so that they run only one operating system. For the Nexus switches Cisco developed NX-OS, which is similar to IOS, except that it is Linux-based. Command-line interface The IOS command-line interface (CLI) provides a fixed set of multiple-word commands. The set available is determined by the "mode" and the privilege level of the current user. "Global configuration mode" provides commands to change the system's configuration, and "interface configuration mode" provides commands to change the configuration of a specific interface. All commands are assigned a privilege level, from 0 to 15, and can only be accessed by users with the necessary privilege. Through the CLI, the commands available to each privilege level can be defined. Most builds of IOS include a Tcl interpreter. Using the embedded event manager feature, the interpreter can be scripted to react to events within the networking environment, such as interface failure or periodic timers. Available command modes include: User EXEC Mode, Privileged EXEC Mode, Global Configuration Mode, ROM Monitor Mode, Setup Mode, and more than 100 configuration modes and submodes. Architecture Cisco IOS has a monolithic architecture, owing to the limited hardware resources of routers and switches in the 1980s. This means that all processes have direct hardware access, to conserve CPU processing time. There is no memory protection between processes, and IOS has a run-to-completion scheduler, which means that the kernel does not pre-empt a running process; instead, the process must make a kernel call before other processes get a chance to run. IOS considers each process a single thread and assigns it a priority value, so that high-priority processes are executed on the CPU before queued low-priority processes, but high-priority processes cannot interrupt running low-priority processes. The Cisco IOS monolithic kernel does not implement memory protection for the data of different processes.
The entire physical memory is mapped into one virtual address space. The Cisco IOS kernel does not perform any memory paging or swapping. Therefore, the addressable memory is limited to the physical memory of the network device on which the operating system is installed. IOS does, however, support aliasing of duplicated virtual memory contents to the same physical memory. This architecture was implemented by Cisco in order to ensure system performance and minimize the operational overheads of the operating system. The disadvantages of the IOS architecture are that it increases the complexity of the operating system, that data corruption is possible because one process can write over the data of another, and that one process can destabilize the entire operating system or even cause a software-forced crash. In the event of an IOS crash, the operating system automatically reboots and reloads the saved configuration. Routing In all versions of Cisco IOS, packet routing and forwarding (switching) are distinct functions. Routing and other protocols run as Cisco IOS processes and contribute to the Routing Information Base (RIB). This is processed to generate the final IP forwarding table (FIB, Forwarding Information Base), which is used by the forwarding function of the router. On router platforms with software-only forwarding (e.g., Cisco 7200), most traffic handling, including access control list filtering and forwarding, is done at interrupt level using Cisco Express Forwarding (CEF) or dCEF (Distributed CEF). This means IOS does not have to do a process context switch to forward a packet. Routing functions such as OSPF or BGP run at the process level. In routers with hardware-based forwarding, such as the Cisco 12000 series, IOS computes the FIB in software and loads it into the forwarding hardware (such as an ASIC or network processor), which performs the actual packet forwarding function. Interface descriptor block An Interface Descriptor Block, or simply IDB, is a portion of memory or Cisco IOS internal data structure that contains information such as the IP address, interface state, and packet statistics for networking data. Cisco's IOS software maintains one IDB for each hardware interface in a particular Cisco switch or router and one IDB for each subinterface. The number of IDBs present in a system varies with the Cisco hardware platform type. Packages and feature sets IOS is shipped as a single image file that has been compiled for specific Cisco network devices. Each IOS image therefore includes a feature set, which determines the command-line interface (CLI) commands and features that are available on different Cisco devices. Upgrading to another feature set therefore entails the installation of a new IOS image on the networking device and reloading the IOS operating system. Information about the IOS version and feature set running on a Cisco device can be obtained with the show version command. Most Cisco products that run IOS also have one or more "feature sets" or "packages", typically eight packages for Cisco routers and five packages for Cisco network switches. For example, Cisco IOS releases meant for use on Catalyst switches are available as "standard" versions (providing only basic IP routing), "enhanced" versions, which provide full IPv4 routing support, and "advanced IP services" versions, which provide the enhanced features as well as IPv6 support. Beginning with the 1900, 2900 and 3900 series of ISR routers, Cisco revised the licensing model of IOS.
To simplify the process of enlarging the feature set and reduce the need for network operating system reloads, Cisco introduced universal IOS images that include all features available for a device; customers may unlock certain features by purchasing an additional software license. The exact feature set required for a particular function can be determined using the Cisco Feature Set Browser. Routers come with IP Base installed, and additional feature pack licenses can be installed as bolt-on additions to expand the feature set of the device. The available feature packs are: Data, which adds features like BFD, IP SLAs, IPX, L2TPv3, Mobile IP, MPLS, and SCTP; Security, which adds features like VPN, Firewall, IP SLAs, and NAC; and Unified Comms, which adds features like CallManager Express, Gatekeeper, H.323, IP SLAs, MGCP, SIP, VoIP, and CUBE (SBC). IOS images cannot be updated with individual software bug fixes; to patch a vulnerability in IOS, a binary file containing the entire operating system needs to be loaded. Versioning Cisco IOS is versioned using three numbers and some letters, in the general form a.b(c.d)e, where: a is the major version number; b is the minor version number; c is the release number, which begins at one and increments as new releases in the same a.b train are released ("train" is Cisco-speak for "a vehicle for delivering Cisco software to a specific set of platforms and features"); d (omitted from general releases) is the interim build number; and e (zero, one or two letters) is the software release train identifier, such as none (which designates the mainline, see below), T (for Technology), E (for Enterprise), S (for Service provider), XA as a special functionality train, XB as a different special functionality train, etc. Rebuilds – Often a rebuild is compiled to fix a single specific problem or vulnerability for a given IOS version. For example, 12.1(8)E14 is a rebuild, the 14 denoting the 14th rebuild of 12.1(8)E. Rebuilds are produced either to quickly repair a defect or to satisfy customers who do not want to upgrade to a later major revision because they may be running critical infrastructure on their devices, and hence prefer to minimize change and risk. Interim releases – Usually produced on a weekly basis, these form a roll-up of the current development effort. The Cisco advisory web site may list more than one possible interim release to fix an associated issue (the reason for this is unknown to the general public). Maintenance releases – Rigorously tested releases that are made available and include enhancements and bug fixes. Cisco recommends upgrading to maintenance releases where possible, over interim and rebuild releases. Trains Cisco says, "A train is a vehicle for delivering Cisco software to a specific set of platforms and features." Until 12.4 Before Cisco IOS release 15, releases were split into several trains, each containing a different set of features. Trains more or less mapped onto distinct markets or groups of customers that Cisco targeted. The mainline train is intended to be the most stable release the company can offer, and its feature set never expands during its lifetime. Updates are released only to address bugs in the product. The previous technology train becomes the source for the current mainline train; for example, the 12.1T train became the basis for the 12.2 mainline. Therefore, to determine the features available in a particular mainline release, look at the previous T train release.
The T (Technology) train gets new features and bug fixes throughout its life, and is therefore potentially less stable than the mainline. (In releases prior to Cisco IOS Release 12.0, the P train served as the Technology train.) Cisco does not recommend the use of a T train in production environments unless there is urgency to implement one of that train's new IOS features. The S (Service Provider) train runs only on the company's core router products and is heavily customized for service provider customers. The E (Enterprise) train is customized for implementation in enterprise environments. The B (broadband) train supports internet-based broadband features. The X* (XA, XB, etc.) Special Release trains contain one-off releases designed to fix a certain bug or provide a new feature; these are eventually merged with one of the above trains. There were other trains from time to time, designed for specific needs: for example, the 12.0AA train contained new code required for Cisco's AS5800 product. Since 15.0 Starting with Cisco IOS release 15, there is just a single train, the M/T train. This train includes both extended maintenance releases and standard maintenance releases. The M releases are extended maintenance releases, for which Cisco will provide bug fixes for 44 months. The T releases are standard maintenance releases, for which Cisco will only provide bug fixes for 18 months. Security and vulnerabilities Because IOS needs to know the cleartext password for certain uses (e.g., CHAP authentication), passwords entered into the CLI are by default only weakly encrypted as 'Type 7' ciphertext, such as "Router(config)#username jdoe password 7 0832585B1910010713181F". This is designed to prevent "shoulder-surfing" attacks when viewing router configurations, but it is not secure: Type 7 passwords are easily decrypted using software called "getpass", available since 1995, or "ios7crypt", a modern variant. The passwords can even be decoded by the router itself, by using the "key chain" command, entering the Type 7 password as the key, and then issuing a "show key" command; the above example decrypts to "stupidpass". However, these tools will not decrypt 'Type 5' passwords or passwords set with the enable secret command, which uses salted MD5 hashes. Cisco recommends that all Cisco IOS devices implement the authentication, authorization, and accounting (AAA) security model. AAA can use local, RADIUS, and TACACS+ databases. However, a local account is usually still required for emergency situations. At the Black Hat Briefings conference in July 2005, Michael Lynn, working for Internet Security Systems at the time, presented information about a vulnerability in IOS. Cisco had already issued a patch, but asked that the flaw not be disclosed. Cisco filed a lawsuit, but settled after an injunction was issued to prevent further disclosures. IOS XR train For Cisco products that required very high availability, such as the Cisco CRS-1, the limitations of a monolithic kernel were not acceptable. In addition, competitive router operating systems that emerged 10–20 years after IOS, such as Juniper's JUNOS, were designed not to have these limitations. Cisco's response was to develop a tree of the Cisco IOS that offered modularity and memory protection between processes, lightweight threads, pre-emptive scheduling and the ability to independently restart failed processes.
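The weakness of the Type 7 scheme described under Security and vulnerabilities above is easy to demonstrate. The following Python sketch implements the commonly published decoding algorithm; the translation key XLAT is the widely circulated community reconstruction, assumed here rather than taken from any official Cisco documentation:

XLAT = "dsfd;kfoA,.iyewrkldJKDHSUBsgvca69834ncxv"  # community-reconstructed key

def decrypt_type7(encoded: str) -> str:
    # The first two digits are a decimal offset into XLAT; the remainder
    # is a sequence of hex pairs, each XORed with successive key bytes.
    offset = int(encoded[:2], 10)
    pairs = [encoded[i:i + 2] for i in range(2, len(encoded), 2)]
    return "".join(
        chr(int(pair, 16) ^ ord(XLAT[(offset + i) % len(XLAT)]))
        for i, pair in enumerate(pairs)
    )

print(decrypt_type7("0832585B1910010713181F"))  # prints "stupidpass"

Run against the configuration line quoted in that section, this reproduces the stated plaintext, which illustrates why Type 7 is obfuscation rather than encryption.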
The IOS XR development train used the real-time operating system microkernel (QNX), so a large part of the IOS source code was re-written to take advantage of the features offered by the kernel. In 2005 Cisco introduced the Cisco IOS XR network operating system on the 12000 series of network routers, extending the microkernel architecture from the CRS-1 routers to Cisco's widely deployed core routers. In 2006 Cisco introduced IOS Software Modularity, which extends the microkernel architecture into the IOS environment, while still providing the software upgrade capabilities. See also Cisco IOS XE Cisco IOS XR Cisco NX-OS JUNOS Supervisor Engine (Cisco) Network operating system Packet Tracer References External links Cisco Security Advisories; Complete History Cisco IOS Commands Cisco-centric Open Source Community Cisco IOS Packaging Rootkits on Cisco IOS Devices IOS Embedded operating systems Internet Protocol based network software Network operating systems Routers (computing)
Operating System (OS)
407
Ubuntu Kylin Ubuntu Kylin is the official Chinese version of the Ubuntu computer operating system. It is intended for desktop and laptop computers, and has been described as a "loose continuation of the Chinese Kylin OS". In 2013, Canonical Ltd. reached an agreement with the Ministry of Industry and Information Technology of the People's Republic of China to co-create and release an Ubuntu-based operating system with features targeted at the Chinese market. The first official release, Ubuntu Kylin 13.04, was released on 25 April 2013, on the same day as Ubuntu 13.04 (Raring Ringtail). Features include Chinese input methods, Chinese calendars, a weather indicator, and online music search from the Dash. History The current version is 21.10. Version 20.04 introduced version 3.0 of its own newly developed UKUI (Ubuntu Kylin User Interface); formerly, UKUI was a customization of the MATE desktop. Version 14.10 introduced the Ubuntu Kylin Software Center (UKSC), and a utility which helps common end-users with daily computing tasks, called Youker Assistant. The team cooperates with Sogou to develop Sogou Input Method for Linux. Since it is closed source, it is not included in the official Ubuntu Kylin image, but users can download it from UKSC or Sogou's website. WPS Office, also closed-source, is the default office suite in the pro and enhanced editions. LibreOffice, however, is used as the default in the official vanilla Ubuntu Kylin image from the main Ubuntu server website, which ships without WPS Office installed. Release History See also Canaima (operating system) GendBuntu BOSS Linux Inspur LiMux Nova (operating system) VIT, C.A. Kingsoft WPS Office References External links Ubuntu Kylin, wiki of the Ubuntu Kylin Team at wiki.ubuntu.com. Chinese-language Linux distributions Ubuntu derivatives Linux distributions
Operating System (OS)
408
List of computer term etymologies This is a list of the origins of computer-related terms or terms used in the computing world (i.e., a list of computer term etymologies). It relates to both computer hardware and computer software. Names of many computer terms, especially computer applications, often relate to the function they perform, e.g., a compiler is an application that compiles (programming language source code into the computer's machine language). However, there are other terms with less obvious origins, which are of etymological interest. This article lists such terms. A ABEND – originally from an IBM System/360 error message, short for "abnormal end". Jokingly reinterpreted as German Abend ("evening"), because "it is what system operators do to the machine late on Friday when they want to call it a day." Ada – named after Ada Lovelace, who is considered by many to be the first programmer. Apache – originally chosen from respect for the Native American Indian tribe of Apache. It was suggested that the name was appropriate, as Apache began as a series of patches to code written for NCSA's HTTPd daemon. The result was "a patchy" server. AWK – composed of the initials of its authors Aho, Weinberger, and Kernighan. B B – probably a contraction of "BCPL", reflecting Ken Thompson's efforts to implement a smaller BCPL in 8 KB of memory on a DEC PDP-7. Or, named after Bon. biff – named after a dog known by the developers at Berkeley, who – according to the UNIX manual page – died on 15 August 1993, at the age of 15, and belonged to a certain Heidi Stettner. Some sources report that the dog would bark at the mail carrier, making it a natural choice for the name of a mail notification system. The Jargon File contradicts this description, but confirms at least that the dog existed. bit – first used by Claude E. Shannon in his seminal 1948 paper A Mathematical Theory of Communication. Shannon's "bit" is a portmanteau of "binary digit". He attributed its origin to John W. Tukey, who had used the word in a Bell Labs memo of 9 January 1947. Bon – created by Ken Thompson and named either after his wife Bonnie, or else after "a religion whose rituals involve the murmuring of magic formulas" (a reference to the Tibetan native religion Bön). booting or bootstrapping – from the phrase "to pull oneself up by one's bootstraps", originally used as a metaphor for any self-initiating or self-sustaining process. Used in computing due to the apparent paradox that a computer must run code to load anything into memory, but code cannot be run until it is loaded. bug – often (but erroneously) credited to Grace Hopper. In 1946, she joined the Harvard Faculty at the Computation Laboratory where she traced an error in the Harvard Mark II to a moth trapped in a relay. This bug was carefully removed and taped to the log book. However, use of the word 'bug' to describe defects in mechanical systems dates back to at least the 1870s, perhaps especially in Scotland. Thomas Edison, for one, used the term in his notebooks and letters. byte – coined by Werner Buchholz in June 1956 during the early design phase for the IBM Stretch computer. C C – a programming language. Dennis Ritchie, having improved on the B language, named his creation New B. He later renamed it C. (See also D). C++ – an object-oriented programming language, a successor to the C programming language. C++ creator Bjarne Stroustrup named his new language "C with Classes" and then "new C". 
The original language began to be called "old C", which was considered insulting to the C community. At this time Rick Mascitti suggested the name C++ as a successor to C. In C the '++' operator increments the value of the variable it is appended to; thus C++ would increment the value of C. computer – from the human computers who carried out calculations mentally and possibly with mechanical aids, now replaced by electronic programmable computers. cookie – a packet of information that travels between a browser and the web server. The term was coined by web browser programmer Lou Montulli after the term "magic cookies" used by Unix programmers. The term "magic cookie" in turn derives from "fortune cookie", a cookie with an embedded message. cursor – Latin for 'runner'. A cursor is the name given to the transparent slide engraved with a hairline that is used for marking a point on a slide rule. The term was then transferred to computers through analogy. D D – a programming language. Designed by Walter Bright as an improved C, avoiding many of the design problems of C (e.g., extensive pointer manipulation, unenforced array boundaries, etc.). daemon – a process in an operating system that runs in the background. It is not an acronym for Disk And Execution Monitor: according to the original team that introduced the concept, the use of the word daemon was inspired by the Maxwell's demon of physics and thermodynamics (an imaginary agent which helped sort molecules with differing velocities and worked tirelessly in the background). The term was embraced, and possibly popularized, by the Unix operating systems which supported multiple background processes: various local (and later Internet) services were provided by daemons. This is exemplified by the BSD mascot, John Lasseter's drawing of a friendly imp. dashboard – originally, the word dashboard applied to a barrier of wood or leather fixed at the front of a horse-drawn carriage or sleigh to protect the driver from mud or other debris "dashed up" (thrown up) by the horses' hooves. The first known use of the term (hyphenated as dash-board, and applied to sleighs) dates from 1847. Commonly these boards did not perform any additional function other than providing a convenient handhold for ascending into the driver's seat, or a small clip with which to secure the reins when not in use. Debian – a Linux distribution. A portmanteau of the names Ian Murdock, the Debian Project creator, and Debra Lynn, Ian's then girlfriend and future wife. default – an initial value for a variable or user setting. The original meaning of the word 'default' is 'failure to fulfill an obligation'. The obligation here is to provide an input that is required by a program. In the early days of programming, if an input value was missing, or 'null', the program would almost certainly crash. This is often to do with variable 'typing' – for example, a simple calculation program would expect a number as an input: any other type of input, such as a text string or even a null (no value), would make any mathematical operation such as multiplication impossible. In order to guard against this possibility, programmers defined initial values that would be used if the user defaulted, or failed to fulfill the obligation of providing the correct input value. Over time, the term 'default' has come to refer to the initial value itself. E Ethernet – a computer networking technology.
According to Robert Metcalfe (one of its initial developers), he devised the name in an early company memo as an endocentric compound of "luminiferous ether" (the "substance" that was widely believed to be the medium through which electromagnetic radiation propagated in the late 19th century) and "net", short for "network". When the networking team would describe data flowing into the network infrastructure, they would routinely describe it as data packets going "up into the ether". F finger – a Unix command that provides information about users logged into a system. Les Earnest wrote the finger program in 1971 to provide for users who wanted information about other users on a network or system. According to Earnest, it was named after the act of pointing, because it "bypassed the need to point to a user ID and ask, 'Who is that?'" foobar – from the U.S. Army slang acronym FUBAR. Both foo and bar are commonly used as metasyntactic variables. G Gentoo – a Linux distribution. Named after a variety of penguin, the universal Linux mascot. Git – a distributed version control system. In the project's initial README file, Linus Torvalds wrote that "'git' can mean anything, depending on your mood", and offered several definitions: a random three-letter combination which is pronounceable and not a preexisting Unix command; British English slang, meaning a stupid or contemptible person; an acronym for "global information tracker" (when it works); and an acronym for "goddamn idiotic truckload of sh*t" (when it breaks). When asked about the origin of the name, Torvalds jokingly stated, "I'm an egotistical bastard, and I name all my projects after myself." GNU – a project with an original goal of creating a free operating system. The gnu is also a species of African antelope. The founder of the GNU project, Richard Stallman, liked the name because of the humour associated with its pronunciation, and was also influenced by The Gnu Song, by Flanders and Swann, which is sung by a gnu. It is also an early example of a recursive acronym: "GNU's Not Unix". Google – a search engine. The name started as an exaggerated boast about the amount of information the search engine would be able to search. It was originally named 'Googol', a word for the number represented by 1 followed by 100 zeros. The word was originally invented by Milton Sirotta, nephew of mathematician Edward Kasner, in 1938 during a discussion of large numbers and exponential notation. Gopher – an early protocol for distributing documents over a network, which declined in favor of the World Wide Web. The name was coined by developer Farhad Anklesaria as a play on gofer, an assistant who fetches things, and a gopher, an animal that digs, as if through nested hierarchies. The name was also inspired by Goldy Gopher, the mascot for the University of Minnesota, where the protocol was developed. grep – a Unix command line utility. The name comes from a command in the Unix text editor ed that takes the form g/re/p, meaning "search globally for a regular expression and print lines where instances are found". "Grep", like "Google", is often used as a verb, meaning "to search". H Hotmail – a free email service, now named Outlook.com. Founder Jack Smith got the idea of accessing e-mail via the web from a computer anywhere in the world. When Sabeer Bhatia came up with the business plan for the mail service, he tried all kinds of names ending in 'mail' and finally settled for Hotmail as it included the letters "HTML", the markup language used to write web pages.
It was initially referred to as HoTMaiL with selective upper casing. I i18n – short for "internationalization". "18" is for the number of letters between the i and the n. Related, less common terms include l10n (for localization), g11n (for globalization) and a11y (for accessibility). ICQ – an instant messaging service. ICQ is not an initialism. It is a play on the phrase "I seek you" or "Internet seek you" (similar to CQ in ham radio usage). ID10T – pronounced "ID ten T" – is a code frequently used by a customer service representative (CSR) to annotate their notes and identify the source of a problem as the person who is reporting the problem rather than the system being blamed. This is a thinly veiled reference to the CSR's opinion that the person reporting the problem is an IDIOT. Example: Problem reported caused by ID10T, no resolution possible. See also PEBKAC. J Jakarta Project – a project constituted by Sun and Apache to create a web server for Java servlets and JSPs. Jakarta was the name of the conference room at Sun where most of the meetings between Sun and Apache took place. The conference room was most likely named after Jakarta, the capital city of Indonesia, which is located on the northwest coast of the island of Java. Java – a programming language by Sun Microsystems, later acquired by Oracle. Named after Java coffee, a blend of coffee from the island of Java, and also used as slang for coffee in general. The language was initially called "Greentalk" and later "Oak", but this was already trademarked by Oak Technologies, so the developers had to choose another name shortly before release. Other suggested names were "WebRunner", "DNA", and "Silk". JavaScript – a programming language. It was originally developed by Brendan Eich of Netscape under the name "Mocha", which was later renamed to "LiveScript", and finally to "JavaScript". The change of name from LiveScript to JavaScript roughly coincided with Netscape adding support for Java technology in its Netscape Navigator web browser. JavaScript was first introduced and deployed in the Netscape browser version 2.0B3 in December 1995. The naming has caused confusion, giving the impression that the language is a spin-off of Java, and it has been characterized by many as a marketing ploy by Netscape to give JavaScript the cachet of what was then the hot new web-programming language. K Kerberos – a computer network authentication protocol that is used by both Windows 2000 and Windows XP as their default authentication method. When it was created by programmers at MIT in the 1980s, they wanted a name that suggested high security for the project, so they named it after Kerberos, in Greek mythology the three-headed dog guarding the gates of Hades. The reference to Greek mythology is most likely because Kerberos was developed as part of Project Athena. L Linux – an operating system kernel, and the common name for many of the operating systems which use it. Linux creator Linus Torvalds originally used the MINIX operating system on his computer, didn't like it, liked DOS less, and started a project to develop an operating system that would address the problems of MINIX. Hence the working name was Linux (Linus' Minix). Originally, however, Linus had planned to have it named Freax (free + freak + x). His friend Ari Lemmke encouraged Linus to upload it to a network so it could be easily downloaded. Ari gave Linus a directory named linux on his FTP server, as he did not like the name Freax.
Lisa – A personal computer designed at Apple Computer during the early 1980s. Apple stated that Lisa was an acronym for Local Integrated Software Architecture; however, it is often inferred that the machine was originally named after the daughter of Apple co-founder Steve Jobs, and that this acronym was invented later to fit the name. Accordingly, two humorous suggestions for expanding the acronym included Let's Invent Some Acronyms, and Let's Invent Silly Acronyms. liveware – computer personnel. A play on the terms "software" and "hardware". Coined in 1966, the word indicates that sometimes the computer problem is not with the computer itself, but with the user. Lotus Software – Lotus founder Mitch Kapor got the name for his company from 'The Lotus Position' ('Padmasana' in Sanskrit). Kapor used to be a teacher of Transcendental Meditation technique as taught by Maharishi Mahesh Yogi. M Macintosh, Mac – a personal computer from Apple Computer. From McIntosh, a popular type of apple. N Nerd – A colloquial term for a computer person, especially an obsessive, singularly focused one. Originally created by Dr. Seuss from his book If I Ran the Zoo. O Oracle – a relational database management system (RDBMS). Larry Ellison, Ed Oates and Bob Miner were working on a consulting project for the CIA (Central Intelligence Agency). The code name for the project was Oracle (the CIA evidently saw this as a system that would give answers to all questions). The project was designed to use the newly written SQL database language from IBM. The project eventually was terminated but they decided to finish what they started and bring it to the world. They kept the name Oracle and created the RDBMS engine. P Pac-Man – a video arcade game. The term comes from paku paku which is a Japanese onomatopoeia used for noisy eating; similar to chomp chomp. The game was released in Japan with the name Puck-Man, and released in the US with the name Pac-Man, fearing that kids may deface a Puck-Man cabinet by changing the P to an F. Patch – A set of changes to a computer program or its supporting data designed to update, fix, or improve it. Historically, software suppliers distributed patches on paper tape or on punched cards, expecting the recipient to cut out the indicated part of the original tape (or deck), and patch in (hence the name) the replacement segment PCMCIA – the standards body for PC card and ExpressCard, expansion card form factors. The Personal Computer Memory Card International Association is an international standards body that defines and promotes standards for expansion devices such as modems and external hard disk drives to be connected to notebook computers. Over time, the acronym PCMCIA has been used to refer to the PC card form factor used on notebook computers. A twist on the acronym is People Can't Memorize Computer Industry Acronyms. PEBKAC – an acronym for "Problem Exists Between Keyboard And Chair", which is a code frequently used by a customer service representative (CSR) to annotate their notes and identify the source of a problem as the person who is reporting the problem rather than the system being blamed. This is a thinly veiled reference to the CSR's opinion that the person reporting the problem is the problem. Example: PEBKAC, no resolution possible. See also ID10T. Pentium – a series of microprocessors from Intel. The fifth microprocessor in the 80x86 series. 
It would have been named i586 or 80586, but Intel decided to name it Pentium (penta = five) after it lost a trademark infringement lawsuit against AMD due to a judgment that numbers like "286", "386", and "486" cannot be trademarked. According to Intel, Pentium conveys a meaning of strength, like titanium. Since some early Pentium chips contained a mathematical precision error, it has been jokingly suggested that the reason for the chip being named Pentium rather than 586 was that Intel chips would calculate 486 + 100 = 585.99999948. Perl – an interpreted scripting language. Perl was originally named Pearl, after the "pearl of great price" of Matthew 13:46. Larry Wall, the creator of Perl, wanted to give the language a short name with positive connotations and claims to have looked at (and rejected) every three- and four-letter word in the dictionary. He even thought of naming it after his wife Gloria. Before the language's official release Wall discovered that there was already a programming language named Pearl, and changed the spelling of the name. Although the original manuals suggested the backronyms "Practical Extraction and Report Language" and "Pathologically Eclectic Rubbish Lister", these were intended humorously. PHP – a server-side scripting language Originally named "Personal Home Page Tools" by creator Rasmus Lerdorf, it was rewritten by developers Zeev Suraski and Andi Gutmans who gave it the recursive name "PHP Hypertext Preprocessor". Lerdorf currently insists the name should not be thought of as standing for anything, for he selected "Personal Home Page" as the name when he did not foresee PHP evolving into a general-purpose programming language. Pine – e-mail client. Many people believe that Pine stands for "Pine Is Not Elm". However, one of its original authors, Laurence Lundblade, insists this was never the case and that it started off simply as a word and not an acronym; his first choice of a backronym for pine would be "Pine Is Nearly Elm". Over time it was changed to mean Program for Internet News and E-mail. ping – a computer network tool used to detect hosts. The author of ping, Mike Muuss, named it after the pulses of sound made by a sonar called a "ping". Later Dave Mills provided the backronym "Packet Internet Groper". Python – an interpreted scripting programming language. Named after the television series Monty Python's Flying Circus. R Radio button – a GUI widget used for making selections. Radio buttons got their name from the preset buttons in radio receivers. When one used to select preset stations on a radio receiver physically instead of electronically, depressing one preset button would pop out whichever other button happened to be pushed in. Red Hat Linux – a Linux distribution from Red Hat. Company founder Marc Ewing was given the Cornell lacrosse team cap (with red and white stripes) by his grandfather while at college. People would turn to him to solve their problems, and he was referred to as "that guy in the red hat". He lost the cap and had to search for it desperately. The manual of the beta version of Red Hat Linux had an appeal to readers to return the hat if found by anyone. RSA – an asymmetric algorithm for public key cryptography. Based on the surnames of the authors of this algorithm – Ron Rivest, Adi Shamir and Len Adleman. S Samba – a free implementation of Microsoft's networking protocol. 
The name samba comes from inserting two vowels into the name of the standard protocol that Microsoft Windows network file system use, named Server Message Block (SMB). The author searched a dictionary using grep for words containing S M and B in that order; the only matches were Samba and Salmonberry. shareware – coined by Bob Wallace to describe his word processor PC-Write in early 1983. Before this Jim Knopf (also known as Jim Button) and Andrew Fluegelman called their distributed software "user supported software" and "freeware" respectively, but it was Wallace's terminology that prevailed. spam – unwanted repetitious messages, such as unsolicited bulk e-mail. The term spam is derived from the Monty Python SPAM sketch, set in a cafe where everything on the menu includes SPAM luncheon meat. While a customer plaintively asks for some kind of food without SPAM in it, the server reiterates the SPAM-filled menu. Soon, a chorus of Vikings join in with a song: "SPAM, SPAM, SPAM, SPAM, SPAM, lovely SPAM, wonderful SPAM", over and over again, drowning out all conversation. SPIM – a simulator for a virtual machine closely resembling the instruction set of MIPS processors, is simply MIPS spelled backwards. In recent time, spim has also come to mean SPam sent over Instant Messaging. Swing – a graphics library for Java. Swing was the code-name of the project that developed the new graphic components (the successor of AWT). It was named after swing, a style of dance band jazz that was popularized in the 1930s and unexpectedly revived in the 1990s. Although an unofficial name for the components, it gained popular acceptance with the use of the word in the package names for the Swing API, which begin with javax.swing. T Tomcat – a web server from the Jakarta Project. Tomcat was the code-name for the JSDK 2.1 project inside Sun. Tomcat started off as a servlet specification implementation by James Duncan Davidson who was a software architect at Sun. Davidson had initially hoped that the project would be made open-source, and since most open-source projects had O'Reilly books on them with an animal on the cover, he wanted to name the project after an animal. He came up with Tomcat since he reasoned the animal represented something that could take care of and fend for itself. troff – a document processing system for Unix. Troff stands for "typesetter roff", although many people have speculated that it actually means "Times roff" because of the use of the Times font family in troff by default. Troff has its origins from roff, an earlier formatting program, whose name is a contraction of "run off". Trojan horse – a malicious program that is disguised as legitimate software. The term is derived from the classical myth of the Trojan Horse. Analogously, a Trojan horse appears innocuous (or even to be a gift), but in fact is a vehicle for bypassing security. Tux – The penguin mascot used as the primary logo for the Linux kernel, and Linux-based operating systems. Linus Torvalds, the creator of Linux, suggested a penguin mascot because he "likes penguins a lot", and wanted Linux to be associated with something "kind of goofy and fun". The logo was originally created by Larry Ewing in 1996 as an entry in a Linux Logo competition. The name Tux was contributed by James Hughes, who suggested "(T)orvolds (U)ni(X) — TUX!" U Ubuntu Linux – a Debian-based Linux distribution sponsored by Canonical Ltd. Derived from ubuntu, a South African ideology. Unix – an operating system. 
When Bell Labs pulled out of the MULTiplexed Information and Computing System (MULTICS) project, which was originally a joint Bell Labs/GE/MIT project, Ken Thompson of Bell Labs, soon joined by Dennis Ritchie, wrote a simpler version of the operating system for a spare DEC minicomputer, allegedly found in a corridor. They needed an OS to run the game Space Travel, which had been compiled under MULTICS. The new OS was named UNICS – UNiplexed Information and Computing System – by Brian Kernighan. V vi – a text editor. An initialism for visual, a command in the ex editor which helped users to switch to the visual mode from the ex mode. The first version was written by Bill Joy at UC Berkeley. Vim – a text editor. An acronym for Vi IMproved, adopted after Vim added several features over the vi editor; Vim had started out as an imitation of vi, with the name originally standing for Vi IMitation. Virus – a piece of program code that spreads by making copies of itself. The term virus was first used as a technical computer science term by Fred Cohen in his 1984 paper "Computer Viruses – Theory and Experiments", where he credits Len Adleman with coining it. Although Cohen's use of virus may have been the first academic use, it had been in the common parlance long before that. A mid-1970s science fiction novel by David Gerrold, When H.A.R.L.I.E. was One, includes a description of a fictional computer program named VIRUS that worked just like a virus (and was countered by a program named ANTIBODY). The term "computer virus" also appears in the comic book "Uncanny X-Men" No. 158, published in 1982. A computer virus's basic function is to insert its own executable code into that of other existing executable files, literally making it the electronic equivalent to the biological virus, the basic function of which is to insert its genetic information into that of the invaded cell, forcing the cell to reproduce the virus. W Wiki or WikiWiki – a hypertext document collection or the collaborative software used to create it. Coined by Ward Cunningham, the creator of the wiki concept, who named them for the "wiki wiki" or "quick" shuttle buses at Honolulu Airport. Wiki wiki was the first Hawaiian term he learned on his first visit to the islands. The airport counter agent directed him to take the wiki wiki bus between terminals. Worm – a self-replicating program, similar to a virus. The name 'worm' was taken from a 1970s science fiction novel by John Brunner entitled The Shockwave Rider. The book describes programs known as "tapeworms" which spread through a network for the purpose of deleting data. Researchers writing an early paper on experiments in distributed computing noted the similarities between their software and the program described by Brunner, and adopted that name. WYSIWYG – describes a system in which content during editing appears very similar to the final product. An acronym for What You See Is What You Get, the phrase was originated by a newsletter published by Arlene and Jose Ramos, named WYSIWYG. It was created for the emerging pre-press industry going electronic in the late 1970s. X X Window System – a windowing system for computers with bitmap displays. X derives its name as a successor to a pre-1983 window system named the W Window System. Y Yahoo! – an internet portal and web directory. Yahoo!'s history site says the name is an acronym for "Yet Another Hierarchical Officious Oracle", but some remember that in its early days (mid-1990s), when Yahoo!
lived on a server named akebono.stanford.edu, it was glossed as "Yet Another Hierarchical Object Organizer." The word "Yahoo!" was originally invented by Jonathan Swift and used in his book Gulliver's Travels. It represents a person who is repulsive in appearance and action and is barely human. Yahoo! founders Jerry Yang and David Filo selected the name because they considered themselves yahoos. Z zip – a file format, also used as a verb to mean compress. The file format was created by Phil Katz, and given the name by his friend Robert Mahoney. The compression tool Phil Katz created was named PKZIP. Zip means "speed", and they wanted to imply their product would be faster than ARC and other compression formats of the time. See also Glossary of computer terms List of company name etymologies Lists of etymologies References Etymologies Computer terms
Operating System (OS)
409
OS Fund OS Fund is an American venture capital fund that invests in early-stage science and technology companies. Firm Bryan Johnson created OS Fund in October 2014, a year after selling Braintree to eBay for $800 million. He devoted $100 million from the sale of the mobile-payment processor to establishing the fund. Johnson and Jeff Klunzinger serve as the fund's general partners. The fund draws its name from the acronym for operating system (OS), the software that underlies the basic functions of computers and provides a foundation for other applications. OS Fund focuses on investment in technologies and platforms in genomics, synthetic biology, computationally derived therapeutics, advanced materials, and diagnostics. In September 2015, OS Fund published the methodology it uses to evaluate investments in the field of synthetic biology. Johnson said the fund publicly released the "playbook" on its website to encourage others to invest in emerging sciences. Origins Johnson has said he launched the fund in response to a pullback in federal support of research and development, and because of a general reluctance by more traditional venture capital firms to make science-related investments. He also has spoken of a desire to use his resources to improve the lives of others. Investments OS Fund focuses on early-stage, computationally driven companies that utilize artificial intelligence and machine learning to develop platform technologies in the following sectors: genomics, synthetic biology, computationally derived therapeutics, advanced materials, and diagnostics. Notable OS Fund investments include the following: Ginkgo Bioworks, a Boston-based biotechnology company that uses genetic engineering to produce bacteria with industrial applications, recently valued at $4.2 billion. NuMat Technologies, pioneers of metal-organic frameworks that utilize high-performance computing to design material-enabled products. Atomwise, a San Francisco-based developer of an AI-powered drug discovery platform. Arzeda, a protein and enzymatic design platform used to develop transformative products across industries. twoXAR, an AI-driven drug discovery platform. Lygos, a biological engineering platform that converts low-cost sugar to high-value chemicals. Tempo, a software-driven smart factory that merges data, analytics, and automation to rapidly and precisely print complex circuit board assemblies. Truvian, a benchtop blood testing diagnostic system that provides lab-accurate results in 20 minutes. Catalog, a Boston company focused on harnessing DNA to store data; the company recently showed it could store 14 gigabytes of data from Wikipedia.org in DNA molecules. A-Alpha Bio, a drug discovery platform that simultaneously sorts through millions of protein interactions for multiple targets. JUST, a San Francisco-based food technology company focused on sustainable plant-based food products, including a 100% plant-based egg alternative made from mung beans. Matternet, the builder and operator of drone logistics networks to transport goods on demand; in 2019, they partnered with UPS to transport medical samples across hospital systems, as well as with UPS and CVS Pharmacy to make at-home prescription deliveries. References External links Official website Financial services companies established in 2014 American companies established in 2014 Venture capital firms of the United States
Operating System (OS)
410
Windows 8 editions Windows 8, a major release of the Microsoft Windows operating system, was available in four different editions: Windows 8 (Core), Pro, Enterprise, and RT. Only Windows 8 (Core) and Pro were widely available at retailers. The other editions focus on other markets, such as embedded systems or enterprise. All editions except RT support 32-bit IA-32 CPUs and x64 CPUs. Editions Windows 8 (also sometimes referred to as Windows 8 (Core) to distinguish it from the OS itself) is the basic edition of Windows for the IA-32 and x64 architectures. This edition contains features aimed at the home market segment and provides all of the basic new Windows 8 features. Windows 8 Pro is comparable to Windows 7 Professional and Ultimate and is targeted towards enthusiasts and business users; it includes all the features of Windows 8. Additional features include the ability to receive Remote Desktop connections, the ability to participate in a Windows Server domain, Encrypting File System, Hyper-V, Virtual Hard Disk Booting, and Group Policy, as well as BitLocker and BitLocker To Go. Windows Media Center functionality is available only for Windows 8 Pro, as a separate software package. Windows 8 Enterprise provides all the features in Windows 8 Pro (except the ability to install the Windows Media Center add-on), with additional features to assist with IT organization (see table below). This edition is available to Software Assurance customers, as well as MSDN and Technet Professional subscribers, and was released on 16 August 2012. Windows RT is only available pre-installed on ARM-based devices such as tablet PCs. It includes touch-optimized desktop versions of the basic set of Office 2013 applications (Microsoft Word, Excel, PowerPoint, and OneNote) and supports device encryption capabilities. Several business-focused features such as Group Policy and domain support are not included. Software for Windows RT can be either downloaded from the Windows Store or sideloaded, although sideloading on Windows RT must first be enabled by purchasing additional licenses through a Microsoft volume licensing outlet. Desktop software that runs on previous versions of Windows cannot run on Windows RT, as Windows Store apps are based on the Windows Runtime API, which differs from the API used by traditional desktop applications. According to CNET, these essential differences may raise the question of whether Windows RT is an edition of Windows: in a conversation with Mozilla, Microsoft deputy general counsel David Heiner was reported to have said Windows RT "isn't Windows anymore." Mozilla's general counsel, however, dismissed the assertion on the basis that Windows RT has the same user interface, application programming interface and update mechanism. Unlike Windows Vista and Windows 7, there are no Starter, Home Basic, Home Premium, or Ultimate editions. Regional restrictions and variations All mentioned editions have the ability to use language packs, enabling multiple user interface languages. (This functionality was previously available only in the Ultimate or Enterprise editions of Windows 7 and Windows Vista.) However, in China and other emerging markets, a variation of Windows 8 without this capability, called Windows 8 Single Language, is sold. This edition can be upgraded to Windows 8 Pro. Furthermore, as in Windows Phone 7, OEMs can choose not to support certain display languages, either out of the box or available for download. These exact choices depend on the device manufacturer, country of purchase, and the wireless carrier.
For example, a cellular-connected Samsung ATIV Smart PC running Windows 8 on AT&T only supports English, Spanish, French, German, Italian, and Korean (the last three are available as optional downloads). Additional Windows 8 editions intended specifically for European markets have the letter "N" suffixed to their names (e.g. Windows 8.1 Enterprise N) and do not include a bundled copy of Windows Media Player. Microsoft was required to create the "N" editions of Windows after the European Commission ruled in 2004 that it needed to provide a copy of Windows without Windows Media Player tied in. Windows 8.1 with Bing is a reduced-cost SKU of Windows 8.1 for OEMs that was introduced in May 2014. It was introduced as part of an effort to encourage the production of low-cost devices, whilst "driving end-user usage of Microsoft Services such as Bing and OneDrive". It is subsidized by Microsoft's Bing search engine, which is set as the default within Internet Explorer and cannot be changed to a third-party alternative by the OEM. This restriction does not apply to end users, who can still change the default search engine freely after installation. It is otherwise identical to the base edition. Editions for embedded systems Windows Embedded 8 Standard is a componentized edition of Windows 8 for use in specialized devices. Windows Embedded 8 Standard was released on 20 March 2013. Windows Embedded 8 Industry is suited to powering industry devices such as ATMs, control panels, kiosks, and POS terminals; it is available in Pro, Pro Retail, and Enterprise editions. Windows Embedded 8 Industry was released on 2 April 2013. Upgrade compatibility The following in-place upgrade paths are supported from Windows 7. Note that it is only possible to upgrade from an IA-32 variant of Windows 7 to an IA-32 variant of Windows 8; an x64 variant of Windows 7 can only be upgraded to an x64 variant of Windows 8. The retail package entitled Windows 8 Pro Upgrade was restricted to upgrading a computer with licensed Windows XP SP3, Windows Vista, or Windows 7. Finally, there is no upgrade path for Windows RT. In-place upgrade is not available from Windows Vista or Windows XP. However, on Windows XP SP3 and Windows Vista RTM, it is possible to perform a clean install while preserving personal files. On Windows Vista SP1, it is possible to perform a clean install that preserves system settings as well. While Microsoft still refers to these scenarios as "upgrades", the user still needs to reinstall all apps, carry out the necessary license activation steps, and reinstate app settings. Comparison chart Notes References
DOS memory management In IBM PC compatible computing, DOS memory management refers to software and techniques employed to give applications access to more than 640 kibibytes (640 × 1024 bytes, KiB) of "conventional memory". The 640 KiB limit was specific to the IBM PC and close compatibles; other machines running MS-DOS had different limits, for example the Apricot PC could have up to 768 KiB and the Sirius Victor 9000, 896 KiB. Memory management on the IBM family was made complex by the need to maintain backward compatibility with the original PC design and real-mode DOS, while allowing computer users to take advantage of large amounts of low-cost memory and new generations of processors. Since DOS has given way to Microsoft Windows and other 32-bit operating systems not restricted by the original arbitrary 640 KiB limit of the IBM PC, managing the memory of a personal computer no longer requires the user to manually manipulate internal settings and parameters of the system. The 640 KiB limit imposed great complexity on hardware and software intended to circumvent it; the physical memory in a machine could be organised as a combination of base or conventional memory (including lower memory), upper memory, high memory (not the same as upper memory), extended memory, and expanded memory, all handled in different ways. Conventional memory The Intel 8088 processor used in the original IBM PC had 20 address lines and so could directly address 1 MiB (2^20 bytes) of memory. Different areas of this address space were allocated to different kinds of memory used for different purposes. Starting at the lowest end of the address space, the PC had read/write random access memory (RAM) installed, which was used by DOS and application programs. The first part of this memory was installed on the motherboard of the system (in very early machines 64 KiB, later revised to 256 KiB). Additional memory could be added with cards plugged into the expansion slots; each card contained straps or switches to control what part of the address space was used to access the memory and devices on that card. On the IBM PC, all the address space up to 640 KiB was available for RAM. This part of the address space is called "conventional memory" since it is accessible to all versions of DOS automatically on startup. Segment 0, the first 64 KiB of conventional memory, is also called the low memory area. Normally expansion memory is set to be contiguous in the address space with the memory on the motherboard. If there was an unallocated gap between the motherboard memory and the expansion memory, the memory would not be automatically detected as usable by DOS. Upper memory area The upper memory area (UMA) refers to the address space between 640 KiB and 1024 KiB (0xA0000–0xFFFFF). The 128 KiB region between 0xA0000 and 0xBFFFF was reserved for VGA screen memory and legacy SMM. The 128 KiB region between 0xC0000 and 0xDFFFF was reserved for device Option ROMs, including the Video BIOS. The 64 KiB of the address space from 0xE0000 to 0xEFFFF was reserved for the BIOS or Option ROMs. The IBM PC reserved the uppermost 64 KiB of the address space from 0xF0000 to 0xFFFFF for the BIOS and Cassette BASIC read-only memory (ROM). For example, the monochrome video adapter memory area ran from 704 to 736 KiB (0xB0000–0xB7FFF). If only a monochrome display adapter was used, the address space between 0xA0000 and 0xAFFFF could be used for RAM, which would be contiguous with the conventional memory.
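The size of this contiguous block is exactly what the BIOS reports to software: interrupt 12h returns the number of contiguous KiB of conventional memory starting at address 0 in the AX register. The following is a minimal sketch, assuming a Borland-style 16-bit real-mode DOS compiler (such as Turbo C) that provides int86():

#include <stdio.h>
#include <dos.h>

int main(void)
{
    union REGS r;

    /* BIOS INT 12h: AX = size of contiguous conventional memory in KiB. */
    int86(0x12, &r, &r);
    printf("Conventional memory: %u KiB\n", r.x.ax);
    return 0;
}

On a monochrome-only machine with RAM mapped at 0xA0000 as described above, such a call could report more than 640 KiB.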
The system BIOS ROMs must be at the upper end of the address space because the CPU starting address is fixed by the design of the processor. The starting address is loaded into the program counter of the CPU after a hardware reset and must have a defined value that endures after power is interrupted to the system. On reset or power-up, the CPU loads the address from the system ROM and then jumps to a defined ROM location to begin executing the system power-on self-test and, eventually, to load an operating system. Since an expansion card such as a video adapter, hard drive controller, or network adapter could use allocations of memory in many of the upper memory areas, configuration of some combinations of cards required careful reading of documentation, or experimentation, to find card settings and memory mappings that worked. Mapping two devices to use the same physical memory addresses could result in a stalled or unstable system. Not all addresses in the upper memory area were used in a typical system; unused physical addresses would return undefined and system-dependent data if accessed by the processor. Expanded memory As memory prices declined, application programs such as spreadsheets and computer-aided drafting were changed to take advantage of more and more physical memory in the system. Virtual memory in the 8088 and 8086 was not supported by the processor hardware, and disk technology of the time would have made it too slow and cumbersome to be practical. Expanded memory was a system that allowed application programs to access more RAM than was directly visible in the processor's address space. The process was a form of bank switching. When extra RAM was needed, driver software would temporarily make a piece of expanded memory accessible to the processor; when the data in that piece of memory was updated, another part could be swapped into the processor's address space. For the IBM PC and IBM PC/XT, with only 20 address lines, special-purpose expanded memory cards were made containing perhaps a megabyte, or more, of expanded memory, with logic on the board to make that memory accessible to the processor in defined parts of the 8088 address space. Allocation and use of expanded memory was not transparent to application programs. The application had to keep track of which bank of expanded memory contained a particular piece of data, and when access to that data was required, the application had to request (through a driver program) that the expanded memory board map that part of memory into the processor's address space. Although applications could use expanded memory with relative freedom, many other software components such as drivers and TSRs were still normally constrained to reside within the 640 KiB "conventional memory" area, which soon became a critically scarce resource.
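The driver interface that standardized this bank switching was the Lotus/Intel/Microsoft (LIM) Expanded Memory Specification, reached through interrupt 67h. The following sketch obtains the page-frame segment, allocates one 16 KiB logical page, maps it into the frame, and releases it again; it assumes an EMS driver is loaded and a Borland-style 16-bit DOS compiler, and it abbreviates error checking (each call returns a status code in AH, zero on success):

#include <stdio.h>
#include <dos.h>

int main(void)
{
    union REGS r;
    unsigned frame_seg, handle;

    r.h.ah = 0x41;                /* EMS function 41h: get page frame address */
    int86(0x67, &r, &r);
    frame_seg = r.x.bx;           /* segment of the 64 KiB page frame in the UMA */

    r.h.ah = 0x43;                /* EMS function 43h: allocate logical pages */
    r.x.bx = 1;                   /* request one 16 KiB page */
    int86(0x67, &r, &r);
    handle = r.x.dx;              /* EMM handle identifying the allocation */

    r.h.ah = 0x44;                /* EMS function 44h: map a logical page... */
    r.h.al = 0;                   /* ...into physical page 0 of the frame */
    r.x.bx = 0;                   /* logical page 0 of this handle */
    r.x.dx = handle;
    int86(0x67, &r, &r);

    printf("EMS page mapped at segment %04X\n", frame_seg);

    r.h.ah = 0x45;                /* EMS function 45h: release the handle's pages */
    r.x.dx = handle;
    int86(0x67, &r, &r);
    return 0;
}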
The 80286 and the high memory area When the IBM PC/AT was introduced, the segmented memory architecture of the Intel family of processors had the byproduct of allowing slightly more than 1 MiB of memory to be addressed in "real" mode. Since the 80286 had more than 20 address lines, certain combinations of segment and offset could point into memory above the 0x100000 (2^20) location. (In real mode a physical address is computed as segment × 16 + offset, so F800:8000 yields 0xF8000 + 0x8000 = 0x100000.) The 80286 could address up to 16 MiB of system memory, thus removing the behavior of memory addresses "wrapping around". Since the required address line now existed, the combination F800:8000 would no longer wrap to the physical address 0x000000 but pointed to the correct address 0x100000. As a result, some DOS programs that depended on the wrap-around would no longer work. To maintain compatibility with the PC and XT behavior, the AT included an A20 line gate (Gate A20) that made memory addresses on the AT wrap around to low memory as they would have on an 8088 processor. This gate could be controlled, initially through the keyboard controller, to allow programs designed for this to access an additional 65,520 bytes (64 KiB minus 16 bytes) of memory in real mode. At boot time, the BIOS first enables A20 when counting and testing all of the system's memory, and disables it before transferring control to the operating system. Enabling the A20 line is one of the first steps that a protected-mode x86 operating system performs in the bootup process, often before control has been passed to the kernel from the bootstrap (in the case of Linux, for example). The high memory area (HMA) is the RAM area consisting of the first 64 KiB, minus 16 bytes, of the extended memory on an IBM PC/AT or compatible microcomputer. Originally, the gate was implemented through the Intel 8042 keyboard controller, and controlling it was a relatively slow process. Other methods have since been added to allow more efficient multitasking between programs that require this wrap-around and programs that access all of the system's memory. There was at first a variety of methods, but eventually the industry settled on the PS/2 method of using a bit in port 92h to control the A20 line. Disconnecting A20 would not wrap all memory accesses above 1 MiB, just those in the 1 MiB, 3 MiB, 5 MiB, etc. ranges. Real-mode software only cared about the area slightly above 1 MiB, so Gate A20 was enough. Virtual 8086 mode, introduced with the Intel 80386, allows the A20 wrap-around to be simulated by using the virtual memory facilities of the processor: physical memory may be mapped to multiple virtual addresses, allowing the memory mapped at the first megabyte of virtual memory to be mapped again in the second megabyte of virtual memory. The operating system may intercept changes to Gate A20 and make corresponding changes to the virtual memory address space, which also makes the efficiency of Gate A20 toggling irrelevant. The first user of the HMA among Microsoft products was Windows 2.0 in 1987, which introduced the HIMEM.SYS device driver. Starting with versions 5.0 of DR-DOS (1990) and of MS-DOS (1991), parts of the operating system could be loaded into the HMA as well, freeing up to 46 KiB of conventional memory. Other components, such as device drivers and TSRs, could be loaded into the upper memory area (UMA). A20 handler The A20 handler is software controlling access to the high memory area. Extended memory managers usually provide this functionality. In DOS, high memory area managers such as HIMEM.SYS had the extra task of managing A20 and provided an API for opening and closing it. DOS itself could utilize the area for some of its storage needs, thereby freeing up more conventional memory for programs. This functionality was enabled by the "DOS=HIGH" directive in the CONFIG.SYS configuration file. A20 gate on later processors The Intel 80486 and Pentium added a special pin named A20M#, which when asserted low forces bit 20 of the physical address to be zero for all on-chip cache and external memory accesses. This was necessary since the 80486 introduced an on-chip cache, and therefore masking this bit in external logic was no longer possible. Software still needs to manipulate the gate and must still deal with external peripherals (the chipset) for that.
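As an illustration of the port 92h method mentioned above, the following sketch enables the A20 line from real-mode DOS. Bit 1 of port 92h gates A20, while bit 0 triggers a fast system reset and must be left clear when writing the port back. This is a hedged sketch assuming a Borland-style compiler that provides inportb() and outportb():

#include <dos.h>

/* Enable the A20 line using the PS/2-style "fast gate" at I/O port 92h. */
void enable_a20_fast(void)
{
    unsigned char v = inportb(0x92);

    if ((v & 0x02) == 0) {         /* bit 1 gates the A20 line */
        v |= 0x02;                 /* open the gate */
        v &= (unsigned char)~0x01; /* keep bit 0 clear: it triggers a fast reset */
        outportb(0x92, v);
    }
}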
80386 and subsequent processors Intel processors from the 386 onward allowed a virtual 8086 mode, which simplified the hardware required to implement expanded memory for DOS applications. Expanded memory managers such as Quarterdeck's QEMM product and Microsoft's EMM386 supported the expanded memory standard without requirement for special memory boards. On 386 and subsequent processors, memory managers like QEMM might move the bulk of the code for a driver or TSR into extended memory and replace it with a small fingerhold that was capable of accessing the extended-memory-resident code. They might analyze memory usage to detect drivers that required more RAM during startup than they did subsequently, and recover and reuse the memory that was no longer needed after startup. They might even remap areas of memory normally used for memory-mapped I/O. Many of these tricks involved assumptions about the functioning of drivers and other components. In effect, memory managers might reverse-engineer and modify other vendors' code on the fly. As might be expected, such tricks did not always work. Therefore, memory managers also incorporated very elaborate systems of configurable options, and provisions for recovery should a selected option render the PC unbootable (a frequent occurrence). Installing and configuring a memory manager might involve hours of experimentation with options, repeatedly rebooting the machine, and testing the results. But conventional memory was so valuable that PC owners felt that such time was well-spent if the result was to free up 30 KiB or 40 KiB of conventional memory space. Extended memory In the context of IBM PC compatible computers, extended memory refers to memory in the address space of the 80286 and subsequent processors, beyond the 1 megabyte limit imposed by the 20 address lines of the 8088 and 8086. Such memory is not directly available to DOS applications running in the so-called "real mode" of the 80286 and subsequent processors. This memory is only accessible in the protected or virtual modes of 80286 and higher processors. See also Global EMM Import Specification (GEMMIS) Virtual DMA Services (VDS) Virtual Control Program Interface (VCPI) Extended Virtual Control Program Interface (XVCPI) DOS Protected Mode Interface (DPMI) DOS Protected Mode Services (DPMS) Helix Cloaking References External links Microsoft support: Overview of Memory-Management Functionality in MS-DOS Computer Chronicles (1990). "High Memory Management". From the Internet Archive. Memory management
OSI model The Open Systems Interconnection model (OSI model) is a conceptual model that characterises and standardises the communication functions of a telecommunication or computing system without regard to its underlying internal structure and technology. Its goal is the interoperability of diverse communication systems with standard communication protocols. The model partitions the flow of data in a communication system into seven abstraction layers, from the physical implementation of transmitting bits across a communications medium to the highest-level representation of data of a distributed application. Each intermediate layer serves a class of functionality to the layer above it and is served by the layer below it. Classes of functionality are realized in software by standardized communication protocols. The OSI model was developed starting in the late 1970s to support the emergence of the diverse computer networking methods that were competing for application in the large national networking efforts in the world. In the 1980s, the model became a working product of the Open Systems Interconnection group at the International Organization for Standardization (ISO). While attempting to provide a comprehensive description of networking, the model failed to win the reliance of the software architects who designed the early Internet, which is reflected in the less prescriptive Internet Protocol Suite, principally sponsored under the auspices of the Internet Engineering Task Force (IETF). History In the early and mid-1970s, networking was largely either government-sponsored (NPL network in the UK, ARPANET in the US, CYCLADES in France) or vendor-developed with proprietary standards, such as IBM's Systems Network Architecture and Digital Equipment Corporation's DECnet. Public data networks were only just beginning to emerge, and these began to use the X.25 standard in the late 1970s. The Experimental Packet Switched System in the UK circa 1973–1975 identified the need for defining higher-level protocols. The UK National Computing Centre publication 'Why Distributed Computing', which came from considerable research into future configurations for computer systems, resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977. Beginning in 1977, the International Organization for Standardization (ISO) conducted a program to develop general standards and methods of networking. A similar process evolved at the International Telegraph and Telephone Consultative Committee (CCITT, from French: Comité Consultatif International Téléphonique et Télégraphique). Both bodies developed documents that defined similar networking models. The OSI model was first defined in raw form in Washington, DC, in February 1978 by Hubert Zimmermann of France, and the refined but still draft standard was published by the ISO in 1980. The drafters of the reference model had to contend with many competing priorities and interests. The rate of technological change made it necessary to define standards that new systems could converge to rather than standardizing procedures after the fact, the reverse of the traditional approach to developing standards. Although not a standard itself, it was a framework in which future standards could be defined.
In 1983, the CCITT and ISO documents were merged to form The Basic Reference Model for Open Systems Interconnection, usually referred to as the Open Systems Interconnection Reference Model, OSI Reference Model, or simply OSI model. It was published in 1984 by both the ISO, as standard ISO 7498, and the renamed CCITT (now called the Telecommunications Standardization Sector of the International Telecommunication Union or ITU-T) as standard X.200. OSI had two major components, an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols. The OSI reference model was a major advance in the standardisation of network concepts. It promoted the idea of a consistent model of protocol layers, defining interoperability between network devices and software. The concept of a seven-layer model was provided by the work of Charles Bachman at Honeywell Information Systems. Various aspects of OSI design evolved from experiences with the NPL network, ARPANET, CYCLADES, EIN, and the International Networking Working Group (IFIP WG6.1). In this model, a networking system was divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacted directly only with the layer immediately beneath it and provided facilities for use by the layer above it. The OSI standards documents are available from the ITU-T as the X.200-series of recommendations. Some of the protocol specifications were also available as part of the ITU-T X series. The equivalent ISO and ISO/IEC standards for the OSI model were available from ISO. Not all are free of charge. OSI was an industry effort, attempting to get industry participants to agree on common network standards to provide multi-vendor interoperability. It was common for large networks to support multiple network protocol suites, with many devices unable to interoperate with other devices because of a lack of common protocols. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. However, while OSI developed its networking standards in the late 1980s, TCP/IP came into widespread use on multi-vendor networks for internetworking. The OSI model is still used as a reference for teaching and documentation; however, the OSI protocols originally conceived for the model did not gain popularity. Some engineers argue the OSI reference model is still relevant to cloud computing. Others say the original OSI model doesn't fit today's networking protocols and have suggested instead a simplified approach. Definitions Communication protocols enable an entity in one host to interact with a corresponding entity at the same layer in another host. Service definitions, like the OSI Model, abstractly describe the functionality provided to an (N)-layer by an (N-1) layer, where N is one of the seven layers of protocols operating in the local host. At each level N, two entities at the communicating devices (layer N peers) exchange protocol data units (PDUs) by means of a layer N protocol. Each PDU contains a payload, called the service data unit (SDU), along with protocol-related headers or footers. Data processing by two communicating OSI-compatible devices proceeds as follows: The data to be transmitted is composed at the topmost layer of the transmitting device (layer N) into a protocol data unit (PDU). 
The PDU is passed to layer N-1, where it is known as the service data unit (SDU). At layer N-1 the SDU is concatenated with a header, a footer, or both, producing a layer N-1 PDU. It is then passed to layer N-2. The process continues until reaching the lowermost level, from which the data is transmitted to the receiving device. At the receiving device the data is passed from the lowest to the highest layer as a series of SDUs while being successively stripped from each layer's header or footer until reaching the topmost layer, where the last of the data is consumed. Standards documents The OSI model was defined in ISO/IEC 7498 which consists of the following parts: ISO/IEC 7498-1 The Basic Model ISO/IEC 7498-2 Security Architecture ISO/IEC 7498-3 Naming and addressing ISO/IEC 7498-4 Management framework ISO/IEC 7498-1 is also published as ITU-T Recommendation X.200. Layer architecture The recommendation X.200 describes seven layers, labelled 1 to 7. Layer 1 is the lowest layer in this model. Layer 1: Physical layer The physical layer is responsible for the transmission and reception of unstructured raw data between a device and a physical transmission medium. It converts the digital bits into electrical, radio, or optical signals. Layer specifications define characteristics such as voltage levels, the timing of voltage changes, physical data rates, maximum transmission distances, modulation scheme, channel access method and physical connectors. This includes the layout of pins, voltages, line impedance, cable specifications, signal timing and frequency for wireless devices. Bit rate control is done at the physical layer and may define transmission mode as simplex, half duplex, and full duplex. The components of a physical layer can be described in terms of a network topology. Physical layer specifications are included in the specifications for the ubiquitous Bluetooth, Ethernet, and USB standards. An example of a less well-known physical layer specification would be for the CAN standard. Layer 2: Data link layer The data link layer provides node-to-node data transfer—a link between two directly connected nodes. It detects and possibly corrects errors that may occur in the physical layer. It defines the protocol to establish and terminate a connection between two physically connected devices. It also defines the protocol for flow control between them. IEEE 802 divides the data link layer into two sublayers: Medium access control (MAC) layer – responsible for controlling how devices in a network gain access to a medium and permission to transmit data. Logical link control (LLC) layer – responsible for identifying and encapsulating network layer protocols, and controls error checking and frame synchronization. The MAC and LLC layers of IEEE 802 networks such as 802.3 Ethernet, 802.11 Wi-Fi, and 802.15.4 ZigBee operate at the data link layer. The Point-to-Point Protocol (PPP) is a data link layer protocol that can operate over several different physical layers, such as synchronous and asynchronous serial lines. The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer that provides both error correction and flow control by means of a selective-repeat sliding-window protocol. Security, specifically (authenticated) encryption, at this layer can be applied with MACSec. 
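The hand-off between layers described under "Definitions" (each layer treating the unit from the layer above as its SDU and prepending its own header to form a new PDU) can be made concrete with a toy sketch in C. The two-character layer tags and the encapsulate() helper are invented for illustration and do not correspond to any real protocol:

#include <stdio.h>
#include <string.h>

enum { MAX_PDU = 256 };

/* Prepend one layer's header to the SDU handed down from above,
   producing that layer's PDU; returns the new PDU length. */
static size_t encapsulate(const char *header, const unsigned char *sdu,
                          size_t sdu_len, unsigned char *pdu)
{
    size_t hlen = strlen(header);
    memcpy(pdu, header, hlen);          /* this layer's header */
    memcpy(pdu + hlen, sdu, sdu_len);   /* the payload (SDU)   */
    return hlen + sdu_len;
}

int main(void)
{
    unsigned char data[MAX_PDU] = "HELLO";          /* application data */
    unsigned char seg[MAX_PDU], pkt[MAX_PDU], frm[MAX_PDU];
    size_t n = 5;

    n = encapsulate("T|", data, n, seg);   /* transport header -> segment */
    n = encapsulate("N|", seg, n, pkt);    /* network header   -> packet  */
    n = encapsulate("D|", pkt, n, frm);    /* data-link header -> frame   */

    printf("frame on the wire: %.*s\n", (int)n, frm);  /* D|N|T|HELLO */
    return 0;
}

The receiving device strips the headers in the opposite order, matching the decapsulation sequence described above.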
Layer 3: Network layer The network layer provides the functional and procedural means of transferring packets from one node to another connected in "different networks". A network is a medium to which many nodes can be connected, on which every node has an address and which permits nodes connected to it to transfer messages to other nodes connected to it by merely providing the content of a message and the address of the destination node and letting the network find the way to deliver the message to the destination node, possibly routing it through intermediate nodes. If the message is too large to be transmitted from one node to another on the data link layer between those nodes, the network may implement message delivery by splitting the message into several fragments at one node, sending the fragments independently, and reassembling the fragments at another node. It may, but does not need to, report delivery errors. Message delivery at the network layer is not necessarily guaranteed to be reliable; a network layer protocol may provide reliable message delivery, but it need not do so. A number of layer-management protocols, a function defined in the management annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. It is the function of the payload that makes these belong to the network layer, not the protocol that carries them. Layer 4: Transport layer The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination host, while maintaining the quality of service functions. The transport layer may control the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail delivery. The transport layer may also provide the acknowledgement of the successful data transmission and sends the next data if no errors occurred. The transport layer creates segments out of the message received from the application layer. Segmentation is the process of dividing a long message into smaller messages. Reliability, however, is not a strict requirement within the transport layer. Protocols like UDP, for example, are used in applications that are willing to accept some packet loss, reordering, errors or duplication. Streaming media, real-time multiplayer games and voice over IP (VoIP) are examples of applications in which loss of packets is not usually a fatal problem. OSI defines five classes of connection-mode transport protocols ranging from class 0 (which is also known as TP0 and provides the fewest features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of TP0-4 classes are shown in the following table: An easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. 
A post office inspects only the outer envelope of mail to determine its delivery. Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only. Roughly speaking, tunnelling protocols operate at the transport layer, such as carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoint, GRE becomes closer to a transport protocol that uses IP headers but contains complete Layer 2 frames or Layer 3 packets to deliver to the endpoint. L2TP carries PPP frames inside transport segments. Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer-4 protocols within OSI. Transport Layer Security (TLS) does not strictly fit inside the model either. It contains characteristics of the transport and presentation layers. Layer 5: Session layer The session layer controls the dialogues (connections) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, half-duplex, or simplex operation, and establishes procedures for checkpointing, suspending, restarting, and terminating a session. In the OSI model, this layer is responsible for gracefully closing a session. This layer is also responsible for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The session layer is commonly implemented explicitly in application environments that use remote procedure calls. In the modern TCP/IP system, the session layer is non-existent and is simply part of TCP. Layer 6: Presentation layer The presentation layer establishes context between application-layer entities, in which the application-layer entities may use different syntax and semantics if the presentation service provides a mapping between them. If a mapping is available, presentation protocol data units are encapsulated into session protocol data units and passed down the protocol stack. This layer provides independence from data representation by translating between application and network formats. The presentation layer transforms data into the form that the application accepts. This layer formats data to be sent across a network. It is sometimes called the syntax layer. The presentation layer can include compression functions. The Presentation Layer negotiates the Transfer Syntax. The original presentation structure used the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data structures from and to XML. ASN.1 effectively makes an application protocol invariant with respect to syntax. Layer 7: Application layer The application layer is the OSI layer closest to the end user, which means both the OSI application layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. 
Application-layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. The most important distinction in the application layer is the distinction between the application-entity and the application. For example, a reservation website might have two application-entities: one using HTTP to communicate with its users, and one for a remote database protocol to record reservations. Neither of these protocols has anything to do with reservations. That logic is in the application itself. The application layer has no means to determine the availability of resources in the network. Cross-layer functions Cross-layer functions are services that are not tied to a given layer, but may affect more than one layer. Some orthogonal aspects, such as management and security, involve all of the layers (see the ITU-T X.800 Recommendation). These services are aimed at improving the CIA triad (confidentiality, integrity, and availability) of the transmitted data. Cross-layer functions are the norm, in practice, because the availability of a communication service is determined by the interaction between network design and network management protocols. Specific examples of cross-layer functions include the following: Security service (telecommunication) as defined by the ITU-T X.800 recommendation. Management functions, i.e. functions that permit the configuration, instantiation, monitoring, and termination of the communications of two or more entities: there is a specific application-layer protocol, the common management information protocol (CMIP), and its corresponding service, the common management information service (CMIS); these need to interact with every layer in order to deal with their instances. OSI subdivides the network layer into three sublayers: 3a) subnetwork access, 3b) subnetwork-dependent convergence, and 3c) subnetwork-independent convergence; Multiprotocol Label Switching (MPLS), ATM, and X.25 are 3a protocols. MPLS was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram-based service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames. Sometimes one sees reference to a "Layer 2.5". Cross-MAC and PHY scheduling is essential in wireless networks because of the time-varying nature of wireless channels. By scheduling packet transmission only in favourable channel conditions, which requires the MAC layer to obtain channel state information from the PHY layer, network throughput can be significantly improved and energy waste can be avoided. Programming interfaces Neither the OSI Reference Model nor any OSI protocol specification outlines any programming interfaces, other than deliberately abstract service descriptions. Protocol specifications define a methodology for communication between peers, but the software interfaces are implementation-specific. For example, the Network Driver Interface Specification (NDIS) and Open Data-Link Interface (ODI) are interfaces between the media (layer 2) and the network protocol (layer 3). Comparison to other networking suites The table below presents a list of OSI layers, the original OSI protocols, and some approximate modern matches.
This correspondence is rough, however: the OSI model contains idiosyncrasies not found in later systems such as the IP stack of the modern Internet. Comparison with TCP/IP model The design of protocols in the TCP/IP model of the Internet does not concern itself with strict hierarchical encapsulation and layering. RFC 3439 contains a section entitled "Layering considered harmful". TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols: the scope of the software application; the host-to-host transport path; the internetworking range; and the scope of the direct links to other nodes on the local network. Despite using a different concept for layering than the OSI model, these layers are often compared with the OSI layering scheme in the following manner: The Internet application layer maps to the OSI application layer, presentation layer, and most of the session layer. The TCP/IP transport layer maps to the graceful close function of the OSI session layer as well as the OSI transport layer. The internet layer performs functions comparable to those of a subset of the OSI network layer. The link layer corresponds to the OSI data link layer and may include similar functions to those of the physical layer, as well as some protocols of the OSI's network layer. These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in the internal organization of the network layer. The OSI protocol suite that was specified as part of the OSI project was considered by many to be too complicated and inefficient, and to a large extent unimplementable. Taking the "forklift upgrade" approach to networking, it specified eliminating all existing networking protocols and replacing them at all layers of the stack. This made implementation difficult and was resisted by many vendors and users with significant investments in other network technologies. In addition, the protocols included so many optional features that many vendors' implementations were not interoperable. Although the OSI model is often still referenced, the Internet protocol suite has become the standard for networking. TCP/IP's pragmatic approach to computer networking and to independent implementations of simplified protocols made it a practical methodology. Some protocols and specifications in the OSI stack remain in use, one example being IS-IS, which was specified for OSI as ISO/IEC 10589:2002 and adapted for Internet use with TCP/IP as . See also Common Management Information Service (CMIS) GOSIP, the (U.S.) Government Open Systems Interconnection Profile Hierarchical internetworking model Layer 8 List of information technology initialisms Management plane Recursive Internetwork Architecture Service layer Further reading John Day, "Patterns in Network Architecture: A Return to Fundamentals" (Prentice Hall 2007, ) Marshall Rose, "The Open Book" (Prentice-Hall, Englewood Cliffs, 1990) David M. Piscitello, A. Lyman Chapin, Open Systems Networking (Addison-Wesley, Reading, 1993) Andrew S.
Tanenbaum, Computer Networks, 4th Edition, (Prentice-Hall, 2002) References External links Microsoft Knowledge Base: The OSI Model's Seven Layers Defined and Functions Explained ISO/IEC standard 7498-1:1994 (PDF document inside ZIP archive) (requires HTTP cookies in order to accept licence agreement) ITU-T X.200 (the same contents as from ISO) Cisco Systems Internetworking Technology Handbook Reference models Computer-related introductions in 1977 Computer-related introductions in 1979 ISO standards ITU-T recommendations ITU-T X Series Recommendations ISO/IEC 7498
Sintran III Sintran III is a real-time, multitasking, multi-user operating system used with Norsk Data minicomputers from 1974. Unlike its predecessors Sintran I and II, it was written entirely by Norsk Data, in Nord Programming Language (Nord PL, NPL), an intermediate language for Norsk Data computers. Overview Sintran was mainly a command-line-interface-based operating system, though several shells could be installed to control the user environment more strictly, by far the most popular of which was USER-ENVIRONMENT. One clever feature was the ability to abbreviate commands and file names between hyphens. For example, typing LIST-FILES would give users several prompts, covering printing, paging, and so on. Users could bypass these prompts by typing the abbreviated form LI-FI ,,n. One could also refer to files this way; for example, PED H-W: would refer to HELLO-WORLD:SYMB if that was the only file whose name had an H, any number of characters, a hyphen -, a W, any number of characters, and any file ending. This saved many keystrokes and gave users a gradual learning path, from complete and self-explanatory commands like LIST-ALL-FILES down to L-A-F for an advanced user. (The hyphen key on Norwegian keyboards resides where the slash key does on U.S. ones.) Now that Sintran has mostly disappeared as an operating system, there are few references to it. However, a job control or batch processing language named JEC (believed to stand for Job Execution Controller) was available; it could be used to set up batch jobs to compile COBOL programs, etc. References Discontinued operating systems Norsk Data software Proprietary operating systems Real-time operating systems 1974 software
List of alternative shells for Windows This is a list of software that provides an alternative graphical user interface for Microsoft Windows operating systems. The technical term for this interface is a shell. Windows' standard user interface is the Windows shell; Windows 3.0 and Windows 3.1x have a different shell, called Program Manager. The programs in this list do not restyle the Windows shell, but replace it; therefore, they look and function differently, and have different configuration options. See also Comparison of Start menu replacements for Windows 8 Comparison of X Window System desktop environments Desktop environment History of the graphical user interface Microsoft Bob Removal of Internet Explorer References External links Desktop shell replacement Windows-only software Computing-related lists
Apple–Intel architecture The Apple–Intel architecture, or Mactel, is an unofficial name used for Apple Macintosh personal computers developed and manufactured by Apple Inc. that use Intel x86 processors, rather than the PowerPC and Motorola 68000 ("68k") series processors used in their predecessors or the ARM processors used in their successors. With the change in architecture, a change in firmware became necessary; Apple selected the Intel-designed Extensible Firmware Interface (EFI) as the counterpart to the Open Firmware used on its PowerPC architectures and as the firmware-based replacement for the PC BIOS. With the change in processor architecture to x86, Macs gained the ability to boot into x86-native operating systems (such as Microsoft Windows), while Intel VT-x brought near-native virtualization with Mac OS X as the host OS. Technologies Background Apple uses a subset of the standard PC architecture, which provides support for Mac OS X and support for other operating systems. Hardware and firmware components that must be supported to run an operating system on Apple–Intel hardware include the Extensible Firmware Interface. The EFI and GUID Partition Table With the change in architecture, a change in firmware became necessary. The Extensible Firmware Interface (EFI) is the firmware-based replacement for the PC BIOS from Intel. Designed by Intel, it was chosen by Apple to replace the Open Firmware used on PowerPC architectures. Since many operating systems, such as Windows XP and many versions of Windows Vista, are incompatible with EFI, Apple released a firmware upgrade with a Compatibility Support Module that provides a subset of traditional BIOS support with its Boot Camp product. The GUID Partition Table (GPT) is a standard for the layout of the partition table on a physical hard disk. It is a part of the Extensible Firmware Interface (EFI) standard proposed by Intel as a substitute for the earlier PC BIOS. The GPT replaces the Master Boot Record (MBR) used with BIOS. Booting To Mac operating systems Intel Macs can boot in two ways: directly via EFI, or in a "legacy" BIOS compatibility mode. For multibooting, holding down "Option" gives a choice of bootable devices, while the rEFInd bootloader is commonly used for added configurability. Live USBs that rely on legacy BIOS booting cannot be used on Intel Macs; the EFI firmware can recognize and boot from USB drives, but only in EFI mode. When the firmware switches to BIOS mode, it no longer recognizes USB drives, due to the lack of a BIOS-mode USB driver. Many operating systems, such as earlier versions of Windows and Linux, could only be booted in BIOS mode, or were more easily booted or performed better when booted in BIOS mode, and thus USB booting on Intel-based Macs was for a time largely limited to Mac OS X, which can easily be booted via EFI. To non-Mac operating systems On April 5, 2006, Apple made available for download a public beta version of Boot Camp, a collection of technologies that allows users of Intel-based Macs to boot Windows XP Service Pack 2. The first non-beta version of Boot Camp is included in Mac OS X v10.5, "Leopard". Before the introduction of Boot Camp, which provides most hardware drivers for Windows XP, drivers for XP were difficult to find. Linux can also be booted with Boot Camp. Differences from standard PCs Intel-based Mac computers use very similar hardware to PCs from other manufacturers that ship with Microsoft Windows or Linux operating systems.
In particular, CPUs, chipsets, and GPUs are entirely compatible. However, Apple computers also include some custom hardware and design choices not found in competing systems: The System Management Controller is a custom Apple chip that controls various functions of the computer related to power management, including handling the power button and managing battery and thermal sensors, among other things. It also plays a part in the protection scheme deployed to restrict booting macOS to Apple hardware (see Digital Rights Management below). Intel-based Macs do not use a standard TPM. Laptop input devices. Early MacBook and MacBook Pro computers used an internal variant of USB as a keyboard and trackpad interconnect. Since the 2013 revision of the MacBook Air, Apple started to use a custom Serial Peripheral Interface controller instead. The 2016 MacBook Pro additionally uses a custom internal USB device dubbed "iBridge" as an interface to the Touch Bar and Touch ID components, as well as the FaceTime camera. PC laptops generally use an internal variant of the legacy PS/2 keyboard interconnect. PS/2 also used to be the standard for PC laptop pointing devices, although a variety of other interfaces, including USB, SMBus, and I2C, may also be used. Additional custom hardware may include a GMUX chip that controls GPU switching, non-compliant implementations of NVMe solid-state storage, and non-standard configurations of the HD Audio subsystem. Keyboard layout has significant differences between Apple and IBM PC keyboards. While PC keyboards can be used in macOS, as well as Mac keyboards in Microsoft Windows, some functional differences occur. For example, the (PC) and (Mac) keys function equivalently; the same is true for (PC) and (Mac) – however, the physical location of those keys is reversed. There are also keys exclusive to each platform (e.g. ), some of which may require software remapping to achieve the desired function. Compact and laptop keyboards from Apple also lack some keys considered essential on PCs, such as the forward key, although some of them are accessible through the key. Boot process. All Intel-based Macs use some version of EFI as the boot firmware. At the time the platform debuted in 2006, this was in stark contrast to PCs, which almost universally employed legacy BIOS, and Apple's implementation of EFI did not initially implement the Compatibility Support Module that would allow booting contemporary standard PC operating systems. Apple updated the firmware with CSM support with the release of Boot Camp in April 2006, and since the release of Windows 8 in 2012, Microsoft has required its OEM partners to use a UEFI boot process on PCs, which has made the differences smaller. However, Apple's version of EFI also includes some custom extensions that are utilized during the regular macOS boot process, which include the following: A driver for the HFS Plus file system with support for locating the bootloader based on the "blessed directory" and "blessed file" properties of HFS+ volumes. The EFI System Partition is thus not used or necessary for the regular macOS boot process. A rudimentary pre-boot GUI framework, including support for image drawing, mouse cursor, and events. This is used by FileVault 2 to present the login screen before loading the operating system. Other non-standard EFI services for managing various firmware features such as the computer's NVRAM and boot arguments.
Some of these differences can pose obstacles both to running macOS on non-Apple hardware and to booting alternative operating systems on Mac computers: Apple only provides drivers for its custom hardware for macOS and Microsoft Windows (as part of Boot Camp); drivers for other operating systems such as Linux need to be written by third parties, usually volunteer free-software enthusiasts. Digital Rights Management Digital Rights Management in the Apple–Intel architecture is accomplished via the "Dont Steal Mac OS X.kext", sometimes referred to as DSMOS or DSMOSX, a file present in Intel-capable versions of the Mac OS X operating system. Its presence enforces a form of Digital Rights Management, preventing Mac OS X from being installed on stock PCs. The name of the kext is a reference to the Mac OS X license conditions, which allow installation on Apple hardware only. According to Apple, anything else is stealing Mac OS X. The kext is located at /System/Library/Extensions on the volume containing the operating system. The extension contains a kernel function that performs AES decryption of "apple-protected" programs. A system lacking a proper key will not be able to run the Apple-restricted binaries, which include several core system programs. After the initial announcement of the first Intel-based Mac hardware configurations, which reported a Trusted Platform Module among the system components, it was believed that the TPM was responsible for handling the DRM protection. It was later proven not to be the case. The keys are actually contained within the System Management Controller, a component exclusive to Apple computers, and can be easily retrieved from it. These two 32-byte keys form a human-readable ASCII string copyrighted by Apple, establishing another possible line of legal defence against prospective clone makers. Virtualization The Intel Core Duo (and later, including the current i5, i7, i9, and Xeon) processors found in Intel Macs support Intel VT-x, which allows for high-performance (near-native) virtualization that gives the user the ability to run and switch between two or more operating systems simultaneously, rather than having to dual-boot and run only one operating system at a time. The first software to take advantage of this technology was Parallels Desktop for Mac, released in June 2006. The Parallels virtualization products allow users to use installations of Windows XP and later in a virtualized mode while running OS X. VirtualBox is virtualization software from Oracle Corporation, released in January 2007. Available for Mac OS X as well as other host operating systems, it supports Intel VT-x and can run multiple other guest operating systems, including Windows XP and later. It is available free of charge under either a proprietary license or the GPL free-software license, and is used by default when running Docker images of other operating systems. VMware offers a product similar to Parallels called Fusion, released in August 2007. VMware's virtualization product also allows users to use installations of Windows XP and later under OS X. Regardless of the product used, there are inherent limitations and performance penalties in using a virtualized guest OS versus running the native macOS, or compared with booting an alternative OS via Boot Camp. See also Mac transition to Intel processors References and notes External links Intel EFI open-source implementation, code-name 'Tiano' Macintosh platform IBM PC compatibles 32-bit computers 64-bit computers
Multics Multics ("Multiplexed Information and Computing Service") is an influential early time-sharing operating system based on the concept of a single-level memory. It has been said that Multics "has influenced all modern operating systems since, from microcomputers to mainframes." Initial planning and development for Multics started in 1964, in Cambridge, Massachusetts. Originally it was a cooperative project led by MIT (Project MAC with Fernando Corbató) along with General Electric and Bell Labs. It was developed on the GE 645 computer, which was specially designed for it; the first one was delivered to MIT in January, 1967. Multics was conceived as a commercial product for General Electric, and became one for Honeywell, albeit not very successfully. Due to its many novel and valuable ideas, Multics has had a significant influence on computer science despite its faults. Multics has numerous features intended to ensure high availability so that it would support a computing utility similar to the telephone and electricity utilities. Modular hardware structure and software architecture are used to achieve this. The system can grow in size by simply adding more of the appropriate resource, be it computing power, main memory, or disk storage. Separate access control lists on every file provide flexible information sharing, but complete privacy when needed. Multics has a number of standard mechanisms to allow engineers to analyze the performance of the system, as well as a number of adaptive performance optimization mechanisms. Novel ideas Multics implements a single-level store for data access, discarding the clear distinction between files (called segments in Multics) and process memory. The memory of a process consists solely of segments that were mapped into its address space. To read or write to them, the process simply uses normal central processing unit (CPU) instructions, and the operating system takes care of making sure that all the modifications were saved to disk. In POSIX terminology, it is as if every file were mmap()ed; however, in Multics there is no concept of process memory, separate from the memory used to hold mapped-in files, as Unix has. All memory in the system is part of some segment, which appears in the file system; this includes the temporary scratch memory of the process, its kernel stack, etc. One disadvantage of this was that the size of segments was limited to 256 kilowords, just over 1 MB. This was due to the particular hardware architecture of the machines on which Multics ran, having a 36-bit word size and index registers (used to address within segments) of half that size (18 bits). Extra code had to be used to work on files larger than this, called multisegment files. In the days when one megabyte of memory was prohibitively expensive, and before large databases and later huge bitmap graphics, this limit was rarely encountered. Another major new idea of Multics was dynamic linking, in which a running process could request that other segments be added to its address space, segments which could contain code that it could then execute. This allowed applications to automatically use the latest version of any external routine they called, since those routines were kept in other segments, which were dynamically linked only when a process first tried to begin execution in them. Since different processes could use different search rules, different users could end up using different versions of external routines automatically. 
Equally importantly, with the appropriate settings on the Multics security facilities, the code in the other segment could then gain access to data structures maintained in a different process. Thus, to interact with an application running in part as a daemon (in another process), a user's process simply performed a normal procedure-call instruction to a code segment to which it had dynamically linked (a code segment that implemented some operation associated with the daemon). The code in that segment could then modify data maintained and used in the daemon. When the action necessary to commence the request was completed, a simple procedure return instruction returned control of the user's process to the user's code. Multics also supported extremely aggressive on-line reconfiguration: central processing units, memory banks, disk drives, etc. could be added and removed while the system continued operating. At the MIT system, where most early software development was done, it was common practice to split the multiprocessor system into two separate systems during off-hours by incrementally removing enough components to form a second working system, leaving the rest running for the original logged-in users. System software development testing could be done on the second system, then the components of the second system were added back to the main user system, without ever having shut it down. Multics supported multiple CPUs; it was one of the earliest multiprocessor systems. Multics was the first major operating system to be designed as a secure system from the outset. Despite this, early versions of Multics were broken into repeatedly. This led to further work that made the system much more secure and prefigured modern security engineering techniques. Break-ins became very rare once the second-generation hardware base was adopted; it had hardware support for ring-oriented security, a multilevel refinement of the concept of master mode. A US Air Force tiger team project tested Multics security in 1973 under the codeword ZARF. On 28 May 1997, the American National Security Agency declassified this use of the codeword ZARF. Multics was the first operating system to provide a hierarchical file system, and file names could be of almost arbitrary length and syntax. A given file or directory could have multiple names (typically a long and short form), and symbolic links between directories were also supported. Multics was the first to use the now-standard concept of per-process stacks in the kernel, with a separate stack for each security ring. It was also the first to have a command processor implemented as ordinary user code – an idea later used in the Unix shell. It was also one of the first operating systems written in a high-level language (Multics PL/I), after the Burroughs MCP system written in ALGOL. The deployment of Multics into secure computing environments also spurred the development of innovative supporting applications. In 1975, Morrie Gasser of MITRE Corporation developed a pronounceable random word generator to address the password requirements of installations such as the Air Force Data Services Center (AFDSC) processing classified information. To avoid guessable passwords, the AFDSC decided to assign passwords, but concluded that manual assignment required too much administrative overhead. Thus, a random word generator was researched and then developed in PL/I.
Instead of being based on phonemes, the system employed phonemic segments (second-order approximations of English) and other rules to enhance pronounceability and randomness, which was statistically modeled against other approaches. A descendant of this generator was added to Multics during Project Guardian.

Project history
In 1964, Multics was developed initially for the GE-645 mainframe, a 36-bit system. GE's computer business, including Multics, was taken over by Honeywell in 1970; around 1973, Multics was supported on the Honeywell 6180 machines, which included security improvements such as hardware support for protection rings. Bell Labs pulled out of the project in 1969; some of the people who had worked on it there went on to create the Unix system. Multics development continued at MIT and General Electric. Honeywell continued system development until 1985. About 80 multimillion-dollar sites were installed, at universities, industry, and government sites. The French university system had several installations in the early 1980s. After Honeywell stopped supporting Multics, users migrated to other systems, including Unix. In 1985, Multics was issued certification as a B2 level secure operating system using the Trusted Computer System Evaluation Criteria from the National Computer Security Center (NCSC), a division of the NSA; it was the first operating system evaluated to this level. Multics was distributed from 1975 to 2000 by Groupe Bull in Europe, and by Bull HN Information Systems Inc. in the United States. In 2006, Bull SAS released the source code of Multics versions MR10.2, MR11.0, MR12.0, MR12.1, MR12.2, MR12.3, MR12.4 and MR12.5 under a free software license. The last known Multics installation running natively on Honeywell hardware was shut down on October 30, 2000, at the Canadian Department of National Defence in Halifax, Nova Scotia, Canada.

Current status
In 2006 Bull HN released the source code for MR12.5, the final 1992 Multics release, to MIT. Most of the system is now available as free software, with the exception of some optional pieces such as TCP/IP. In 2014 Multics was successfully run on current hardware using an emulator. The 1.0 release of the emulator is now available. Release 12.6f of Multics accompanies the 1.0 release of the emulator, and adds a few new features, including command line recall and editing using the video system.

Commands
The following is a list of programs and commands for common computing tasks that are supported by the Multics command-line interface: apl, ceil, change_wdir (cwd), cobol, copy (cp), echo, emacs, floor, fortran (ft), gcos (gc), help, home_dir (hd), if, list (ls), login (l), logout, ltrim, mail (ml), pascal, pl1, print (pr), print_wdir (pwd), runoff (rf), rtrim, sort, teco, trunc, where (wh), who, working_dir (wd).

Retrospective observations
Peter H. Salus, author of a book covering Unix's early years, stated one position: "With Multics they tried to have a much more versatile and flexible operating system, and it failed miserably". This position, however, has been widely discredited in the computing community because many of Multics' technical innovations are used in modern commercial computing systems. The permanently resident kernel of Multics, a system derided in its day as being too large and complex, was only 135 KB of code. In comparison, a Linux system in 2007 might have occupied 18 MB. The first MIT GE-645 had 512 kilowords of memory (2 MiB), a truly enormous amount at the time, and the kernel used only a moderate portion of Multics main memory.
The entire system, including the operating system and the complex PL/I compiler, user commands, and subroutine libraries, consisted of about 1500 source modules. These averaged roughly 200 lines of source code each, and compiled to produce a total of roughly 4.5 MiB of procedure code, which was fairly large by the standards of the day. Multics compilers generally optimised more for code density than CPU performance, for example using small subroutines called operators for short standard code sequences, which makes comparison of object code size with modern systems less useful. High code density was a good optimisation choice for Multics as a multi-user system with expensive main memory. During its commercial product history, it was often commented internally that the Honeywell Information Systems (HIS) (later Honeywell-Bull) sales and marketing staff were more familiar with, and more comfortable making the business case for, Honeywell's other computer line, the DPS 6 running GCOS. The DPS 6 and GCOS formed a well-regarded and reliable platform for inventory, accounting, word processing, and vertical market applications, such as banking, where it had a sizeable customer base. In contrast, the full potential of Multics' flexibility for even mundane tasks was not easy to comprehend in that era, and its features were generally outside the skill set of contemporary business analysts. The scope of this disconnect was concretized by an anecdote conveyed by Paul Stachour, CNO/CSC: When American Telephone and Telegraph was changing its name to just AT&T in 1983, a staffer from Honeywell's legal department showed up and asked a Multician if he could arrange to have the name changed in all of their computerized documents. When asked when the process could be completed, the Multician replied, "It's done." The staffer repeated that he needed hundreds, perhaps thousands, of documents updated. The Multician explained that he had executed a global search and replace as the staffer was speaking, and the task was in fact completed.

Influence on other projects

Unix
The design and features of Multics greatly influenced the Unix operating system, which was originally written by two Multics programmers, Ken Thompson and Dennis Ritchie. Superficial influence of Multics on Unix is evident in many areas, including the naming of some commands. But the internal design philosophy was quite different, focusing on keeping the system small and simple, and so correcting some perceived deficiencies of Multics because of its high resource demands on the limited computer hardware of the time. The name Unix (originally Unics) is itself a pun on Multics. The U in Unix is rumored to stand for uniplexed as opposed to the multiplexed of Multics, further underscoring the designers' rejection of Multics' complexity in favor of a more straightforward and workable approach for smaller computers. (Garfinkel and Abelson cite an alternative origin: Peter Neumann at Bell Labs, watching a demonstration of the prototype, suggested the pun name UNICS – pronounced "eunuchs" – as a "castrated Multics", although Dennis Ritchie is said to have denied this.) Ken Thompson, in a transcribed 2007 interview with Peter Seibel, refers to Multics as "overdesigned and overbuilt and over everything. It was close to unusable. They [Massachusetts Institute of Technology] still claim it's a monstrous success, but it just clearly wasn't".
On the influence of Multics on Unix, Thompson stated that "the things that I liked enough (about Multics) to actually take were the hierarchical file system and the shell — a separate process that you can replace with some other process".

Other operating systems
The Prime Computer operating system, PRIMOS, was referred to as "Multics in a shoebox" by William Poduska, a founder of the company. Poduska later moved on to found Apollo Computer, whose AEGIS and later Domain/OS operating systems, sometimes called "Multics in a matchbox", extended the Multics design to a heavily networked graphics workstation environment. The Stratus VOS operating system of Stratus Computer (now Stratus Technologies) was very strongly influenced by Multics, and both its external user interface and internal structure bear many close resemblances to the older project. The high-reliability, availability, and security features of Multics were extended in Stratus VOS to support a new line of fault-tolerant computer systems supporting secure, reliable transaction processing. Stratus VOS is the most directly related descendant of Multics still in active development and production usage today. The protection architecture of Multics, restricting the ability of code at one level of the system to access resources at another, was adopted as the basis for the security features of ICL's VME operating system.

See also
Time-sharing system evolution
Peter J. Denning
Jack B. Dennis
Robert Fano – director of Project MAC at MIT (1963–1968)
Robert M. Graham (computer scientist)
J. C. R. Licklider – director of Project MAC at MIT (1968–1971)
Peter G. Neumann
Elliott Organick
Louis Pouzin – introduced the term shell for the command language used in Multics
Jerome H. Saltzer
Roger R. Schell
Glenda Schroeder – implemented the first command line user interface shell and proposed the first email system with Pouzin and Crisman
Victor A. Vyssotsky

References

Further reading
The literature contains a large number of papers about Multics and various components of it; a fairly complete list is available at the Multics Bibliography page and on a second, briefer 1994 Multics bibliography (text format). The most important and/or informative ones are listed below.
F. J. Corbató, V. A. Vyssotsky, Introduction and Overview of the Multics System (AFIPS, 1965) is a good introduction to the system.
F. J. Corbató, C. T. Clingen, J. H. Saltzer, Multics – The First Seven Years (AFIPS, 1972) is an excellent review, written after a considerable period of use and improvement over the initial efforts.
J. J. Donovan, S. Madnick, Operating Systems, is a fundamental read on operating systems.
J. J. Donovan, Systems Programming, is a good introduction into systems programming and operating systems.

Technical details
Jerome H. Saltzer, Introduction to Multics (MIT Project MAC, 1974) is a considerably longer introduction to the system, geared towards actual users.
Elliott I. Organick, The Multics System: An Examination of Its Structure (MIT Press, 1972) is the standard work on the system, although it documents an early version, and some features described therein never appeared in the actual system.
V. A. Vyssotsky, F. J. Corbató, R. M. Graham, Structure of the Multics Supervisor (AFIPS, 1965) describes the basic internal structure of the Multics kernel.
Jerome H. Saltzer, Traffic Control in a Multiplexed Computer System (MIT Project MAC, June 1966) is the original description of the idea of switching kernel stacks; one of the classic papers of computer science.
R. C. Daley, P. G. Neumann, A General Purpose File System for Secondary Storage (AFIPS, 1965) describes the file system, including the access control and backup mechanisms.
R. J. Feiertag, E. I. Organick, The Multics Input/Output System, describes the lower levels of the I/O implementation.
A. Bensoussan, C. T. Clingen, R. C. Daley, The Multics Virtual Memory: Concepts and Design (ACM SOSP, 1969) describes the Multics memory system in some detail.
Paul Green, Multics Virtual Memory – Tutorial and Reflections is a good in-depth look at the Multics storage system.
Roger R. Schell, Dynamic Reconfiguration in a Modular Computer System (MIT Project MAC, 1971) describes the reconfiguration mechanisms.

Security
Paul A. Karger, Roger R. Schell, Multics Security Evaluation: Vulnerability Analysis (Air Force Electronic Systems Division, 1974) describes the classic attacks on Multics security by a "tiger team".
Jerome H. Saltzer, Michael D. Schroeder, The Protection of Information in Computer Systems (Proceedings of the IEEE, September 1975) describes the fundamentals behind the first round of security upgrades; another classic paper.
M. D. Schroeder, D. D. Clark, J. H. Saltzer, D. H. Wells, Final Report of the Multics Kernel Design Project (MIT LCS, 1978) describes the security upgrades added to produce an even more improved version.
Paul A. Karger, Roger R. Schell, Thirty Years Later: Lessons from the Multics Security Evaluation (IBM, 2002) is an interesting retrospective which compares actual deployed security in today's hostile environment with what was demonstrated to be possible decades ago. It concludes that Multics offered considerably stronger security than most systems commercially available in 2002.

External links
multicians.org is a comprehensive site with a lot of material
Multics papers online
Multics glossary
Myths discusses numerous myths about Multics in some detail, including the myths that it failed, that it was big and slow, as well as a few understandable misapprehensions
Multics security
Unix and Multics
Multics general info and FAQ – includes an extensive overview of other software systems influenced by Multics
Honeywell, Inc., MULTICS records, 1965–1982, Charles Babbage Institute, University of Minnesota. Multics development records include the second MULTICS System Programmers Manual; MULTICS Technical Bulletins that describe procedures, applications, and problems, especially concerning security; and returned "Request for Comments Forms" that include technical papers and thesis proposals.
Official source code archive at MIT
Multics repository at Stratus Computer
Multics at Universitaet Mainz
Active project to emulate the Honeywell DPS-8/M Multics CPU
Various scanned Multics manuals
Multicians.org and the History of Operating Systems, a critical review of Multicians.org, plus a capsule history of Multics.
OpenGEU
OpenGEU was a free computer operating system based upon the popular Ubuntu Linux distribution, which in turn is based on Debian. OpenGEU combined the strengths and ease of use of the GNOME desktop environment with the lightweight, graphical eye-candy features of the Enlightenment window manager into a unique and user-friendly desktop. While OpenGEU was originally derived from Ubuntu, the design of its user interface gave it a significantly different appearance, with original art themes, software and tools.

Geubuntu
Initially called Geubuntu (a mix of GNOME, Enlightenment and Ubuntu), OpenGEU was an unofficial re-working of Ubuntu. The name change from Geubuntu to OpenGEU occurred on 21 January 2008 in order to remove the "-buntu" suffix from its name. This was done out of respect for Ubuntu's own trademark policies, which require all officially recognized Ubuntu derivatives to be based upon software found only in the official Ubuntu repositories – a criterion not met by OpenGEU.

Installation
Installation of OpenGEU was generally performed via a Live CD, which allowed the user to first test OpenGEU on their system prior to installation (albeit with a performance limit from loading applications off the disk). This is particularly useful for testing hardware compatibility and driver support. The CD also contained the Ubiquity installer, which guided the user through the permanent installation process. Because OpenGEU used Ubiquity, the installation process was nearly identical to that of Ubuntu. Alternatively, users could download a disk image of the CD from an online source, which could then be written to a physical medium or run from a hard drive via UNetbootin. Another option was to add the OpenGEU repositories to an established Ubuntu-based system and install OpenGEU via the package manager.

Programs

Default environment
As described above, OpenGEU included software from both the GNOME and Enlightenment projects. Unlike Ubuntu, which used Metacity or Compiz 3D, OpenGEU used Enlightenment DR17 as its primary window manager for its rich two-dimensional features, such as real transparency and desktop animation options. Starting with OpenGEU 8.10 Luna Serena, a port of Compiz called Ecomorph was available for 3D effects as well.

Themes manager
Starting with OpenGEU 8.04.1 Luna Crescente, the GEUTheme application became a default part of the distribution. This tool showed a list of installed OpenGEU themes to the user, enabling the user to browse through them and select one with one-click ease. GEUTheme could fetch new themes from the internet or from an expansion CD for the user. The tool had some advanced customization abilities – it helped the user to install and customize many aspects of their OpenGEU desktop environment (including icon themes, GTK+, ETK, E17, and EWL themes, wallpapers, fonts, etc.). It was also possible for the user to create new OpenGEU themes, as well as to export, import, and share them. The creation of this tool marked the first availability of a Desktop Effects Manager, similar to that of Ubuntu, for an Enlightenment desktop. This tool was incorporated into the GEUTheme application.

Additional components
Since the distribution was an Ubuntu derivative, the range of available software was almost identical to that of Ubuntu and the other related Ubuntu projects. Additional repositories were created by the OpenGEU development team, and were pre-enabled for the distribution's use.
Enlightenment 17 software was compiled, re-packaged in the .deb packaging format, and uploaded to the repositories; a number of new software packages were also developed by the OpenGEU team itself: the OpenGEU Themes Manager, eTray, e17-settings-daemon and several E17 modules.

Themes
The E17 window manager used a number of different libraries to render GUI applications. To ensure every application shared the same look on the desktop, it was necessary to develop themes for the various libraries that utilized the same art and graphics. This was a difficult and time-consuming task compared to that of GNOME, where all the GTK+ applications use the same default GTK+ libraries to render widgets. OpenGEU therefore developed a way to allow easy switching from one theme to another, and accordingly change the graphics of every desktop component at the same time. Every OpenGEU theme was also capable of changing the look of icon sets and wallpapers. In other words, OpenGEU themes were just a set of sub-themes for all of the different libraries used in the distribution (Edje, ETK, EWL, GTK+), designed so that the user would not notice any change in appearance when opening their various chosen applications.

Sunshine and Moonlight
OpenGEU was presented as an artistic distribution. The two main signature themes of OpenGEU were Sunshine and Moonlight. While Sunshine and Moonlight were considered the primary themes, a number of alternative themes were also available.

Project focus
OpenGEU focused on reducing minimum hardware requirements, for example by providing two alternative methods to enable compositing effects without any particular hardware or driver requirement. The primary OpenGEU concept was that of building a complete and universally accessible E17 desktop – filling in all of the missing parts of E17 with GNOME tools, while maintaining the speed of the distribution – for usability on any system.

Remixes
OpenGeeeU 8.10 Luna Serena was released March 23, 2009. It is a version of OpenGEU modified to work on the Asus Eee PC. OpenGeeeU 8.10 uses EasyPeasy as its platform, so it is optimized for netbooks and includes all of the drivers and fixes needed for the Eee PC to work out of the box.

Release history

The end
Although a switch to Debian rather than Ubuntu as a base was announced some time after the release of Quarto di Luna in January 2010, a public release never came. As of August 2012, the web site had been taken down along with its forums, mailing lists and other information, indicating that the project had disbanded.

Reviews and citations
OpenGEU was independently reviewed by a number of on- and off-line Linux magazines:
Full Circle Magazine
Softpedia
Linux.com
DistroWatch
Dedoimedo.com

See also
Bodhi Linux

References

External links
OpenGEU's page on Launchpad
HandyLinux
HandyLinux is a simplified Linux operating system developed in France, derived from the Debian stable branch. It was designed to be easily accessible and downloadable, so that it could be used by people with very little computer experience and on a range of older hardware that was no longer supported by the latest versions of proprietary operating systems. It was particularly aimed at older people with dated hardware who neither need nor possess the skill to use many features afforded by state-of-the-art operating systems. The last version was released in June 2016, and the project is now listed as "discontinued" by DistroWatch. On April 20, 2020, it was announced that HandyLinux was being replaced by Debian-Facile, which is not a distribution itself but a customization of Debian.

Goals
The goal of the HandyLinux project was to provide a "stable" Debian-based OS for elderly people, novices, and people seeking freedom and full functionality on a user-friendly desktop. HandyLinux was an official Debian derivative with a simple and clear graphical user interface called the HandyMenu. The system featured built-in tools to facilitate the handling of home computing. The distribution and the project's documentation were aimed primarily at French-language users. Documentation was intended to teach users desktop navigation and help them to learn the HandyLinux distribution. Prospective users were encouraged to browse the HandyLinux online forum and ask questions about the operating system.

Features
HandyLinux was designed to be installable on any modern computer with, at minimum, a Pentium 4 processor, 512 MB of RAM, and 3.7 GB of hard drive storage available. The distribution could be run as either of two "live" versions, live CD (handylinuxlight) or live USB, to sample the prepackaged software and test its compatibility with the installed hardware. Alternatively, it could be installed on a netbook equipped with at least 4 GB of storage; builds were provided both for computers built before 2005 (HandyLinux i486 non-PAE) and for computers built from 2005 onward (HandyLinux i686-PAE). If they chose, users could remove the default HandyMenu and substitute the classical menu of the Xfce desktop environment, as well as add software packages and customize the look and feel of the distribution after becoming more experienced with the OS by reading the documentation. All the software needed for a functional desktop was included in the disk image, and an Internet connection was not necessary to install the program bundle prepackaged with the HandyLinux operating system.

Applications
File manager: Thunar
Internet: Firefox, Icedove, Transmission
Multimedia: Clementine, VLC media player
Office: LibreOffice
Remote assistance: TeamViewer (optional, with graphical installer)
Printer installation: CUPS
Tools were integrated for improving accessibility: a color inverter, a screen filter and magnifier, direct access to documentation, a copy and paste button, a virtual keyboard, and voice synthesis integrated into the browser. Some small practical programs were also included: screenshot, calendar, file search, automatic download folder sorting, BleachBit for cleaning, Archive Manager, Disk Utility for formatting, Hardinfo for system information, and the XL-wallpaper wallpaper changer.

Desktop environment
HandyLinux's native desktop environment was based on Xfce, with the Compiz compositing window manager available as an option. Users could switch to the more traditional Xfce menu if they preferred it to the HandyMenu.
A seven-tab menu with a large computer icon enabled users to graphically launch applications, and only a single click was necessary to open files and folders or run a program.

Releases
HandyLinux updates generally followed updates of the Debian stable branch. A development fork of HandyLinux in the Finnish language was published on May 29, 2014.

See also
Comparison of Linux distributions
Elementary OS
List of Linux distributions

References

External links
HandyLinux Documentation (Archived)
HandyLinux on DistroWatch
The beginners handbook
BareMetal
BareMetal is an exokernel-based single address space operating system (OS) created by Return Infinity. It is written in assembly to achieve high-performance computing with a minimal footprint, following a "just enough operating system" (JeOS) approach. The operating system is primarily targeted at virtualized environments for cloud computing, or at HPC clusters, due to its design as a lightweight kernel (LWK). It can be used as a unikernel. It was inspired by another OS written in assembly, MikeOS, and it is a recent example of an operating system that is not written in C or C++, nor based on Unix-like kernels.

Overview

Hardware requirements
AMD/Intel based 64-bit computer
Memory: 4 MB (plus 2 MB for every additional core)
Hard disk: 32 MB

One task per core
Multitasking on BareMetal is unusual among modern operating systems. BareMetal uses an internal work queue that all CPU cores poll. A task added to the work queue will be processed by any available CPU core in the system and will execute until completion, which results in no context-switch overhead.

Programming

API
An API is documented but, in line with its philosophy, the OS does not enforce entry points for system calls (e.g., no call gates or other safety mechanisms).

C
BareMetal OS has a build script to pull the latest code, make the needed changes, and then compile C code using the Newlib C standard library.

C++
A mostly complete C++11 Standard Library was designed and developed for working in ring 0. The main goal of such a library is to provide, at the library level, an alternative to the hardware memory protection used in classical OSes, with the help of carefully designed classes.

Rust
A Rust program demonstration was added to the programs in November 2014, demonstrating the ability to write Rust programs for BareMetal OS.

Networking

TCP/IP stack
A TCP/IP stack was the #1 feature request. A port of lwIP written in C was announced in October 2014. minIP, a minimalist IP stack in ANSI C providing just enough functionality to serve a simple static webpage, is being developed as a proof of concept, to learn the fundamentals in preparation for a planned x86-64 assembly rewrite.

References

External links
BareMetal OS Google Group discussion forum
Windows RT
Windows RT is a discontinued mobile operating system developed by Microsoft. It is a version of Windows 8 built for the 32-bit ARM architecture (ARMv7). First unveiled in January 2011 at the Consumer Electronics Show, the Windows RT 8 operating system was officially launched alongside Windows 8 on October 26, 2012, with the release of three Windows RT-based devices, including Microsoft's original Surface tablet. Unlike Windows 8, Windows RT is only available as preloaded software on devices specifically designed for the operating system by original equipment manufacturers (OEMs). Microsoft intended for devices with Windows RT to take advantage of the architecture's power efficiency to allow for longer battery life, to use system-on-chip (SoC) designs to allow for thinner devices, and to provide a "reliable" experience over time. In comparison to other mobile operating systems, Windows RT also supports a relatively large number of existing USB peripherals and accessories, and includes a version of Microsoft Office 2013 optimized for ARM devices as pre-loaded software. However, while Windows RT inherits the appearance and functionality of Windows 8, it has a number of limitations: it can only execute software that is digitally signed by Microsoft (which includes pre-loaded software and Windows Store apps), and it lacks certain developer-oriented features. It also lacks support for running applications designed for x86 processors, which were the main platform for Windows at the time. This would later be corrected with the release of Windows 10 version 1709 for ARM64 devices. Windows RT was released to mixed reviews from various outlets and critics. Some felt that Windows RT devices had advantages over other mobile platforms (such as iOS or Android) because of its bundled software and the ability to use a wider variety of USB peripherals and accessories, but the platform was criticized for its poor software ecosystem, citing the early stage of Windows Store and its incompatibility with existing Windows software, and other limitations over Windows 8. Critics and analysts deemed Windows RT to be commercially unsuccessful, citing these limitations, its unclear, uncompetitive position of sitting as an underpowered system between Windows Phone and Windows 8, and the introduction of Windows 8 devices with battery life and functionality that met or exceeded that of Windows RT devices. Improvements to Intel's mobile processors, along with a decision by Microsoft to remove OEM license fees for Windows on devices with screens smaller than 9 inches, spurred a market for low-end Wintel tablets running the full Windows 8 platform. These devices largely cannibalized Windows RT; vendors began phasing out their Windows RT devices due to poor sales, and less than a year after its release, Microsoft suffered a US$900 million loss that was largely blamed on poor sales of the ARM-based Surface tablet and unsold stock. Only two more Windows RT devices, Microsoft's Surface 2 and the Nokia Lumia 2520 in late 2013, were released beyond the five original launch devices, and no Windows RT counterpart to the Surface Pro 3 was released due to a re-positioning of the Surface line into the high-end market, and a switch to Intel architecture for the Surface 3. These developments left Microsoft's future support of the platform in doubt. With the end of production for both Surface 2 and Lumia 2520, Microsoft and its subsidiaries no longer manufacture any Windows RT devices.
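The x86 incompatibility described above is visible in the executable format itself: every Windows PE binary records its target machine type in the COFF header, and Windows RT's loader would only accept ARM images (setting aside, for this sketch, the separate signing requirement). Below is a minimal C sketch that reads that field; it illustrates the public PE/COFF file format, not any Microsoft loader code:

/* Print the target machine type of a Windows PE executable.
 * Per the PE/COFF spec: offset 0x3C holds e_lfanew, which points to
 * the 4-byte "PE\0\0" signature followed by the 2-byte Machine field.
 * Assumes a little-endian host, matching PE's on-disk byte order. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s file.exe\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    uint32_t pe_off = 0;
    char sig[4] = {0};
    uint16_t machine = 0;
    if (fseek(f, 0x3C, SEEK_SET) != 0 ||
        fread(&pe_off, sizeof pe_off, 1, f) != 1 ||
        fseek(f, (long)pe_off, SEEK_SET) != 0 ||
        fread(sig, 1, 4, f) != 4 ||
        memcmp(sig, "PE\0\0", 4) != 0 ||
        fread(&machine, sizeof machine, 1, f) != 1) {
        fprintf(stderr, "not a PE image\n");
        fclose(f);
        return 1;
    }
    fclose(f);

    switch (machine) {
    case 0x014C: puts("x86: would not load on Windows RT");         break;
    case 0x8664: puts("x86-64: would not load on Windows RT");      break;
    case 0x01C4: puts("ARMv7 Thumb-2: Windows RT's native target");  break;
    default:     printf("other machine type: 0x%04X\n", machine);    break;
    }
    return 0;
}

Running this over a desktop application versus a Windows RT system binary would show 0x014C or 0x8664 against 0x01C4; that architectural distinction is exactly what the x86-on-ARM emulation of Windows 10 version 1709, mentioned above, later bridged.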
History
At the 2011 Consumer Electronics Show, it was officially announced that the next version of Windows would provide support for system-on-chip (SoC) implementations based on the ARM architecture. Steven Sinofsky, then Windows division president, demonstrated an early version of a Windows port for the architecture, codenamed Windows on ARM (WoA), running on prototypes with Qualcomm Snapdragon, Texas Instruments OMAP, and Nvidia Tegra 2 chips. The prototypes featured working versions of Internet Explorer 9 (with DirectX support via the Tegra 2's GPU), PowerPoint and Word, along with the use of class drivers to allow printing to an Epson printer. Sinofsky felt that the shift towards SoC designs was "a natural evolution of hardware that's applicable to a wide range of form factors, not just to slates", while Microsoft CEO Steve Ballmer emphasized the importance of supporting SoCs on Windows by proclaiming that the operating system would "be everywhere on every kind of device without compromise." Initial development on WoA took place by porting code from Windows 7; Windows Mobile smartphones were used to test early builds of WoA because of the lack of readily available ARM-based tablets. Later testing was performed using a custom-designed array of rack-mounted ARM-based systems. Changes to the Windows codebase were made to optimize the OS for the internal hardware of ARM devices, but a number of technical standards traditionally used by x86 systems are also used. WoA devices would use UEFI firmware and have a software-based Trusted Platform Module to support device encryption and UEFI Secure Boot. ACPI is also used to detect and control plug and play devices and provide power management outside the SoC. To enable wider hardware support, peripherals such as human interface devices, storage and other components that use USB and I²C connections use class drivers and standardized protocols. Windows Update serves as the mechanism for updating all system drivers, software, and firmware. Microsoft showcased other aspects of the new operating system, to be known as Windows 8, during subsequent presentations. Among these changes (which also included an overhauled interface optimized for use on touch-based devices, built around the Metro design language) was the introduction of Windows Runtime (WinRT). Software developed using this new architecture could be processor-independent (allowing compatibility with both x86- and ARM-based systems), would emphasize the use of touch input, would run within a sandboxed environment to provide additional security, and be distributed through Windows Store—a store similar to services such as the App Store and Google Play. WinRT was also optimized to provide a more "reliable" experience on ARM-based devices; as such, backward compatibility for Win32 software otherwise compatible with older versions of Windows was intentionally excluded from Windows on ARM. Windows developers indicated that existing Windows applications were not specifically optimized for reliability and energy efficiency on the ARM architecture and that WinRT was sufficient for providing "full expressive power" for applications, "while avoiding the traps and pitfalls that can potentially reduce the overall experience for consumers." Consequentially, this lack of backward compatibility would also prevent existing malware from running on the operating system. On April 16, 2012, Microsoft announced that Windows on ARM would be officially branded as Windows RT.
Microsoft did not explicitly indicate what the "RT" in the operating system's name referred to, but it was believed to refer to the WinRT architecture. Steven Sinofsky stated that Microsoft would ensure the differences between Windows RT and 8 were adequately addressed in advertising. However, reports found that promotional web pages for the Microsoft Surface tablet had contained confusing wording alluding to the compatibility differences, and that Microsoft Store representatives were providing inconsistent and sometimes incorrect information about Windows RT. In response, Microsoft stated that Microsoft Store staff members would be given an average of 15 hours of training prior to the launch of Windows 8 and Windows RT to ensure that consumers were able to make the correct choice for their needs. The first Windows RT devices were officially released alongside Windows 8 on October 26, 2012. Windows 8.1, an upgrade for Windows 8 and RT, was released in Windows Store on October 17, 2013, containing a number of improvements to the operating system's interface and functionality. For Windows RT devices, the update also adds Outlook to the included Office RT suite. The update was temporarily recalled by Microsoft shortly after its release, following reports that some Surface users had encountered a rare bug which corrupted their device's Boot Configuration Data during installation, resulting in an error on startup. On October 21, 2013, Microsoft released recovery media and instructions which could be used to repair the device, and restored access to Windows 8.1 the next day.

Differences from Windows 8
While Windows RT functions similarly to Windows 8, there are still some notable differences, primarily involving software and hardware compatibility. Julie Larson-Green, then executive vice president of the Devices and Studios group at Microsoft, explained that Windows RT was ultimately designed to provide a "closed, turnkey" user experience, "where it doesn't have all the flexibility of Windows, but it has the power of Office and then all the new style applications. So you could give it to your kid and he's not going to load it up with a bunch of toolbars accidentally out of Internet Explorer and then come to you later and say, 'why am I getting all these pop-ups?' It just isn't capable of doing that by design."

Included software
Windows RT does not include Windows Media Player, in favor of other multimedia apps found on Windows Store; devices are pre-loaded with the in-house Xbox Music and Xbox Video apps. All Windows RT devices include Office 2013 Home & Student RT—a version of Microsoft Office that is optimized for ARM systems. As the version of Office RT included on Windows RT devices is based on the Home & Student version, it cannot be used for "commercial, nonprofit, or revenue-generating activities" unless the organization has a volume license for Office 2013, or the user has an Office 365 subscription with commercial use rights. For compatibility and security reasons, certain advanced features, such as Visual Basic macros, are not available in Office RT. Windows RT also includes a BitLocker-based device encryption system, which passively encrypts a user's data once they sign in with a Microsoft account.

Software compatibility
Due to the different architecture of ARM-based devices compared to x86 devices, Windows RT has software compatibility limitations.
Although the operating system still provides the traditional Windows desktop environment alongside Windows 8's touch-oriented user interface, the only desktop applications officially supported by Windows RT are those that come with the operating system itself, such as File Explorer, Internet Explorer, and Office RT. Only Windows Store apps can be installed by users on Windows RT devices; they must be obtained from Windows Store or sideloaded in enterprise environments. Developers cannot port desktop applications to run on Windows RT, since Microsoft developers felt that they would not be properly optimized for the platform. As a consequence, Windows RT also does not support "new-experience enabled" web browsers: a special class of app used on Windows 8 that allows web browsers to bundle variants that can run in the Windows RT "modern-style user interface" and integrate with other apps, but still use Win32 code like desktop programs.

Hardware compatibility
In a presentation at Windows 8's launch event in New York City, Steven Sinofsky claimed that Windows RT would support 420 million existing hardware devices and peripherals. However, in comparison to Windows 8, full functionality would not be available for all devices, and some devices would not be supported at all. Microsoft provides a "Compatibility Center" portal where users can search for compatibility information on devices with Windows RT; on launch, the site listed just over 30,000 devices that were compatible with the operating system.

Networking and device management
While Windows RT devices can join a HomeGroup and access files stored within shared folders and libraries on other devices within the group, files cannot be shared from the Windows RT device itself. Windows RT does not support connecting to a domain for network logins, nor does it support using Group Policy for device management. However, Exchange ActiveSync, the Windows Intune service, or System Center Configuration Manager 2012 SP1 can be used to provide some control over Windows RT devices in enterprise environments, such as the ability to apply security policies and provide a portal which can be used to sideload apps from outside Windows Store.

User interface
After installation of the KB3033055 update for Windows RT 8.1, a desktop Start menu becomes available as an alternative to the Start screen. It is divided into two columns, with one devoted to recent and pinned applications, and one devoted to live tiles. It is similar to, but not identical to, Windows 10's version.

Support lifecycle
Windows RT follows the lifecycle policy of Windows 8 and Windows 8.1. The original Surface tablet fell under Microsoft's support policies for consumer hardware and received mainstream support until April 11, 2017. Mainstream support for Windows RT ended on January 12, 2016; users must update to Windows RT 8.1 to continue to receive support. Mainstream support for Windows RT 8.1 ended January 9, 2018, and extended support for Windows RT 8.1 ended on January 10, 2023.

Devices
Microsoft imposed tight control on the development and production of Windows RT devices: they were designed in cooperation with the company, and built to strict design and hardware specifications, including requirements to only use "approved" models of certain components.
To ensure hardware quality and control the number of devices released upon launch, the three participating ARM chip makers were only allowed to partner with up to two PC manufacturers to develop the first "wave" of Windows RT devices in Microsoft's development program. Qualcomm partnered with Samsung and HP, Nvidia with Asus and Lenovo, and Texas Instruments with Toshiba. Additionally, Microsoft partnered with Nvidia to produce Surface (retroactively renamed "Surface RT") – the first Windows-based computing device to be manufactured and marketed directly by Microsoft. Windows RT was designed to support chips implementing the ARMv7 architecture, a 32-bit processor platform. Shortly after the original release of Windows RT, ARM Holdings disclosed that it was working with Microsoft and other software partners on supporting the new ARMv8-A architecture, which includes a new 64-bit variant, in preparation for future devices. Multiple hardware partners pulled out of the program during the development of Windows RT, the first being Toshiba and Texas Instruments. TI later announced that it was pulling out of the consumer market for ARM system-on-chips to focus on embedded systems. HP also pulled out of the program, believing that Intel-based tablets were more appropriate for business use than ARM. HP was replaced by Dell as an alternate Qualcomm partner. Acer also intended to release a Windows RT device alongside its Windows 8-based products, but initially decided to delay it until the second quarter of 2013 in response to the mixed reaction to Surface. The unveiling of the Microsoft-developed tablet caught Acer by surprise, leading to concerns that Surface could leave "a huge negative impact for the [Windows] ecosystem and other brands."

First-generation devices
The first wave of Windows RT devices included:
Microsoft Surface (released October 26, 2012, concurrently with general availability of Windows 8)
Asus VivoTab RT (released October 26, 2012)
Dell XPS 10 (released December 2012; discontinued on September 25, 2013)
Lenovo IdeaPad Yoga 11 (released December 2012)
Samsung Ativ Tab (released in the United Kingdom on December 14, 2012; American and German releases cancelled)
Although Acer had planned to produce a Windows RT device close to the platform's launch, its president Jim Wong later indicated that there was "no value" in the current version of the operating system, and said the company would reconsider its plans for future Windows RT products when the Windows 8.1 update was released. On August 9, 2013, Asus announced that it would no longer produce any Windows RT products; chairman Johnny Shih expressed displeasure at the market performance of Windows RT, considering it to be "not very promising". During the introduction of its Android and Windows 8-based Venue tablets in October 2013, Dell's vice president Neil Hand stated that the company had no plans to produce an updated version of the XPS 10.

Second-generation devices
In September 2013, Nvidia CEO Jen-Hsun Huang stated that the company was "working really hard" with Microsoft on developing a second revision of Surface. The Microsoft Surface 2 tablet, which is powered by Nvidia's quad-core Tegra 4 platform and features the same full HD display as the Surface Pro 2, was officially unveiled on September 23, 2013, and released on October 22, 2013, following Windows 8.1 general availability the previous week.
On the same day as the Surface 2's release, Nokia (whose mobile device business Microsoft had announced plans to acquire, though the purchase had not yet been completed) unveiled the Lumia 2520, a Windows RT tablet with a Qualcomm Snapdragon 800 processor, 4G LTE, and a design similar to its line of Windows Phone products. An LTE-capable version of the Surface 2 was made available the following year. In January 2015, after its stock sold out on Microsoft Store online, Microsoft confirmed that it had discontinued further production of the Surface 2 to focus on Surface Pro products. Microsoft ended production of the Lumia 2520 the following month, ending active production of Windows RT devices after just over two years of general availability. With the end of production for both Surface 2 and Lumia 2520, Microsoft and its subsidiaries no longer manufacture any Windows RT devices.

Cancelled devices
Microsoft originally developed a "mini" version of its Surface tablet, later known as the Surface Mini, and had planned to unveil it alongside the Surface Pro 3 in May 2014; it was reportedly cancelled at the last minute. Images of the product were leaked in June 2017, revealing specifications such as a Qualcomm Snapdragon 800, an 8-inch display, and support for the Surface Pen instead of a keyboard attachment. In July 2016, an image depicting a number of cancelled Nokia-branded Lumia devices was released, among them a prototype for a second Nokia tablet known as the Lumia 2020. Details revealed in September 2017 showed the product to have an 8.3-inch display and the same Snapdragon 800 chip as that of the Surface "mini" tablet.

Reception
Windows RT's launch devices received mixed reviews upon their release. A review of the Asus VivoTab RT by PC Advisor praised Windows RT for being a mobile operating system that still offered some PC amenities, such as a full-featured file manager, but noted its lack of compatibility with existing Windows software, and that it had no proper media player aside from a "shameless, in-your-face conduit to Xbox Music." AnandTech believed Windows RT was the first "legitimately useful" mobile operating system, owing in part to its multitasking system, bundled Office programs, smooth interface performance, and "decent" support for a wider variety of USB devices in comparison to other operating systems on the ARM architecture. However, the OS was panned for its slow application launch times in comparison to a recent iPad, and spotty driver support for printers. The small number of "quality" apps available at launch was also noted—but considered to be a non-issue, assuming that the app ecosystem would "expand significantly unless somehow everyone stops buying Windows-based systems on October 26th." Reception of the preview release of RT 8.1 was mixed; both ExtremeTech and TechRadar praised the improvements to the operating system's tablet-oriented interface, along with the addition of Outlook; TechRadar's Dan Grabham believed that the inclusion of Outlook was important because "nobody in their right mind would try and handle work email inside the standard Mail app—it's just not up to the task." However, both experienced performance issues running the beta on the Tegra 3-based Surface; ExtremeTech concluded that "as it stands, we're still not sure why you would ever opt to buy a Windows RT tablet when there are similarly priced Atom-powered x86 devices that run the full version of Windows 8."
Market relevance and response
The need to market an ARM-compatible version of Windows was questioned by analysts because of recent developments in the PC industry; both Intel and AMD introduced x86-based system-on-chip designs for Windows 8, Atom "Clover Trail" and "Temash" respectively, in response to the growing competition from ARM licensees. In particular, Intel claimed that Clover Trail-based tablets could provide battery life rivaling that of ARM devices; in a test by PC World, Samsung's Clover Trail-based Ativ Smart PC was shown to have battery life exceeding that of the ARM-based Surface. Peter Bright of Ars Technica argued that Windows RT had no clear purpose, since the power advantage of ARM-based devices was "nowhere near as clear-cut as it was two years ago", and that users would be better off purchasing Office 2013 themselves because of the removed features and licensing restrictions of Office RT. Windows RT was also met with a lukewarm reaction from manufacturers; in June 2012, Hewlett-Packard canceled its plans to release a Windows RT tablet, stating that its customers felt Intel-based tablets were more appropriate for use in business environments. In January 2013, Samsung cancelled the American release of its Windows RT tablet, the Ativ Tab, citing the unclear positioning of the operating system, "modest" demand for Windows RT devices, and the effort and investment required to educate consumers on the differences between Windows 8 and RT as reasons for the move. Mike Abary, senior vice president of Samsung's U.S. PC and tablet businesses, also stated that the company was unable to build the Ativ Tab to meet its target price point—considering that lower cost was intended to be a selling point for Windows RT devices. Nvidia CEO Jen-Hsun Huang expressed disappointment over the market performance of Windows RT, but called on Microsoft to continue increasing its concentration on the ARM platform. Huang also commented on the exclusion of Outlook from the Office 2013 suite included on the device, and suggested that Microsoft port the software for RT as well (in response to public demand, Microsoft announced the inclusion of Outlook with future versions of Windows RT in June 2013). In May 2013, reports surfaced that HTC had scrapped plans to produce a 12-inch Windows RT tablet, as it would cost too much to produce and there would be greater demand for smaller devices. The poor demand resulted in price cuts for various Windows RT products; in April 2013 the price of Dell's XPS 10 fell from US$450 to US$300, and Microsoft began offering free covers for its Surface tablet in some territories as a limited-time promotion—itself a US$130 value for the Type Cover alone. Microsoft also reportedly reduced the cost of Windows RT licenses for devices with smaller screens, hoping that this could spur interest in the platform. In July 2013, Microsoft cut the price of the first-generation Surface worldwide by 30%, with its U.S. price falling to $350. Concurrently, Microsoft reported a loss of US$900 million due to the lackluster sales of the device. In August 2013, Dell silently pulled the option to purchase the XPS 10 from its online store without a keyboard dock (raising its price back up to US$479), and pulled the device entirely in September 2013.
Microsoft's discount on the Surface tablet did result in a slight increase of market share for the device; in late August 2013, usage data from the advertising network AdDuplex (which provides advertising services within Windows Store apps) revealed that Surface usage had increased from 6.2 to 9.8%.

Restrictions and compatibility limitations
In contrast to Windows 8 (where the feature had to be enabled by default on OEM devices, but remain user-configurable), Microsoft requires all Windows RT devices to have UEFI Secure Boot permanently enabled, preventing the ability to run alternative operating systems on them. Tom Warren of The Verge stated that he would have preferred Microsoft to "keep a consistent approach across ARM and x86, though, not least because of the number of users who'd love to run Android alongside Windows 8 on their future tablets", but noted that the decision to impose such restrictions was in line with similar measures imposed by other mobile operating systems, including recent Android devices and Microsoft's own Windows Phone mobile platform. The requirement to obtain most software on Windows RT through Windows Store was considered to be similar in nature to the application stores on other "closed" mobile platforms, where only software certified under guidelines issued by the vendor (i.e. Microsoft) can be distributed in the store. Microsoft was also criticized by the developers of the Firefox web browser for effectively preventing the development of third-party web browsers for Windows RT (and thus forcing use of its own Internet Explorer browser) by restricting the development of desktop applications, and by not providing the same APIs and exceptions available on Windows 8 to code web browsers that can run as apps. However, the European Union, in response to a complaint about the restrictions in relation to an antitrust case involving Microsoft, ruled that "so far, there are no grounds to pursue further investigation on this particular issue." As mandated by the EU, the BrowserChoice.eu service is still included in Windows 8.

"Jailbreak" exploit
In January 2013, a privilege escalation exploit was discovered in the Windows kernel that can allow unsigned code to run under Windows RT; the exploit involved the use of a remote debugging tool (provided by Microsoft to debug WinRT apps on Windows RT devices) to execute code which changes the signing level stored in RAM to allow unsigned code to execute (by default, it is set to a level that only allows code signed by Microsoft to execute). Alongside his explanation of the exploit, the developer also included a personal appeal to Microsoft urging them to remove the restrictions on Windows RT devices, contending that their decision was not for technical reasons, and that the devices would be more valuable if this functionality were available. In a statement, a Microsoft spokesperson applauded the effort, indicating that the exploit does not pose a security threat because it requires administrative access to the device, advanced techniques, and would still require programs to be re-compiled for ARM. However, Microsoft still indicated that the exploit would be patched in a future update. A batch file-based tool soon surfaced on XDA Developers to assist users in the process of performing the exploit, and a variety of ported desktop applications began to emerge, such as the emulator Bochs, PuTTY and TightVNC.
Afterwards, an emulator known as "Win86emu" surfaced, allowing users to run x86 software on a jailbroken Windows RT device. However, it does not support all Windows APIs, and runs programs more slowly than they would run on a native system. Demise In November 2013, speaking about Windows RT at the UBS Global Technology Conference, Julie Larson-Green discussed the future of Microsoft's mobile strategy surrounding the Windows platform. Larson-Green stated that in the future (accounting for Windows, Windows RT, and Windows Phone), Microsoft was "[not] going to have three [mobile operating systems]." The fate of Windows RT was left unclear by her remarks; industry analysts interpreted them as signs that Microsoft was preparing to discontinue Windows RT due to its poor adoption, while others suggested that Microsoft was planning to unify Windows with Windows Phone. Microsoft ultimately announced its "Universal Windows Apps" platform at Build 2014, which would allow developers to create WinRT apps for Windows, Windows Phone, and Xbox One that share common codebases. These initiatives were complemented by a goal for Windows 10 to unify the core Windows operating system across all devices. Critics interpreted Microsoft's move to cancel the launch of a smaller Surface model in May 2014 as a further sign that Microsoft, under new CEO Satya Nadella and new device head Stephen Elop (who joined Microsoft upon the purchase of Nokia's mobile phone business in September 2013, only to depart the company the following year), was planning to further downplay Windows RT, given that the company had shifted its attention towards a higher-end, productivity-oriented market with the Pro 3—one which would be inappropriate for Windows RT given its positioning and limitations. Analysts believed that Microsoft was planning to leverage its acquisition of Nokia's device business for future Windows RT devices, possibly under the Lumia brand; this strategy ultimately failed, and Microsoft would eventually leave the consumer mobile phone market, selling its assets to Foxconn and HMD Global in May 2016. Newer Intel processors for mobile devices had become more competitive with ARM equivalents in performance and battery life; this factor and other changes made by Microsoft, such as the removal of Windows OEM license fees on devices with screens smaller than 9 inches, spurred the creation of a market for lower-end tablets running the full Windows 8 operating system on Intel-compatible platforms, leaving further uncertainty over Microsoft's support of ARM outside of smartphones—where ARM chips remained ubiquitous. Such a device came in March 2015, when Microsoft unveiled a new low-end Surface model, the Intel Atom-based Surface 3; unlike previous low-end Surface models, Surface 3 did not use ARM and Windows RT. In June 2016, Microsoft announced that production of this device would end by December 2016, with sales ending the following month. No follow-up device was planned in this segment, signalling the company's departure from the low-end Windows consumer tablet market, whose future experts continued to debate. In 2021, Microsoft said it would stop distributing and updating Windows RT after January 10, 2023, the system's end-of-life date.
Successors Windows 10 influence On January 21, 2015, Microsoft unveiled Windows 10 Mobile, an edition of Windows 10 for smartphones and sub-8-inch tablets running on the ARM architecture; unlike RT, which was based upon the user experience of the PC version, Windows 10 on these devices is a continuation of the Windows Phone user experience that emphasizes the ability for developers to create "universal" Windows apps that can run across PCs, tablets, and phones, and only supports the modern-style interface and Windows apps (although on compatible devices, a limited desktop experience is available when connected to an external display). Following the event, a Microsoft spokesperson stated that the company was working on a Windows RT update that would provide "some of the functionality of Windows 10", but declined to offer any further details. As such, Microsoft does not officially consider Windows RT to be a supported upgrade path to Windows 10. Shortly afterwards, Microsoft ended production of both the Surface 2 and Lumia 2520. The "Update for Windows RT 8.1 feature improvement" (KB3033055), also referred to by Microsoft as "Windows 8.1 RT Update 3", was released on September 16, 2015; it adds a version of the updated Start menu seen in early preview versions of Windows 10 (which combines an application list with a sidebar of tiles), but otherwise contains no other significant changes to the operating system or its functionality, nor any support for Windows 10's application ecosystem. The Verge characterized this update as being similar to Windows Phone 7.8, which similarly backported user interface changes from its successor without making any other significant upgrades to the platform. Return of ARM and app limitations On December 7, 2016, Microsoft announced that as part of a partnership with Qualcomm, it planned to launch an ARM version of Windows 10 for Snapdragon-based devices, initially focusing on laptops. Unlike Windows RT, the ARM version of Windows 10 would allow the use of an x86 processor emulator to run Win32 desktop software, rather than only allowing apps from Windows Store. The following year, Microsoft announced the Always Connected PC brand, covering Windows 10 devices with cellular connectivity; the launch featured two Snapdragon 835-powered 2-in-1 laptops from Asus and HP, and an integration of Qualcomm's Snapdragon X16 gigabit LTE modem with AMD's Ryzen Mobile platform. On May 2, 2017, Microsoft unveiled Windows 10 S, an edition of Windows 10 designed primarily for low-end mobile devices targeting the education market (competing primarily with Google's Linux-based Chrome OS). Similarly to Windows RT, it restricted software installation to applications obtained via Windows Store. Windows 10 S was replaced by S Mode, a mode in which manufacturers can ship Windows 10 computers with the same restrictions, which the user can then turn off. References External links Windows RT 8.1: FAQ Windows 8 vs Windows RT 8: what's the difference? 2012 software ARM operating systems Mobile operating systems Tablet operating systems Windows 8 Discontinued versions of Microsoft Windows
Operating System (OS)
421
System Object Model (file format) In computing, the System Object Model (SOM) is a proprietary executable file format developed by Hewlett-Packard for its HP-UX and MPE/iX operating systems. In particular, SOM is the native format used for 32-bit application executables, object code, and shared libraries running under the PA-RISC family of processors. With the introduction of 64-bit processors, Hewlett-Packard adopted the Executable and Linkable Format (ELF) to represent the wider 64-bit program code, while still using SOM for applications running in 32-bit mode. Later, with the introduction of the Itanium processor family, HP-UX abandoned the SOM format in favor of ELF for both 32-bit and 64-bit application code. In HP-UX the SOM file format is sometimes called the a.out format and is described by C programming language structures in the header file "/usr/include/a.out.h". However, the SOM format is technically not the same as the standard a.out format used by many other Unix operating systems. Overview of the SOM file format A SOM file consists of a fixed-size header record followed by a number of sections, some of which are optional. The header always appears at the beginning of the file and contains the byte offsets and sizes of where the other sections are located within the file. Except for the header, the other sections may appear anywhere in the file, although the typical layout of a SOM file (assuming all sections are present) is as follows: Header Record Auxiliary Header Record Space Records Subspace Records Loader Fixup Records Space Strings Symbol Records Fixup Records Symbol Strings Compiler Records Data for Loadable Spaces Data for Unloadable Spaces Numeric fields are stored in big-endian byte order, the native byte order of the PA-RISC, and most are 32 bits wide. Character strings are generally encoded in 8-bit ASCII and are both prefixed with a 32-bit length indicator and null-terminated, like C strings. Most records are word-aligned (start at even-byte offsets), with padding introduced as necessary. See also Comparison of executable file formats External links HP-UX a.out(4) manual page, Hewlett-Packard The 32-bit PA-RISC Run-time Architecture Document, HP-UX 11.0 Version 1.0, Hewlett-Packard, 1997 The 32-bit PA-RISC Run-time Architecture Document, HP-UX 10.20 version 3.0, Hewlett-Packard, 1997. Also available at parisc-linux.org HP-UX Software Transition Kit Glossary, Hewlett-Packard (online) PA-RISC 1.1 Architecture Specifications Executable file formats HP software
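The offset-and-size indexing described above can be illustrated with a short parser. This is a minimal sketch, assuming invented field names rather than the actual structures declared in HP's /usr/include/a.out.h; it demonstrates only the general pattern of a fixed header whose big-endian 32-bit fields give the location and size of the other sections.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Read one big-endian 32-bit word, the native byte order of PA-RISC. */
static uint32_t read_be32(FILE *f)
{
    uint8_t b[4];
    if (fread(b, 1, 4, f) != 4) {
        fprintf(stderr, "unexpected end of file\n");
        exit(1);
    }
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s som-file\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 1;
    }

    /* Hypothetical header fields, for illustration only; the real SOM
     * header layout is defined in /usr/include/a.out.h on HP-UX. */
    uint32_t system_id = read_be32(f); /* architecture identifier  */
    uint32_t magic     = read_be32(f); /* file-type magic number   */
    uint32_t space_loc = read_be32(f); /* byte offset of records   */
    uint32_t space_len = read_be32(f); /* size of records in bytes */

    printf("system id 0x%08x, magic 0x%08x\n", system_id, magic);
    printf("space records at offset %u, %u bytes\n", space_loc, space_len);
    fclose(f);
    return 0;
}
```

Reading each word byte-by-byte keeps the parser correct on both big- and little-endian hosts, which matters for a format whose on-disk byte order is fixed by the PA-RISC architecture.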
Operating System (OS)
422
Zorin OS Zorin OS is a Linux distribution based on Ubuntu. It uses the GNOME 3 or Xfce 4 desktop environment by default, although the desktop is heavily customized in order to help Windows and macOS users transition to Linux easily. Wine and PlayOnLinux can be easily installed in Zorin OS, allowing users to run compatible Windows software for ease of transition from Windows. Zorin OS's creators maintain three free editions of the operating system and a "Pro" edition for purchase. The current releases are Zorin OS 16 Pro, Zorin OS 16 Pro Lite, Zorin OS 16 Core, Zorin OS 16 Core Lite, Zorin OS 16 Education and Zorin OS 16 Education Lite. The new editions continue to use Ubuntu's Linux kernel and the GNOME or Xfce interface. Features Zorin OS is fully graphical, with a graphical installer. For stability and security, it follows the long-term support releases of the main Ubuntu system. It uses its own software repositories as well as Ubuntu's repositories. These repositories are accessible through the common "apt-get" commands via the Linux terminal, or through a GUI-based software manager that provides an app store-like experience for users who don't wish to use the terminal. The OS also comes with a number of desktop layouts or themes to modify the desktop environment. The themes let users change the interface to resemble that of Microsoft Windows, macOS, or Ubuntu, keeping the interface familiar regardless of the system a user has come from. As with all GNOME-based desktop environments, the look and feel of the desktop can be modified easily using GNOME extensions. Free vs Pro Editions The Zorin team offers various flavors of Zorin OS. There are two free versions called "Core" and "Lite", with "Pro" versions of these available for download with a purchase through the project's website. The Pro versions offer extra layout themes for different operating systems, including Windows 11 and "Windows Classic" themes, and come preinstalled with popular FOSS programs such as Blender, as well as in-house software for managing passwords, casting content to Miracast-compatible screens, and more. These programs can also be installed individually for free. Finally, the Pro editions of Zorin OS also provide a collection of commissioned wallpapers to choose from, and installation technical support. Zorin OS Pro can be installed on multiple computers with one license, except for businesses and schools, which must purchase a license for each PC the OS is installed onto. About The Zorin OS company is based in Dublin, Ireland. The project was started in 2008 by co-founders Artyom and Kyrill Zorin. History Zorin OS was initially released on 1 July 2009. Earlier versions required users to do a clean install, but since version 12.4, the system's update manager can be used to upgrade existing installations. Version history Security Zorin OS comes with the Uncomplicated Firewall installed, although it is not enabled by default. Reception Zorin OS has been praised for its intuitive and familiar layouts, functionality, and installation process; as well as for making it easy to use a Windows-like layout, install NVIDIA drivers, and navigate an easy and intuitive software center. Notes and references External links Zorin OS on OpenSourceFeed Gallery Irish brands Linux distributions Ubuntu derivatives X86-64 Linux distributions
Operating System (OS)
423
Windows 3.0 Windows 3.0 is the third major release of Microsoft Windows, launched in 1990. It features a new graphical user interface (GUI) where applications are represented as clickable icons, as opposed to the list of file names seen in its predecessors. Later updates would expand the software's capabilities, one of which added multimedia support for sound recording and playback, as well as support for CD-ROMs. Windows 3.0 is the first version of Windows to perform well both critically and commercially. Critics and users considered its GUI to be a challenger to those of Apple Macintosh and Unix. Other praised features were the improved multitasking, customizability, and especially the utilitarian management of the computer memory whose limits had troubled users of Windows 3.0's predecessors. Microsoft was criticized by third-party developers for the bundling of its separate software with the operating environment, which they viewed as an anticompetitive practice. Windows 3.0 sold 10 million copies before it was succeeded by Windows 3.1 in 1992. On December 31, 2001, Microsoft declared Windows 3.0 obsolete and stopped providing support and updates for the system. Development history Before Windows 3.0, Microsoft had a partnership with IBM, where the latter had sold personal computers running on the former's MS-DOS since 1981. Microsoft had made previous attempts to develop a successful operating environment called Windows, and IBM declined to include the project in its product line. As MS-DOS was entering its fifth iteration, IBM demanded a version of DOS that could run in "protected mode", which would allow it to execute multiple programs at once, among other benefits. MS-DOS was originally designed to run in real mode and run only one program at a time, due to the capability limitations of the Intel 8088 microprocessor. Intel had later released the Intel 80286, which was designed to support such multitasking efficiently (with several different hardware features, including memory protection, hardware task switching, program privilege separation, and virtual memory, all absent on the earlier Intel x86 CPUs) and which could be directly connected to 16 times as much memory as the 8088 (and 8086). The two companies developed the next generation of DOS, OS/2. OS/2 software was not compatible with DOS, giving IBM an advantage. In late 1987, Windows/386 2.0 introduced a protected mode kernel that could multitask several DOS applications using virtual 8086 mode, but all Windows applications still ran in a shared virtual DOS machine. As the rest of the Microsoft team moved on to the OS/2 2.0 project, David Weise, a member of the Windows development team and a critic of IBM, believed that he could restart the Windows project. Microsoft needed programming tools that could run in protected mode, so it hired Murray Sargent, a physics professor from the University of Arizona who had developed a DOS extender and a debugging program that could work with protected-mode applications. Windows 3.0 originated in 1988 as an independent project by Weise and Sargent, who used the latter's debugger to improve the memory manager and run Windows applications in their own separate protected memory segments. In a few months, Weise and Sargent cobbled together a rough prototype that could run Windows versions of Word, Excel, and PowerPoint, then presented it to company executives, who were impressed enough to approve it as an official project.
When IBM learned of Microsoft's upcoming project, their relationship was damaged, but Microsoft asserted that it would cancel Windows after its launch and that it would continue to develop OS/2. Windows 3.0 was formally announced on May 22, 1990, in the New York City Center Theater, where Microsoft released it worldwide. The event had 6,000 attendees, and it was broadcast live in the Microsoft social fairs of seven other North American cities and twelve major cities outside North America. It cost Microsoft US$3 million to host the festivities—something its founder, Bill Gates, referred to as the "most extravagant, extensive, and expensive software introduction ever." Microsoft decided not to offer free runtime licenses of the software to applications vendors, as runtime versions of Windows lacked the capacity to multitask. Instead, the company offered upgrades for both full and runtime previous versions of Windows at a cost of US$50—considerably lower than the full license's suggested retail price of $149. The software could also be obtained by purchasing computers with it preinstalled from hardware manufacturers. The first of these manufacturers were Zenith Data Systems, Austin Computer Systems and CompuAdd, followed by more than 25 others; notably, IBM was not one of them. Microsoft had intended to make Windows 3.0 appealing to the general public. The company's "Entry Team", assigned to that task, was concerned that the public might perceive it to be no more than a tool for large enterprises, due to the software's high system requirements. Major game publishers did not see it as a potential game platform, instead sticking to DOS. Microsoft's product manager Bruce Ryan compiled games that the Windows team had designed in its spare time to create Microsoft Entertainment Pack, which included Tetris and Minesweeper. Little budget was put into the project, and none of it was spent on quality testing. Nevertheless, the Entertainment Pack was sold as a separate product, and it became so popular that it was followed by three other Entertainment Packs. On December 31, 2001, Microsoft dropped support for Windows 3.0, along with previous versions of Windows, Windows 95, Windows for Workgroups, and MS-DOS versions up to 6.22. Features Windows 3.0 features a significantly revamped graphical user interface (GUI), which was described as having a three-dimensional look similar to the Presentation Manager's, rather than the flat look of its predecessor, Windows 2.1x. It also includes technical improvements to the memory management to make better use of the capabilities of Intel's 80286 and 80386 processors. Dynamic Data Exchange is a multitasking protocol whereby multiple running applications dynamically exchange data with one another, i.e., when data in one application changes, so does the data in another. This feature had appeared in Windows previously, but until Windows 3.0, due to memory constraints, users were unable to use the protocol; they instead had to exit to DOS to run one application, close it, and open another to exchange data. Due to its support for the 386 and later processors, Windows 3.0 can also use virtual memory, which is a portion of a hard disk drive that is substituted for memory by the processor in the event that its own memory is exhausted. Like its predecessors, Windows 3.0 is not an operating system per se, but rather an operating environment that is designed for DOS and controls its functions.
The MS-DOS Executive file manager was replaced with Program Manager, the list-based File Manager, and Task List. Program Manager is a graphical shell composed of icons, each with an underlying title. The icons can be moved and arranged in any order, and their titles can be renamed. When double-clicked, these icons open corresponding applications or smaller windows within the Program Manager window called group windows. These group windows contain such icons and can be minimized to prevent cluttering of the Program Manager window's space. File Manager is another shell used to access or modify applications, but it displays them as files contained in directories in a list format. Its purpose, as an alternative to using DOS commands, is to facilitate moving files and directories. Task List displays all running applications and may also be used to terminate them, select a different program, cascade or tile the windows, and arrange minimized desktop icons. The Control Panel, where users can change settings to customize Windows and hardware, was also redesigned as an icon-based window. The drivers bundled with Windows 3.0 support up to 16 simultaneous colors from EGA, MCGA or VGA palettes, as opposed to the previous maximum of eight colors, though the operating environment itself supports graphics adapters offering resolutions and color counts greater than VGA's. Windows 3.0 also introduced the Palette Manager, a set of functions that allow applications to change the lookup palette of graphics cards displaying up to 256 colors in order to use the colors they need (a short usage sketch appears below). When the displayed windows together exceed the 256-color limit, Windows 3.0 gives priority to the active window, allowing that application's colors to be used without dithering, and then fills in the remaining areas. Windows 3.0 retains many of the simple applications from its predecessors, such as the text editor Notepad, the word processor Write, and the improved paint program Paintbrush. Calculator is expanded to include scientific calculations. Recorder is a new program that records macros, or sequences of keystrokes and mouse movements, which are then assigned to keys as shortcuts to perform complex functions quickly. Also, the earlier Reversi game was complemented with the card game Microsoft Solitaire, which would eventually be inducted into the World Video Game Hall of Fame in 2019. Another notable program is Help. Unlike DOS applications, which may build help functions into themselves, Windows Help is a separate and readily accessible application that accompanies all Windows programs that support it. Updates There are two updates known to have been published for Windows 3.0. One of them is Windows 3.0a, released in December 1990. It modified Windows' DOS extender—a program that enables DOS applications to access extended memory—to prevent errors caused by software calling into real-mode code when Windows is loaded in standard mode. It also simplified the installation process and alleviated crashes associated with networking, printing, and low-memory conditions. Windows 3.0 with Multimedia Extensions Windows 3.0 with Multimedia Extensions 1.0 (MME) was released to third-party manufacturers in October 1991. The application programming interface introduced Media Control Interface, designed for any media-related device such as graphics and audio cards, scanners, and videotape players.
It also supported recording and playing digital audio, MIDI devices, screensavers and analog joysticks, as well as CD-ROM drives, which were then becoming increasingly available. Other features included additional applets, such as an alarm clock and Media Player, used to run media files. MME supports stereo sound, 16-bit audio depth, and sampling rates of up to 44.1 kHz. System requirements The official system requirements of Windows 3.0 and of its substantial update, Windows 3.0 with Multimedia Extensions, depend on the operating mode, as described below. The processor and memory minimum requirements for the original version are those needed to run Windows in real mode, the lowest of the three operating modes. This mode severely limits the multitasking capabilities of Windows, although it can still use expanded memory, which is memory added by installing expanded memory boards or memory managers. However, it also provides backward compatibility with as much hardware and software designed for DOS as possible, and it may be used to run DOS applications and older Windows applications not optimized for Windows 3.0 if running them in higher operating modes is not possible. Standard mode requires at least an 80286 processor, and although the memory required is unchanged, the mode does allow the processor to use extended memory for running applications. 386 enhanced mode requires at least an 80386 processor and two megabytes of memory. While the other modes can run DOS applications in full-screen only and must suspend DOS applications in order to run Windows programs and vice versa, DOS applications in 386 enhanced mode can be run windowed and concurrently with Windows applications. Unlike the other modes, this one cannot be used to run DOS applications that use DOS extenders incompatible with the DPMI specifications. Normally, Windows will start in the highest operating mode the computer can use, but the user may force it into a lower mode by typing WIN /R or WIN /S at the DOS command prompt. If the user selects an operating mode that cannot be used due to lack of RAM or CPU support, Windows merely boots into the next lowest one. Reception Windows 3.0 is considered to be the first version of Windows to receive critical acclaim. Users and critics universally lauded its icon-based interface and the ensuing ease of performing operations, as well as the improved multitasking and the greater control users had over customizing their environments. Computerworld considered the software to share the same benefits as OS/2 and Unix. Garry Ray of Lotus considered this version of Windows the first of the environment to bear "serious long-term consideration." Bill Howard of PC Magazine found its user interface easy to use, though not quite as intuitive as the Macintosh's. The editor of InfoWorld, Michael J. Miller, had faith that PC users would fully transition from the preceding text-only environment to the GUI, with Windows 3.0 as their primary choice. One critical aspect of Windows 3.0 was how it managed memory. Before its release, users of previous versions of Windows were burdened with trying to circumvent memory constraints to utilize those versions' touted capabilities. The Windows software occupied a large amount of memory, and users regularly experienced system slowdowns and often exceeded memory limits.
Windows 3.0 also had relatively high memory requirements by 1990 standards, but with the three memory modes, it was praised for using memory more efficiently, removing the 640-kilobyte limit that had existed in computers running on Microsoft software since DOS, and supporting more powerful CPUs. Ted Needleman of the computer magazine Modern Electronics called Windows 3.0's GUI "state-of-the-art" and compared Microsoft's previous attempts to produce such a GUI to the Apple Lisa, Apple's own early attempt and the predecessor of its far more successful Macintosh. He cautioned that the seemingly cheap upgrade cost of US$50 was less attractive once the system requirements and the need to upgrade any installed applications for compatibility were considered. He also cautioned that the software's advantages could be realized only by running Windows applications. However, in February 1991, PC Magazine noted a vast array of applications designed specifically for Windows 3.0, including many that had yet to be available for OS/2. It also cited two other factors leading to the operating environment's success: one of them was the inexpensive cost of the hardware needed to run it compared to the Macintosh, and the other was its focus on fully utilizing hardware that was relatively powerful by the standards of the time. Amid the unprecedented success of Windows 3.0, Microsoft came under attack by critics as well as the United States Federal Trade Commission, which alleged that the company had attempted to dominate the applications market by luring its competitors into developing software for IBM's OS/2 while it was developing its own for Windows. At the time of Windows 3.0's release, Microsoft had only 10 and 15 percent of the market shares for spreadsheets and word processors, respectively, but those figures had risen to over 60 percent in 1995, overtaking previously dominant competitors such as Lotus Development Corporation and WordPerfect. Microsoft did indeed encourage developers to write applications for OS/2, but it also intended Windows 3.0 to be a "low-end" alternative to the latter, with Gates referring to OS/2 as the operating system of the 1990s. The Windows brand was also intended to be canceled after this version's release. The investigations into, and the eventual lawsuit against, Microsoft led to a settlement on July 15, 1994, in which Microsoft agreed not to bundle separate software packages with its operating products. It marked the first time that the company had ever been investigated for anticompetitive practices. Sales Windows 3.0 is also considered the first Windows version to see commercial success. At the time of its release, of the 40 million personal computers installed, only five percent ran previous versions of Windows, but within its first week of availability, Windows 3.0 rose to become the top-selling business software. After six months, two million licenses were sold. Its success was interdependent with that of the PC industry, exemplified by an explosion of demand for, and subsequent production of, Intel's more powerful microprocessor, the 80486. Windows became so widely used in businesses that Brian Livingston of InfoWorld wrote in October 1991 that "a company with no PCs that run Windows is almost like a company without a fax machine." Microsoft had spent a total of $10 million on its marketing campaign for the software, including the $3 million for its release.
By the time its successor, Windows 3.1, was released, sales totaled about 10 million licenses, and a year later the Windows series would overtake DOS as the bestselling application of all time. Windows 3.0 is regarded in retrospect as a turning point for Microsoft, credited with enabling the company's later dominance of the operating system market and its improved applications market share. Microsoft had had close ties with IBM since early in its history, but the unexpected success of its new product led the two companies to recast their relationship, under which they would continue to sell each other's operating products until 1993. After the fiscal year of 1990, Microsoft reported revenues of US$1.18 billion, with $337 million appearing in the fourth quarter. This annual figure was up from $803.5 million in fiscal 1989, and it made Microsoft the first microcomputer software company to reach the $1 billion mark in one year. Microsoft officials attributed the results to the sales of Windows 3.0. References External links Windows history: Windows 3.0 takes off, an article detailing a brief history of Windows 3.0 1990 software DOS software 3.0 Products and services discontinued in 2001 History of Microsoft History of software Products introduced in 1990
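The Palette Manager mentioned under Features above can be illustrated with a short sketch. CreatePalette, SelectPalette, and RealizePalette are the actual function names (they survive in the modern Win32 API), but the 16-entry gray ramp and the minimal error handling below are simplifications for illustration, and the sketch targets a modern Windows toolchain rather than period Windows 3.0 code.

```c
#include <windows.h>
#include <stdlib.h>

/* Build a 16-entry logical palette (a gray ramp) and ask the Palette
 * Manager to map it onto the display's hardware palette slots.
 * Build (MinGW): gcc palette.c -lgdi32 -luser32 */
int main(void)
{
    const int n = 16;
    LOGPALETTE *lp = malloc(sizeof(LOGPALETTE) + (n - 1) * sizeof(PALETTEENTRY));
    if (!lp)
        return 1;
    lp->palVersion = 0x300;      /* palette structure version, from Windows 3.0 */
    lp->palNumEntries = (WORD)n;
    for (int i = 0; i < n; i++) {
        BYTE v = (BYTE)(i * 255 / (n - 1));
        PALETTEENTRY e = { v, v, v, 0 };  /* peRed, peGreen, peBlue, peFlags */
        lp->palPalEntry[i] = e;
    }

    HPALETTE hpal = CreatePalette(lp);
    HDC hdc = GetDC(NULL);                  /* device context for the screen */
    HPALETTE old = SelectPalette(hdc, hpal, FALSE);
    RealizePalette(hdc);                    /* Palette Manager maps the entries */

    /* ...drawing with palette-relative colors would happen here... */

    SelectPalette(hdc, old, FALSE);
    ReleaseDC(NULL, hdc);
    DeleteObject(hpal);
    free(lp);
    return 0;
}
```

On modern, non-palettized displays these calls succeed but have no visible effect; the mapping and active-window arbitration described above mattered on 256-color adapters.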
Operating System (OS)
424
VPS/VM VPS/VM (Virtual Processing System/Virtual Machine) was an operating system that ran on IBM System/370 – IBM 3090 computers at Boston University, in general use from 1977 to around 1990, and in limited use until at least 1993. During the 1980s VPS/VM was the main operating system of Boston University and often ran up to 250 users at a time, when rival VM/CMS computing systems could only run 120 or so users. Each user ran in a virtual machine under VM, an IBM hypervisor operating system. VM provided the virtual IBM 370 machine on which the VPS operating system ran. The VM code was modified to allow all the VPS virtual machines to share pages of storage with read and write access. VPS utilized a shared nucleus, as well as pages used to facilitate passing data from one VPS virtual machine to another. This organization is very similar to that of MVS, substituting address spaces for virtual machines. Origins According to Craig Estey, who worked at the Boston University Academic Computing Center between 1974 and 1977: Description An IBM-based operating system, quite like some DOS/VSE time-sharing options, VPS/VM provided the user with an IBM 3270 full-screen terminal (a green screen) and a user interface similar to VM/CMS. Each user had an 11-megabyte virtual machine (with a strange 3-megabyte memory gap in the middle) and, from 1984 onwards, could run several programs at a time. The operating system was sparsely documented but was written first by Charles Brown, a BU doctoral student, and John H. Porter, a physics PhD, who later became the head of the VPS project (and eventually Boston University's vice president for information systems and technology). Marian Moore wrote much of the later VM code necessary to run the VPS system. Josie Bondoc wrote some of the later VPS additions, like UNIX piping. Many MVS/VM programs ran on VPS/VM, such as XEDIT and compilers for Pascal, PL/I, C and COBOL. These MVS/VM programs ran under an OS simulation program that simulated the OS/VM supervisor calls (SVCs). Margorie Orr supervised the OS simulation program's development and maintenance. Some of the programmers who wrote parts of the OS simulation package, or maintained it, were Margorie Orr, Timothy Greiser, Daniel Levbre, John Coldwell Lotz, and Paul Cheffers. Michael Krugman wrote some of the early main utilities, such as IFMSG, the JCL language for VPS, and also MAIL, the early email program. SENDMAIL, written by Francis Costanzo, implemented email under the BITNET system. Some pre-SQL databases installed on VPS were FOCUS and NOMAD2. Michael Gettes wrote an early and quick HELP system. The file system was not hierarchical, and originally each file had to have a unique 8-character filename. This eventually grew onerous, and each user was given their own private directory. Tapes and IBM disk files were supported, as well as native VPS text files. There was a very simple shell, and no filename patterns were supported except through the PAW computer program, written by Paul Cheffers. The graphics department, under Glenn Bresnahan, essentially ported over most of the UNIX utilities in the mid-1980s. William Marshall did much of the early system documentation, as well as providing PL/I support. Joe Dempty was the User Services director. Diana Robanske was a statistics consultant and ran student assistance services from 1980 to 1985. John Houlihan was also a User Services statistics consultant. VPS/VM was a working pre-GUI IBM operating system, and could often run more users than other IBM TSO-based systems.
At a time when most university-based systems provided only editors and compilers, VPS provided these services to a Boston University community of some 10,000 users for over 10 years. VPS/VM policy was for the operating system and main utilities to be written in IBM 370 assembler language. This decision restricted the development of the system, and it ultimately could not compete with the UNIX-based systems that eventually replaced it. However, VPS eventually modeled many of the features of then-current operating systems around the world, and served as a training ground for the IBM 370 assembler programmers that many companies needed in the 1980s. See also Time-sharing system evolution References Paul Cheffers, the article's original author, worked on the VPS/VM operating system from 1981 to 1985. Time-sharing operating systems Boston University IBM mainframe operating systems
Operating System (OS)
425
Outline of computing The following outline is provided as an overview of and topical guide to computing: Computing – activity of using and improving computer hardware and computer software. Branches of computing Computer science (see also Outline of computer science) Information technology – refers to the application (esp in businesses and other organisations) of computer science, that is, its use by mankind (see also Outline of information technology) Information systems – refers to the study of the application of IT to business processes Computer engineering (see also Outline of computer engineering) Software engineering (see also Outline of software engineering) Computer science Computer science – (outline) Computer science Theory of computation Scientific computing Metacomputing Autonomic computing Computers See information processor for a high-level block diagram. Computer Computer hardware History of computing hardware Processor design Computer network Computer performance by orders of magnitude Instruction-level taxonomies After the commoditization of memory, attention turned to optimizing CPU performance at the instruction level. Various methods of speeding up the fetch-execute cycle include: designing instruction set architectures with simpler, faster instructions: RISC as opposed to CISC Superscalar instruction execution VLIW architectures, which make parallelism explicit Software Software engineering Computer programming Computational Software patent Firmware System software Device drivers Operating systems Utilities Application Software Databases Geographic information system Spreadsheet Word processor Programming languages interpreters Compilers Assemblers Speech recognition Speech synthesis History of computing History of computing History of computing hardware from the tally stick to the quantum computer History of computer science History of computer animation History of computer graphics History of computer networking History of computer vision Punched card Unit record equipment IBM 700/7000 series IBM 1400 series IBM System/360 History of IBM magnetic disk drives Business computing Accounting software Computer-aided design Computer-aided manufacturing Computer-aided dispatch Customer relationship management Data warehouse Decision support system Electronic data processing Enterprise resource planning Geographic information system Hospital information system Human resource management system Management information system Material requirements planning Product Lifecycle Management Strategic enterprise management Supply chain management Utility Computing Human factors Accessible computing Computer-induced medical problems Computer user satisfaction Human-computer interaction (outline) Human-centered computing Computer network Wired and wireless computer network Types Wide area network Metropolitan area network City Area Network Village Area Network Local area network Wireless local area network Mesh networking Collaborative workspace Internet Network management Computing technology based wireless networking (CbWN) The main goal of CbWN is to optimize the system performance of the flexible wireless network. 
Source coding Codebook design for side information based transmission techniques such as Precoding Wyner-Ziv coding for cooperative wireless communications Security Dirty paper coding for cooperative multiple antenna or user precoding Intelligence Game theory for wireless networking Cognitive communications Flexible sectorization, Beamforming and SDMA Software Software defined radio (SDR) Programmable air-interface Downloadable algorithm: e.g., downloadable codebook for Precoding Computer security Cryptology – cryptography – information theory Cracking – demon dialing – Hacking – war dialing – war driving Social engineering – Dumpster diving Physical security – Black bag job Computer security Computer surveillance Defensive programming Malware Security engineering Data Numeric data Integral data types – bit, byte, etc. Real data types: Floating point (Single precision, Double precision, etc.) Fixed point Rational number Decimal Binary-coded decimal (BCD) Excess-3 BCD (XS-3) Biquinary-coded decimal representation: Binary – Octal – Decimal – Hexadecimal (hex) Computer mathematics – Computer numbering formats Character data storage: Character – String – text representation: ASCII – Unicode – Multibyte – EBCDIC (Widecharacter, Multicharacter) – FIELDATA – Baudot Other data topics Data compression Digital signal processing Image processing Data management Routing Data Protection Act Classes of computers There are several terms which describe classes, or categories, of computers: Analog computer Calculator Desktop computer Desktop replacement computer Digital computer Embedded computer Home computer Laptop Mainframe Minicomputer Microcomputer Personal computer Portable computer Personal digital assistant (aka PDA, or Handheld computer) Programmable logic controller or PLC Server Smartphone Supercomputer Tablet computer Video game console Workstation Organizations Companies – current Apple Asus Avaya Dell Fujitsu Gateway Computers Groupe Bull HCL Hewlett-Packard Hitachi, Ltd. Intel Corporation IBM Lenovo Microsoft NEC Corporation Novell Panasonic Red Hat Silicon Graphics Sun Microsystems Unisys Companies – historic Acorn, bought by Olivetti Amdahl Corporation, bought by Fujitsu Bendix Corporation Burroughs Corporation, merged with Sperry to become Unisys Compaq, bought by Hewlett-Packard Control Data Cray Data General Digital Equipment Corporation, bought by Compaq, later bought by Hewlett-Packard Digital Research – produced system software for early Intel microprocessor-based computers Elliott Brothers English Electric Company Ferranti General Electric, computer division bought by Honeywell, then Bull Honeywell, computer division bought by Bull ICL Leo Lisp Machines, Inc. 
Marconi Micro Instrumentation and Telemetry Systems produced the first widely sold microcomputer system (kit and assembled) Nixdorf Computer, bought by Siemens Norsk Data Olivetti Osborne Packard Bell PERQ Prime Computer Raytheon Royal McBee RCA Scientific Data Systems, sold to Xerox Siemens Sinclair Research, created the Sinclair ZX Spectrum, ZX80 and ZX81 Southwest Technical Products Corporation produced microcomputer systems (kit and assembled), peripherals, and software based on Motorola 6800 and 6809 microprocessor chips Sperry, which bought UNIVAC, and later merged with Burroughs to become Unisys Symbolics UNIVAC Varian Data Machines, a division of Varian Associates which was bought by Sperry Wang Professional organizations Association for Computing Machinery (ACM) Association for Survey Computing (ASC) British Computer Society (BCS) Canadian Information Processing Society (CIPS) Computer Measurement Group (CMG) Institute of Electrical and Electronics Engineers (IEEE), in particular the IEEE Computer Society Institution of Electrical Engineers International Electrotechnical Commission (IEC) Standards bodies International Electrotechnical Commission (IEC) International Organization for Standardization (ISO) Institute of Electrical and Electronics Engineers (IEEE) Internet Engineering Task Force (IETF) World Wide Web Consortium (W3C) Open standards bodies See also Open standard Apdex Alliance – Application Performance Index Application Response Measurement (ARM) Computing publications Digital Bibliography & Library Project – lists over 910,000 bibliographic entries on computer science and several thousand links to the home pages of computer scientists. Persons influential in computing Major figures associated with making personal computers popular. Microsoft Bill Gates Paul Allen Apple Inc. Steve Jobs Steve Wozniak External links FOLDOC: the Free On-Line Dictionary Of Computing Computing Computing
Operating System (OS)
426
Opcode In computing, an opcode (abbreviated from operation code, also known as instruction machine code, instruction code, instruction syllable, instruction parcel or opstring) is the portion of a machine language instruction that specifies the operation to be performed. Besides the opcode itself, most instructions also specify the data they will process, in the form of operands. In addition to opcodes used in the instruction set architectures of various CPUs, which are hardware devices, they can also be used in abstract computing machines as part of their byte code specifications. Overview Specifications and format of the opcodes are laid out in the instruction set architecture (ISA) of the processor in question, which may be a general CPU or a more specialized processing unit. Opcodes for a given instruction set can be described through the use of an opcode table detailing all possible opcodes. Apart from the opcode itself, an instruction normally also has one or more specifiers for operands (i.e. data) on which the operation should act, although some operations may have implicit operands, or none at all. There are instruction sets with nearly uniform fields for opcode and operand specifiers, as well as others (the x86 architecture for instance) with a more complicated, variable-length structure. Instruction sets can be extended through the use of opcode prefixes, which add a subset of new instructions made up of existing opcodes following reserved byte sequences. Operands Depending on the architecture, the operands may be register values, values in the stack, other memory values, I/O ports (which may also be memory-mapped), etc., specified and accessed using more or less complex addressing modes. The types of operations include arithmetic, data copying, logical operations, and program control, as well as special instructions (such as CPUID and others). Assembly language, or just assembly, is a low-level programming language that uses mnemonic instructions and operands to represent machine code. This enhances the readability while still giving precise control over the machine instructions. Most programming is currently done using high-level programming languages, which are typically easier to read and write. These languages need to be compiled (translated into machine code) by a system-specific compiler, or run through other compiled programs. Software instruction sets Opcodes can also be found in so-called byte codes and other representations intended for a software interpreter rather than a hardware device. These software-based instruction sets often employ slightly higher-level data types and operations than most hardware counterparts, but are nevertheless constructed along similar lines. Examples include the byte code found in Java class files, which are then interpreted by the Java Virtual Machine (JVM), the byte code used in GNU Emacs for compiled Lisp code, .NET Common Intermediate Language (CIL), and many others. See also Gadget (machine instruction sequence) Illegal opcode Opcode database Syllable (computing) References Further reading Machine code
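To make the idea of an opcode stream concrete, the following is a minimal sketch of a byte-code interpreter of the kind described above. The four-instruction set (PUSH, ADD, PRINT, HALT) is invented for this example; real software instruction sets such as JVM byte code are far richer, but they follow the same fetch-decode-execute cycle keyed on the opcode.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* An invented instruction set: one-byte opcodes, where PUSH takes a
 * one-byte immediate operand and the rest take no operands. */
enum { OP_HALT = 0x00, OP_PUSH = 0x01, OP_ADD = 0x02, OP_PRINT = 0x03 };

static void run(const uint8_t *code)
{
    int stack[64], sp = 0;            /* small operand stack */
    for (size_t pc = 0; ; ) {
        uint8_t opcode = code[pc++];  /* fetch the opcode */
        switch (opcode) {             /* decode and execute */
        case OP_PUSH:  stack[sp++] = code[pc++]; break; /* operand follows */
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        case OP_HALT:  return;
        default:
            fprintf(stderr, "illegal opcode 0x%02x\n", opcode);
            return;
        }
    }
}

int main(void)
{
    /* PUSH 2, PUSH 3, ADD, PRINT, HALT -- prints 5. */
    const uint8_t program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
    run(program);
    return 0;
}
```

Feeding the interpreter an undefined byte exercises the illegal-opcode path, the software analogue of the hardware illegal-instruction case noted in the See also list.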
Operating System (OS)
427
ACM SIGOPS ACM SIGOPS is the Association for Computing Machinery's Special Interest Group on Operating Systems, an international community of students, faculty, researchers, and practitioners associated with research and development related to operating systems. The organization sponsors prestigious international conferences related to computer systems, operating systems, computer architectures, distributed computing, and virtual environments. In addition, the organization offers multiple awards recognizing outstanding participants in the field, including the Dennis M. Ritchie Doctoral Dissertation Award, in honor of Dennis Ritchie, co-creator of the renowned C programming language and the Unix operating system. History In 1965, Henriette Avram started the ACM Special Interest Committee on Time-Sharing (SICTIME), and Arthur M. Rosenberg became the first chair. In 1968, the name was changed to ACM SIGOPS. By 1969, the organization included nearly 1000 members. Conferences ACM SIGOPS sponsors the following industry conferences, some independently and some in partnership with industry participants such as ACM SIGPLAN, USENIX, Oracle, Microsoft, and VMware. APSYS: Asia-Pacific Workshop on Systems ASPLOS: International Conference on Architectural Support for Programming Languages and Operating Systems EuroSys: European Conference on Computer Systems OSDI: USENIX Symposium on Operating Systems Design and Implementation PODC: Symposium on Principles of Distributed Computing SOCC: International Symposium on Cloud Computing SOSP: Symposium on Operating Systems Principles SYSTOR: ACM International Systems and Storage Conference VEE: International Conference on Virtual Execution Environments Hall of Fame ACM SIGOPS includes a Hall of Fame Award, started in 2005, recognizing influential papers from ten or more years in the past. Notable recipients include: Leslie Lamport (2013) Barbara Liskov (2012) Richard Rashid Dennis Ritchie (2002) Journal ACM SIGOPS publishes the Operating Systems Review (OSR), a forum for topics including operating systems and architecture for multiprogramming, multiprocessing, and time-sharing, and computer system modeling and analysis. See also Cloud computing Computer engineering Computer multitasking Computer science Computing Kernel List of operating systems Operating system Timeline of operating systems Virtual machine References External links SIGOPS Association ACM SIGOPS France Association for Computing Machinery Special Interest Groups International professional associations
Operating System (OS)
428
Tulip System-1 The Tulip System-1 was a 16-bit personal computer based on the Intel 8086 and made by Tulip Computers, formerly CompuData Systems, an import company for the Exidy Sorcerer. Its 6845-based video display controller could display 80×24 text in 8 different fonts for supporting different languages, including a (Videotex-based) font with 2×3 pseudo-graphic symbols for displaying 160×72-pixel graphics in text mode. The video display generator could also display graphics at 384×288 or 768×288 (color) or 768×576 (monochrome) pixel resolution using its built-in NEC 7220 video display coprocessor, which had hardware-supported drawing functions and a very advanced set of bit-block transfers; it could generate lines, arcs, circles, ellipses, elliptical arcs, filled arcs, filled circles, filled ellipses, filled elliptical arcs, and many other shapes. Its memory could be upgraded in units of 128 KB up to 896 KB (much more than the 640 KB of the original PC). It included a SASI hard disk interface (a predecessor of the SCSI standard) and was optionally delivered with a 5 MB or 10 MB hard disk. The floppy disk size was 400 KB (10 sectors, instead of 8 or 9 with the IBM PC) or 800 KB (80 tracks). It ran at 8 MHz with a true 16-bit CPU, almost twice the speed of the IBM PC XT, which was launched only a few months earlier in July 1983. An 8087 coprocessor could be added for math, which increased the speed to more than 200 kflops, near mainframe performance at that time. After initially using CP/M-86, it quickly switched to generic MS-DOS 2.00. There was a rudimentary IBM BIOS emulator, which allowed the user to run WordStar and a few other IBM PC programs, and CompuData B.V. shipped WordStar and some other software adapted for this computer. Programming support was provided by CompuData B.V. with MS-BASIC, MS-Pascal and MS-FORTRAN. On a private basis, TeX and Turbo Pascal were ported to the Tulip System-1. References External links Tulip I on web site of the Dutch Tulip association (in Dutch) A user's personal experience with a Tulip System-I IBM PC compatibles
Operating System (OS)
429
MacBook The MacBook is a brand of Macintosh notebook computers designed and marketed by Apple Inc. that have used Apple's macOS operating system since 2006. It replaced the PowerBook and iBook brands during the Mac transition to Intel processors, announced in 2005. The current lineup consists of the MacBook Air (2008–present) and the MacBook Pro (2006–present). Two different lines simply named "MacBook" existed from 2006 to 2012 and 2015 to 2019. On November 10, 2020, Apple announced models of the MacBook Air and MacBook Pro incorporating the new Apple M1 system on a chip. The latest generation of Apple silicon, the M1 Pro and M1 Max, is available only in the 14-inch and 16-inch MacBook Pro. Overview The MacBook family was initially housed in designs similar to the iBook and PowerBook lines which preceded them, now making use of a unibody aluminum construction first introduced with the MacBook Air. This new construction also has a black plastic keyboard that was first used on the MacBook Air, which itself was inspired by the sunken keyboard of the original polycarbonate MacBooks. The now-standardized keyboard brings congruity to the MacBook line, with black keys on a metallic aluminum body. The lids of the MacBook family are held closed by a magnet with no mechanical latch, a design element first introduced with the polycarbonate MacBook. Memory, drives, and batteries were accessible in the old MacBook lineup, though the newest compact lineup solders or glues all such components in place. All of the current MacBooks feature backlit keyboards. The MacBook was discontinued from February 2012 until March 2015, when a new model featuring an ultraportable design and an all-metal enclosure was introduced. It was again discontinued in July 2019 following a price reduction of the 3rd-generation MacBook Air and discontinuation of the 2nd-generation model. MacBook family models Current MacBook Air The MacBook Air is Apple's least expensive notebook computer. While the 1st generation was released as a premium ultraportable positioned above the 2006–2012 MacBook, lowered prices on subsequent iterations and the discontinuation of that MacBook have made it serve as the entry-level Macintosh portable. The 2010 to 2017 base model came with a 13-inch screen and was Apple's thinnest notebook computer until the introduction of the MacBook in March 2015. This MacBook Air model features two USB 3.0 Type-A ports and a Thunderbolt 2 port, as well as an SDXC card slot (only on the 13-inch model). This model of MacBook Air did not have a Retina display. A MacBook Air model with an 11-inch screen was available from October 2010 to October 2016. In 2017, the MacBook Air received a small refresh, with the processor speed increased to 1.8 GHz. On October 30, 2018, the MacBook Air underwent a major design change, dropping the USB Type-A ports, MagSafe, and the SD card slot in favor of two USB-C/Thunderbolt 3 ports and a headphone jack. It was updated with a Retina display and Intel Y-series Amber Lake i5 CPUs, as well as a Force Touch trackpad, a third-generation butterfly-mechanism keyboard, and the Touch ID sensor found in the fourth-generation MacBook Pro, but without the Touch Bar. The base price was also raised, although the base configuration of the 2017 model was retained until July 9, 2019, when it was discontinued along with the Retina MacBook. The base price of this model was also dropped to $1099 ($999 for students) on the same day.
On November 10, 2020, Apple announced that the MacBook Air would use the new Apple M1 system on a chip. The new Air does not have a fan, ensuring silent operation but limiting the M1 chip's speed in sustained operations. Performance was claimed to be higher than that of most contemporary Intel laptops. MacBook Pro The MacBook Pro is Apple's higher-end notebook, available in both 13-inch and 16-inch configurations. The current generation of the 13-inch MacBook Pro was introduced in October 2016. It features a touch-sensitive OLED display strip located in place of the function keys, a Touch ID sensor integrated with the power button, and four USB-C ports that also serve as Thunderbolt 3 ports. The 13-inch model was also available in a less expensive configuration with conventional function keys and only two USB-C/Thunderbolt 3 ports, but since July 2019, the base MacBook Pro model has had the Touch Bar as well as quad-core processors, similar to the higher-end models, although it still has only two USB-C/Thunderbolt 3 ports. The May 4, 2020 refresh adopted many of the upgrades seen in the 16-inch 2019 MacBook Pro, including the scissor-mechanism keyboard ("Magic Keyboard") and a physical Escape button. On November 13, 2019, Apple released the 16-inch MacBook Pro, replacing the 15-inch model of the previous generation, and replacing the butterfly keyboard with a scissor-mechanism keyboard (dubbed the Magic Keyboard by Apple), reverting to the old "inverted-T" arrow key layout, replacing the virtual Escape key on the Touch Bar with a physical key, and replacing the AMD Polaris and Vega graphics of the 15-inch model with options from AMD's Navi graphics architecture, as well as reengineering the speakers, microphone array, and the thermal system compared to the 15-inch model; the latter had thermal limitations due to its design. In addition, the 16-inch model is available with up to 64 GB of 2667 MHz DDR4 RAM and up to 8 TB of SSD storage. It also has a 100 Wh battery; this is the largest battery that can be easily carried onto a commercial airliner under U.S. Transportation Security Administration rules. On November 10, 2020, Apple announced a new model of the MacBook Pro incorporating the new Apple M1 system on a chip, while continuing to sell versions of the MacBook Pro with Intel processors. The MacBook Pro with the M1 SoC incorporates a fan, allowing sustained operation of the M1 chip at its full performance level, which is claimed to match or exceed that of the Intel versions. Unlike the Intel Pro models, the M1 version only comes with a 13-inch screen, has only two Thunderbolt ports, and has a maximum of 16 GB of random-access memory (RAM). On October 18, 2021, Apple announced updated 14-inch and 16-inch MacBook Pro models during an online event. They are based on the M1 Pro and M1 Max, Apple's first professional-focused ARM-based systems on a chip. This release addressed many criticisms of the previous generation by reintroducing hard function keys in place of the Touch Bar, an HDMI 2.0 port, an SDXC reader, and MagSafe charging. Other additions include a Liquid Retina XDR display with thinner bezels and an iPhone-like notch, ProMotion supporting a 120 Hz variable refresh rate, a 1080p webcam, Wi-Fi 6, three Thunderbolt 4 ports, a six-speaker sound system supporting Dolby Atmos, and support for a third 6K display on M1 Max models. The 16-inch version is bundled with a 140 W GaN power supply that supports USB-C Power Delivery 3.1, though only MagSafe supports full-speed charging, as the machine's USB-C ports are limited to 100 W.
Discontinued The original MacBook was discontinued on July 20, 2011, for consumer purchase and in February 2012 for institutions, superseded by the second-generation MacBook Air, whose 11-inch model introduced in 2010 had the same starting price as the MacBook. Sales of Mac computers amounted to 18.21 million units in Apple's 2018 fiscal year.

The Retina MacBook was a line of Macintosh portable computers introduced in March 2015. It was discontinued on July 9, 2019, having been superseded by the 13-inch Retina MacBook Air, which had a lower base price ($1,299 for the MacBook versus $1,199 for the 2018 MacBook Air and $1,099 for the 2019 MacBook Air), additional USB-C/Thunderbolt 3 ports (the MacBook has only one USB-C port versus two USB-C/Thunderbolt 3 ports on the MacBook Air), and better performance.

All Intel-based MacBook Air and MacBook Pro models were discontinued in November 2020 and October 2021 respectively, replaced by Apple silicon models.

See also Comparison of Macintosh models MacBook Air MacBook Pro
Personal computer A personal computer (PC) is a multi-purpose microcomputer whose size, capabilities, and price make it feasible for individual use. Personal computers are intended to be operated directly by an end user, rather than by a computer expert or technician, and unlike large, costly minicomputers and mainframes, they are not time-shared by many people at once. Primarily in the late 1970s and 1980s, the term home computer was also used.

Institutional or corporate computer owners in the 1960s had to write their own programs to do any useful work with the machines. While personal computer users may develop their own applications, usually these systems run commercial software, free-of-charge software ("freeware"), which is most often proprietary, or free and open-source software, which is provided in "ready-to-run", or binary, form. Software for personal computers is typically developed and distributed independently from the hardware or operating system manufacturers. Many personal computer users no longer need to write their own programs to make any use of a personal computer, although end-user programming is still feasible. This contrasts with mobile systems, where software is often available only through a manufacturer-supported channel, and end-user program development may be discouraged by lack of support by the manufacturer.

Since the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the personal computer market, first with MS-DOS and then with Microsoft Windows. Alternatives to Microsoft's Windows operating systems occupy a minority share of the industry; these include Apple's macOS and free and open-source Unix-like operating systems such as Linux. The advent of personal computers and the concurrent Digital Revolution have significantly affected the lives of people in all countries.

Terminology The term "PC" is an initialism for "personal computer". While the IBM Personal Computer incorporated the designation in its model name, the term originally described personal computers of any brand. In some contexts, "PC" is used to contrast with "Mac", an Apple Macintosh computer; since none of these Apple products were mainframes or time-sharing systems, they too were "personal computers", just not "PC" (brand) computers. In 1995, a CBS segment on the growing popularity of the PC reported that "for many newcomers PC stands for Pain and Confusion".

History In the history of computing, early experimental machines could be operated by a single attendant. For example, ENIAC, which became operational in 1946, could be run by a single, albeit highly trained, person. This mode pre-dated the batch programming, or time-sharing, modes in which multiple users were connected through terminals to mainframe computers. Computers intended for laboratory, instrumentation, or engineering purposes were built that could be operated by one person in an interactive fashion; examples include the Bendix G15 and LGP-30 of 1956, and the Soviet MIR series of computers developed from 1965 to 1969. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single person. The personal computer was made possible by major advances in semiconductor technology.
In 1959, the silicon integrated circuit (IC) chip was developed by Robert Noyce at Fairchild Semiconductor, and the metal-oxide-semiconductor (MOS) transistor was developed by Mohamed Atalla and Dawon Kahng at Bell Labs. The MOS integrated circuit was commercialized by RCA in 1964, and the silicon-gate MOS integrated circuit was developed by Federico Faggin at Fairchild in 1968. Faggin later used silicon-gate MOS technology to develop the first single-chip microprocessor, the Intel 4004, in 1971. The first microcomputers, based on microprocessors, were developed during the early 1970s, and widespread commercial availability of microprocessors from the mid-1970s onwards made computers cheap enough for small businesses and individuals to own.

In what was later to be called the Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of features that would later become staples of personal computers: e-mail, hypertext, word processing, video conferencing, and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time.

Early personal computers, generally called microcomputers, were often sold in kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. Minimal programming was done with toggle switches to enter instructions, and output was provided by front-panel lamps; practical use required adding peripherals such as keyboards, computer displays, disk drives, and printers. The Micral N was the earliest commercial, non-kit microcomputer based on a microprocessor, the Intel 8008. It was built starting in 1972, and a few hundred units were sold. It had been preceded by the Datapoint 2200 in 1970, for which the Intel 8008 had been commissioned, though not accepted for use; the CPU design implemented in the Datapoint 2200 became the basis for the x86 architecture used in the original IBM PC and its descendants.

In 1973, the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP (Special Computer APL Machine Portable) based on the IBM PALM processor, with a Philips compact cassette drive, small CRT, and full-function keyboard. SCAMP emulated an IBM 1130 minicomputer in order to run APL/1130. In 1973, APL was generally available only on mainframe computers, and most desktop-sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because SCAMP was the first to emulate APL/1130 performance on a portable, single-user computer, PC Magazine in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal computer". This seminal, single-user portable computer now resides in the Smithsonian Institution in Washington, D.C. Successful demonstrations of the 1973 SCAMP prototype led to the IBM 5100 portable microcomputer, launched in 1975 with the ability to be programmed in both APL and BASIC for engineers, analysts, statisticians, and other business problem-solvers. In the late 1960s such a machine would have been nearly as large as two desks and would have weighed about half a ton. Another desktop portable APL machine, the MCM/70, was demonstrated in 1973 and shipped in 1974; it used the Intel 8008 processor.

A seminal step in personal computing was the 1973 Xerox Alto, developed at Xerox's Palo Alto Research Center (PARC). It had a graphical user interface (GUI) which later served as inspiration for Apple's Macintosh and Microsoft's Windows operating system.
The Alto was a demonstration project, not commercialized, as the parts were too expensive to be affordable. Also in 1973, Hewlett-Packard introduced fully BASIC-programmable microcomputers that fit entirely on top of a desk, including a keyboard, a small one-line display, and a printer. The Wang 2200 microcomputer of 1973 had a full-size cathode-ray tube (CRT) and cassette tape storage. These were generally expensive, specialized computers sold for business or scientific uses.

1974 saw the introduction of what is considered by many to be the first true "personal computer", the Altair 8800 created by Micro Instrumentation and Telemetry Systems (MITS). Based on the 8-bit Intel 8080 microprocessor, the Altair is widely recognized as the spark that ignited the microcomputer revolution as the first commercially successful personal computer. The computer bus designed for the Altair was to become a de facto standard in the form of the S-100 bus, and the first programming language for the machine was Microsoft's founding product, Altair BASIC.

In 1976, Steve Jobs and Steve Wozniak sold the Apple I computer circuit board, which was fully prepared and contained about 30 chips. The Apple I differed from the other kit-style hobby computers of the era: at the request of Paul Terrell, owner of the Byte Shop, Jobs and Wozniak received their first purchase order, for 50 Apple I computers, on the condition that the machines be assembled and tested rather than sold as kits. Terrell wanted computers he could sell to a wide range of users, not just experienced electronics hobbyists with the soldering skills to assemble a computer kit. Even so, the Apple I as delivered was still technically a kit computer, as it had no power supply, case, or keyboard when it was delivered to the Byte Shop.

The first successfully mass-marketed personal computer to be announced was the Commodore PET, revealed in January 1977; however, it was back-ordered and not available until later that year. Three months later, in April, the Apple II (usually referred to as the "Apple") was announced, with the first units shipped on 10 June 1977, and the TRS-80 from Tandy Corporation / Tandy Radio Shack followed in August 1977, selling over 100,000 units during its lifetime. Together, these three machines were referred to as the "1977 trinity". Mass-market, ready-assembled computers had arrived, and allowed a wider range of people to use computers, focusing more on software applications and less on development of the processor hardware.

In 1977 the Heath company introduced personal computer kits known as Heathkits, starting with the Heathkit H8, followed by the Heathkit H89 in late 1979. With the purchase of the Heathkit H8, buyers received the chassis and CPU card to assemble themselves; additional hardware, such as the H8-1 memory board containing 4K of RAM, could also be purchased in order to run software. The Heathkit H11 model, released in 1978, was one of the first 16-bit personal computers; however, due to its high retail cost of $1,295, it was discontinued in 1982.

During the early 1980s, home computers were further developed for household use, with software for personal productivity, programming, and games. They typically could be used with a television already in the home as the computer display, with low-detail blocky graphics and a limited color range, and text about 40 characters wide by 25 characters tall.
Sinclair Research, a UK company, produced the ZX Series: the ZX80 (1980), ZX81 (1981), and ZX Spectrum; the latter was introduced in 1982 and totaled 8 million units sold. Following came the Commodore 64, which totaled 17 million units sold, and the Amstrad CPC series (464–6128). In the same year, the NEC PC-98 was introduced, a very popular personal computer that sold more than 18 million units. Another famous personal computer, the revolutionary Amiga 1000, was unveiled by Commodore on 23 July 1985. The Amiga 1000 featured a multitasking, windowing operating system, color graphics with a 4096-color palette, stereo sound, a Motorola 68000 CPU, 256 KB RAM, and an 880 KB 3.5-inch disk drive, for US$1,295.

Somewhat larger and more expensive systems were aimed at office and small-business use. These often featured 80-column text displays but might not have had graphics or sound capabilities. These microprocessor-based systems were still less costly than time-shared mainframes or minicomputers. Workstations were characterized by high-performance processors and graphics displays, with large-capacity local disk storage, networking capability, and a multitasking operating system.

Eventually, due to the influence of the IBM PC on the personal computer market, personal computers and home computers lost any technical distinction. Business computers acquired color graphics and sound, while home computers and game systems used the same processors and operating systems as office machines. Mass-market computers had graphics capabilities and memory comparable to dedicated workstations of a few years before. Even local area networking, originally a way to allow business computers to share expensive mass storage and peripherals, became a standard feature of personal computers used at home.

IBM's first PC was introduced on 12 August 1981. In 1982 "The Computer" was named Machine of the Year by Time magazine. In the 2010s, several companies such as Hewlett-Packard and Sony sold off their PC and laptop divisions; as a result, the personal computer was declared dead several times during this period.

An increasingly important set of uses for personal computers relied on the ability of the computer to communicate with other computer systems, allowing interchange of information. Experimental public access to a shared mainframe computer system was demonstrated as early as 1973 in the Community Memory project, but bulletin board systems and online service providers became more commonly available after 1978. Commercial Internet service providers emerged in the late 1980s, giving public access to the rapidly growing network. In 1991, the World Wide Web was made available for public use. The combination of powerful personal computers with high-resolution graphics and sound, the infrastructure provided by the Internet, and the standardization of access methods of the Web browsers established the foundation for a significant fraction of modern life, from bus timetables through unlimited distribution of free videos to online user-edited encyclopedias.

Types Stationary Workstation A workstation is a high-end personal computer designed for technical, mathematical, or scientific applications. Intended primarily to be used by one person at a time, workstations are commonly connected to a local area network and run multi-user operating systems.
Workstations are used for tasks such as computer-aided design, drafting and modeling, computation-intensive scientific and engineering calculations, image processing, architectural modeling, and computer graphics for animation and motion picture visual effects.

Desktop computer Before the widespread use of PCs, a computer that could fit on a desk was remarkably small, leading to the "desktop" nomenclature. More recently, the phrase usually indicates a particular style of computer case. Desktop computers come in a variety of styles, ranging from large vertical tower cases to small models which can be tucked behind or rest directly beneath (and support) LCD monitors. Although the term "desktop" originally described the horizontally aligned cases designed to literally rest on top of a desk, it now covers vertical tower cases as well, even though these often stand on the floor or underneath desks. Both styles of case hold the system's hardware components, such as the motherboard, processor chip, and other internal operating parts. Desktop computers have an external monitor with a display screen and an external keyboard, which are plugged into ports on the back of the computer case. Desktop computers are popular for home and business computing applications as they leave space on the desk for multiple monitors.

A gaming computer is a desktop computer that generally comprises a high-performance video card, processor, and RAM to improve the speed and responsiveness of demanding video games.

An all-in-one computer (also known as a single-unit PC) is a desktop computer that combines the monitor and processor within a single unit. A separate keyboard and mouse are standard input devices, with some monitors including touchscreen capability. The processor and other working components are typically reduced in size relative to standard desktops, located behind the monitor, and configured similarly to laptops.

The nettop computer was introduced by Intel in February 2008, characterized by low cost and lean functionality. These were intended to be used with an Internet connection to run Web browsers and Internet applications.

A home theater PC (HTPC) combines the functions of a personal computer and a digital video recorder. It is connected to a TV set or an appropriately sized computer display, and is often used as a digital photo viewer, music and video player, TV receiver, and digital video recorder. HTPCs are also referred to as media center systems or media servers. The goal is to combine many or all components of a home theater setup into one box. HTPCs can also connect to services providing on-demand movies and TV shows. They can be purchased pre-configured with the hardware and software needed to add television programming to the PC, or can be assembled from components.

Keyboard computers are computers housed inside keyboards. Examples include the Commodore 64, MSX, Amstrad CPC, Atari ST, and ZX Spectrum.

Portable The potential utility of portable computers was apparent early on. Alan Kay described the Dynabook in 1972, but no hardware was developed. The Xerox NoteTaker was produced in a very small experimental batch around 1978.
In 1975, the IBM 5100 could fit into a transport case, making it a portable computer, but it weighed about 50 pounds. Before the introduction of the IBM PC, portable computers consisting of a processor, display, disk drives, and keyboard in a suitcase-style portable housing allowed users to bring a computer home from the office or to take notes in a classroom. Examples include the Osborne 1, the Kaypro, and the Commodore SX-64. These machines were AC-powered and included a small CRT display screen. The form factor was intended to allow these systems to be taken on board an airplane as carry-on baggage, though their high power demand meant that they could not be used in flight. The integrated CRT display made for a relatively heavy package, but these machines were more portable than their contemporary desktop equivalents. Some models had standard or optional connections to drive an external video monitor, allowing a larger screen or use with video projectors.

IBM PC-compatible suitcase-format computers became available soon after the introduction of the PC, with the Compaq Portable being a leading example of the type. Later models included a hard drive to give roughly equivalent performance to contemporary desktop computers. The development of thin plasma displays and LCD screens permitted a somewhat smaller form factor, called the "lunchbox" computer. The screen formed one side of the enclosure, with a detachable keyboard and one or two half-height floppy disk drives mounted facing the ends of the computer. Some variations included a battery, allowing operation away from AC outlets.

Notebook computers such as the TRS-80 Model 100 and Epson HX-20 had roughly the plan dimensions of a sheet of typing paper (ANSI A or ISO A4). These machines had a keyboard with slightly reduced dimensions compared to a desktop system, and a fixed LCD display screen coplanar with the keyboard. The displays were usually small, with 8 to 16 lines of text and sometimes only a 40-column line length, but these machines could operate for extended times on disposable or rechargeable batteries. Although they did not usually include internal disk drives, this form factor often included a modem for telephone communication, and often had provisions for external cassette or disk storage. Later, clam-shell-format laptop computers with similar small plan dimensions were also called "notebooks".

Laptop A laptop computer is designed for portability, with a "clamshell" design in which the keyboard and computer components are on one panel and a hinged second panel contains a flat display screen. Closing the laptop protects the screen and keyboard during transportation. Laptops generally have a rechargeable battery, enhancing their portability. To save power, weight, and space, laptop graphics chips are in many cases integrated into the CPU or chipset and use system RAM, resulting in reduced graphics performance compared with desktop machines, which more typically have a graphics card installed; for this reason, desktop computers are usually preferred over laptops for gaming purposes. Unlike desktop computers, only minor internal upgrades (such as memory and hard disk drive) are feasible, owing to the limited space and power available. Laptops have the same input and output ports as desktops for connecting to external displays, mice, cameras, storage devices, and keyboards. Laptops are also somewhat more expensive than desktops, as their miniaturized components are themselves expensive.
A desktop replacement computer is a portable computer that provides the full capabilities of a desktop computer. Such computers are currently large laptops. This class of computers usually includes more powerful components and a larger display than generally found in smaller portable computers, and may have limited battery capacity or no battery.

Netbooks, also called mini notebooks or subnotebooks, were a subgroup of laptops suited for general computing tasks and accessing web-based applications. Initially, the primary defining characteristics of netbooks were the lack of an optical disc drive, smaller size, and lower performance than full-size laptops. By mid-2009 netbooks had been offered to users "free of charge" with the purchase of an extended service contract for a cellular data plan. Ultrabooks and Chromebooks have since filled the gap left by netbooks; unlike the generic netbook name, Ultrabook and Chromebook are specifications by Intel and Google respectively.

Tablet A tablet uses a touchscreen display, which can be controlled with either a stylus pen or a finger. Some tablets may use a "hybrid" or "convertible" design, offering a keyboard that can be removed as an attachment, or a screen that can be rotated and folded directly over the top of the keyboard. Some tablets may use a desktop-PC operating system such as Windows or Linux, or may run an operating system designed primarily for tablets. Many tablet computers have USB ports, to which a keyboard or mouse can be connected.

Smartphone Smartphones are often similar to tablet computers, the difference being that smartphones always have cellular integration. They are generally smaller than tablets, and may not have a slate form factor.

Ultra-mobile PC The ultra-mobile PC (UMPC) is a small tablet computer. It was developed by Microsoft, Intel, and Samsung, among others. UMPCs typically feature the Windows XP, Windows Vista, Windows 7, or Linux operating system, and low-voltage Intel Atom or VIA C7-M processors.

Pocket PC A pocket PC is a hardware specification for a handheld-sized computer (personal digital assistant, PDA) that runs the Microsoft Windows Mobile operating system; it may also have the capability to run an alternative operating system like NetBSD or Linux. Pocket PCs have many of the capabilities of desktop PCs. Numerous applications are available for handhelds adhering to the Microsoft Pocket PC specification, many of which are freeware. Microsoft-compliant Pocket PCs can also be used with many add-ons such as GPS receivers, barcode readers, RFID readers, and cameras. In 2007, with the release of Windows Mobile 6, Microsoft dropped the name Pocket PC in favor of a new naming scheme: devices without an integrated phone are called Windows Mobile Classic instead of Pocket PC, while devices with an integrated phone and a touch screen are called Windows Mobile Professional.

Palmtop and handheld computers Palmtop PCs were miniature pocket-sized computers running DOS that first appeared in the late 1980s, typically in a clamshell form factor with a keyboard. Non-x86 devices were often called palmtop computers; an example is the Psion Series 3. In later years, Microsoft released a hardware specification called the Handheld PC for devices running the Windows CE operating system.
Hardware Computer hardware is a comprehensive term for all physical and tangible parts of a computer, as distinguished from the data it contains or operates on, and the software that provides instructions for the hardware to accomplish tasks. Some sub-systems of a personal computer may contain processors that run a fixed program, or firmware, such as a keyboard controller; firmware is usually not changed by the end user of the personal computer. Most 2010s-era computers require users only to plug in the power supply, monitor, and other cables.

A typical desktop computer consists of a computer case (or "tower"), a metal chassis that holds the power supply, motherboard, hard disk drive, and often an optical disc drive. Most towers have empty space where users can add additional components. External devices such as a computer monitor or visual display unit, keyboard, and a pointing device (mouse) are usually found in a personal computer.

The motherboard connects the processor, memory, and peripheral devices together. The RAM, graphics card, and processor are in most cases mounted directly onto the motherboard: the central processing unit (microprocessor chip) plugs into a CPU socket, while the RAM modules plug into corresponding RAM sockets. Some motherboards have the video display adapter, sound, and other peripherals integrated onto the motherboard, while others use expansion slots for graphics cards, network cards, or other I/O devices. The graphics card or sound card may employ a breakout box to keep the analog parts away from the electromagnetic radiation inside the computer case.

Disk drives, which provide mass storage, are connected to the motherboard with one cable and to the power supply through another cable. Usually, disk drives are mounted in the same case as the motherboard; expansion chassis are also made for additional disk storage. For large amounts of data, a tape drive can be used, or extra hard disks can be put together in an external case.

The keyboard and the mouse are external devices plugged into the computer through connectors on an I/O panel on the back of the computer case. The monitor is also connected to the input/output (I/O) panel, either through an onboard port on the motherboard or a port on the graphics card.

Capabilities of the personal computer's hardware can sometimes be extended by the addition of expansion cards connected via an expansion bus. Standard peripheral buses often used for adding expansion cards in personal computers include PCI, PCI Express (PCIe), and AGP (a high-speed PCI bus dedicated to graphics adapters, found in older computers). Most modern personal computers have multiple physical PCI Express expansion slots, with some having PCI slots as well.

A peripheral is "a device connected to a computer to provide communication (such as input and output) or auxiliary functions (such as additional storage)". Peripherals generally connect to the computer through USB ports or inputs located on the I/O panel. USB flash drives provide portable storage using flash memory, allowing users to access their files on any computer. Memory cards also provide portable storage; commonly used in other electronics such as mobile phones and digital cameras, the information stored on these cards can be accessed using a memory card reader to transfer data between devices.
Webcams, which are either built into computer hardware or connected via USB, are video cameras that record video in real time, to be saved to the computer or streamed elsewhere over the Internet. Game controllers can be plugged in via USB and used as input devices for video games, as an alternative to the keyboard and mouse. Headphones and speakers can be connected via USB or through an auxiliary port on the I/O panel, and allow users to listen to audio accessed on their computer, although speakers may also require an additional power source to operate. Microphones can be connected through an audio input port on the I/O panel and allow the computer to convert sound into an electrical signal to be used or transmitted by the computer.

Software Computer software is any kind of computer program, procedure, or documentation that performs some task on a computer system. The term includes application software such as word processors, which perform productive tasks for users; system software such as operating systems, which interface with computer hardware to provide the necessary services for application software; and middleware, which controls and co-ordinates distributed systems. Software applications are common for word processing, Internet browsing, Internet faxing, e-mail and other digital messaging, multimedia playback, playing computer games, and computer programming. The user may have significant knowledge of the operating environment and application programs, but is not necessarily interested in programming, nor even able to write programs for the computer. Therefore, most software written primarily for personal computers tends to be designed with simplicity of use, or "user-friendliness", in mind. However, the software industry continuously provides a wide range of new products for use in personal computers, targeted at both the expert and the non-expert user.

Operating system An operating system (OS) manages computer resources and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system. An operating system performs basic tasks such as controlling and allocating memory, prioritizing system requests, controlling input and output devices, facilitating computer networking, and managing files.

Common contemporary desktop operating systems are Microsoft Windows, macOS, Linux, Solaris, and FreeBSD. Windows, macOS, and Linux all have server and personal variants. With the exception of Microsoft Windows, the designs of each of them were inspired by, or directly inherited from, the Unix operating system.

Early personal computers used operating systems that supported command-line interaction, using an alphanumeric display and keyboard. The user had to remember a large range of commands to, for example, open a file for editing or move text from one place to another. Starting in the early 1960s, the advantages of a graphical user interface began to be explored, but widespread adoption required lower-cost graphical display equipment. By 1984, mass-market computer systems using graphical user interfaces were available; by the turn of the 21st century, text-mode operating systems were no longer a significant fraction of the personal computer market.

Applications Generally, a computer user uses application software to carry out a specific task.
System software supports applications and provides common services such as memory management, network connectivity, and device drivers, all of which may be used by applications but are not directly of interest to the end user. A simplified analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system): the power plant merely generates electricity, which is of no real use until harnessed to an application like the electric light that performs a service benefiting the user.

Typical examples of software applications are word processors, spreadsheets, and media players. Multiple applications bundled together as a package are sometimes referred to as an application suite. Microsoft Office and LibreOffice, which bundle together a word processor, a spreadsheet, and several other discrete applications, are typical examples. The separate applications in a suite usually have a user interface with some commonality, making it easier for the user to learn and use each application. Often, they may have some capability to interact with each other in ways beneficial to the user; for example, a spreadsheet might be able to be embedded in a word processor document even though it had been created in the separate spreadsheet application.

End-user development tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, and graphics and animation scripts; even email filters are a kind of user software. Users create this software themselves and often overlook how important it is.

Gaming PC gaming is popular in the high-end PC market. According to an April 2018 market analysis by Newzoo, PC gaming has fallen behind both console and mobile gaming in terms of market share, sitting at 24% of the entire market. The market for PC gaming continues to grow, however, and was expected to generate $32.3 billion in revenue in 2021. PC gaming is at the forefront of competitive gaming, known as esports, with games such as Overwatch and Counter-Strike: Global Offensive leading an industry that was expected to surpass a billion dollars in revenue in 2019.

Sales Market share In 2001, 125 million personal computers were shipped, in comparison to 48,000 in 1977. More than 500 million personal computers were in use in 2002, and one billion personal computers had been sold worldwide from the mid-1970s up to this time. Of the latter figure, 75% were professional or work-related, while the rest were sold for personal or home use. About 81.5% of personal computers shipped had been desktop computers, 16.4% laptops, and 2.1% servers. The United States had received 38.8% (394 million) of the computers shipped, Europe 25%, and 11.7% had gone to the Asia-Pacific region, the fastest-growing market as of 2002. The second billion was expected to be sold by 2008. Almost half of all households in Western Europe had a personal computer, and a computer could be found in 40% of homes in the United Kingdom, compared with only 13% in 1985.

Global personal computer shipments were 350.9 million units in 2010, 308.3 million units in 2009, and 302.2 million units in 2008. Shipments were 264 million units in 2007, according to iSuppli, up 11.2% from 239 million in 2006. In 2004, global shipments were 183 million units, an 11.6% increase over 2003. In 2003, 152.6 million computers were shipped, at an estimated value of $175 billion.
In 2002, 136.7 million PCs were shipped, at an estimated value of $175 billion. In 2000, 140.2 million personal computers were shipped, at an estimated value of $226 billion. Worldwide shipments of personal computers surpassed the 100-million mark in 1999, growing to 113.5 million units from 93.3 million units in 1998. In 1999, Asia had 14.1 million units shipped.

As of June 2008, the number of personal computers in use worldwide hit one billion, while another billion was expected to be reached by 2014. Mature markets like the United States, Western Europe, and Japan accounted for 58% of the worldwide installed PCs. The emerging markets were expected to double their installed PCs by 2012 and to account for 70% of the second billion PCs. About 180 million computers (16% of the existing installed base) were expected to be replaced, and 35 million to be dumped into landfill, in 2008. The whole installed base grew 12% annually.

Based on International Data Corporation (IDC) data for Q2 2011, for the first time China surpassed the US in PC shipments, with 18.5 million and 17.7 million units respectively. This trend reflects the rise of emerging markets as well as the relative stagnation of mature regions.

In the developed world, there had been a vendor tradition of continually adding functions to maintain high prices of personal computers. However, since the introduction of the One Laptop per Child foundation and its low-cost XO-1 laptop, the computing industry started to pursue low prices as well. Although introduced only one year earlier, 14 million netbooks were sold in 2008. Besides the regular computer manufacturers, companies making especially rugged versions of computers have sprung up, offering alternatives for people operating their machines in extreme weather or environments.

In 2011, the consulting firm Deloitte predicted that smartphones and tablet computers would surpass PC sales (as has happened since 2012). As of 2013, worldwide sales of PCs had begun to fall as many consumers moved to tablets and smartphones. Sales of 90.3 million units in the fourth quarter of 2012 represented a 4.9% decline from sales in the fourth quarter of 2011. Global PC sales fell sharply in the first quarter of 2013, according to IDC data. The 14% year-over-year decline was the largest on record since the firm began tracking in 1994, and double what analysts had been expecting. The decline of Q2 2013 PC shipments marked the fifth straight quarter of falling sales. "This is horrific news for PCs," remarked an analyst. "It's all about mobile computing now. We have definitely reached the tipping point." Data from Gartner showed a similar decline for the same time period. China's Lenovo Group bucked the general trend, as strong sales to first-time buyers in the developing world allowed the company's sales to stay flat overall. Windows 8, which was designed to look similar to tablet/smartphone software, was cited as a contributing factor in the decline of new PC sales. "Unfortunately, it seems clear that the Windows 8 launch not only didn't provide a positive boost to the PC market, but appears to have slowed the market," said IDC Vice President Bob O'Donnell.

In August 2013, Credit Suisse published research findings that attributed around 75% of the operating profit share of the PC industry to Microsoft (operating system) and Intel (semiconductors). According to IDC, in 2013 PC shipments dropped by 9.8%, the greatest drop ever, in line with consumer trends toward mobile devices.
In the second quarter of 2018, PC sales grew for the first time since the first quarter of 2012. According to research firm Gartner, the growth came mainly from the business market, while the consumer market experienced decline.

Average selling price Selling prices of personal computers steadily declined due to lower costs of production and manufacture, while the capabilities of computers increased. In 1975, an Altair kit sold for only around US$400, but required customers to solder components into circuit boards; peripherals required to interact with the system in alphanumeric form, instead of blinking lights, would add another $2,000, and the resultant system was of use only to hobbyists. At their introduction in 1981, the US$1,795 price of the Osborne 1 and its competitor Kaypro was considered an attractive price point; these systems had text-only displays and only floppy disks for storage. By 1982, Michael Dell observed that a personal computer system selling at retail for about US$3,000 was made of components that cost the dealer about $600; the typical gross margin on a computer unit was around $1,000. The total value of personal computer purchases in the US in 1983 was about $4 billion, comparable to total sales of pet food. By late 1998, the average selling price of personal computer systems in the United States had dropped below $1,000.

For Microsoft Windows systems, the average selling price (ASP) showed a decline in 2008/2009, possibly due to low-cost netbooks, reaching $569 for desktop computers and $689 for laptops at U.S. retail in August 2008. In 2009, the ASP had fallen further, to $533 for desktops and $602 for notebooks by January, and to $540 and $560 in February. According to research firm NPD, the average selling price of all Windows portable PCs fell from $659 in October 2008 to $519 in October 2009.

Environmental impact External costs of environmental impact are not fully included in the selling price of personal computers. Personal computers have become a large contributor to the 50 million tons of discarded electronic waste generated annually, according to the United Nations Environment Programme. To address the electronic waste issue affecting developing countries and the environment, extended producer responsibility (EPR) acts have been implemented in various countries and states. In the absence of comprehensive national legislation or regulation on the export and import of electronic waste, the Silicon Valley Toxics Coalition and BAN (Basel Action Network) teamed up with electronic recyclers in the US and Canada to create an e-steward program for the orderly disposal of electronic waste. Some organizations oppose EPR regulation, claiming that manufacturers naturally move toward reduced material and energy use.

See also List of home computers Public computer Portable computer Desktop replacement computer Quiet PC Pocket PC Market share of personal computer vendors Personal Computer Museum Enthusiast computer

Further reading Accidental Empires: How the boys of Silicon Valley make their millions, battle foreign competition, and still can't get a date, Robert X. Cringely, Addison-Wesley Publishing (1992); PC Magazine, Vol. 2, No. 6, November 1983, "SCAMP: The Missing Link in the PC's Past?"

External links How Stuff Works pages: Dissecting a PC; How PCs Work; How to Upgrade Your Computer; How to Build a Computer; Global archive with product data-sheets of PCs and Workstations
System programming language A system programming language is a programming language used for system programming; such languages are designed for writing system software, which usually requires different development approaches when compared with application software. Edsger Dijkstra refers to these languages as Machine Oriented High Order Languages, or mohol.

General-purpose programming languages tend to focus on generic features that allow programs written in the language to use the same code on different platforms. Examples of such languages include ALGOL and Pascal. This generic quality typically comes at the cost of denying direct access to the machine's internal workings, and this often has negative effects on performance. System languages, in contrast, are designed not for compatibility, but for performance and ease of access to the underlying hardware, while still providing high-level programming concepts like structured programming. Examples include SPL and ESPOL, both of which are similar to ALGOL in syntax but tuned to their respective platforms. Others are cross-platform but designed to work close to the hardware, like BLISS, JOVIAL, and BCPL. Some languages straddle the system and application domains, bridging the gap between these uses. The canonical example is C, which is used widely for both system and application programming; some modern languages, such as Rust and Swift, do the same.

Features In contrast with application languages, system programming languages typically offer more direct access to the physical hardware of the machine; an archetypical system programming language in this sense was BCPL. System programming languages often lack built-in input/output (I/O) facilities because a system-software project usually develops its own I/O mechanisms or builds on basic monitor I/O or screen management facilities. The distinction between languages used for system programming and application programming became blurred over time with the widespread popularity of PL/I, C, and Pascal.

History The earliest system software was written in assembly language, primarily because there was no alternative, but also for reasons including efficiency of object code, compilation time, and ease of debugging. Application languages such as FORTRAN were used for system programming, although they usually still required some routines to be written in assembly language.

Mid-level languages Mid-level languages "have much of the syntax and facilities of a higher level language, but also provide direct access in the language (as well as providing assembly language) to machine features." The earliest of these was ESPOL on Burroughs mainframes in about 1960, followed by Niklaus Wirth's PL360 (first written on a Burroughs system as a cross compiler), which had the general syntax of ALGOL 60 but whose statements directly manipulated CPU registers and memory. Other languages in this category include MOL-360 and PL/S. As an example, a typical PL360 statement is R9 := R8 and R7 shll 8 or R6, signifying that registers 8 and 7 should be and'ed together, the result shifted left 8 bits, the result of that or'ed with the contents of register 6, and the final result placed into register 9.

Higher-level languages While PL360 is at the semantic level of assembly language, another kind of system programming language operates at a higher semantic level, but has specific extensions designed to make the language suitable for system programming.
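For comparison, the PL360 register statement quoted above could be expressed in a higher-level system language such as C roughly as follows. This is a minimal sketch: the variable names merely stand in for CPU registers, which standard C cannot address directly, and the explicit grouping mirrors PL360's strict left-to-right evaluation as described above.

    #include <stdint.h>

    /* Rough C rendering of the PL360 statement
       R9 := R8 and R7 shll 8 or R6
       evaluated strictly left to right:
       ((R8 AND R7) shifted left 8 bits) OR R6 */
    uint32_t pl360_example(uint32_t r8, uint32_t r7, uint32_t r6)
    {
        return ((r8 & r7) << 8) | r6;   /* the result goes to "R9" */
    }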
An early example of a language at this higher level is LRLTRAN, which extended Fortran with features for character and bit manipulation, pointers, and directly addressed jump tables.

Subsequently, languages such as C were developed, where the combination of features was sufficient to write system software, and a compiler could be developed that generated efficient object programs on modest hardware. Such a language generally omits features that cannot be implemented efficiently, and adds a small number of machine-dependent features needed to access specific hardware abilities; inline assembly code, such as C's asm statement, is often used for this purpose (a brief sketch follows at the end of this entry). Although many such languages were developed, C and C++ are the ones which survived.

System Programming Language (SPL) is also the name of a specific language on the HP 3000 computer series, used for its operating system HP Multi-Programming Executive (MPE) and other parts of its system software.

See also Ousterhout's dichotomy PreScheme

External links System Programming Languages
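As an illustration of the machine-dependent escape hatch mentioned above, here is a sketch of an x86 I/O-port read wrapped in GCC-style inline assembly. The asm syntax shown is a GCC extension rather than standard C, and the function is only an example of the pattern, not code from any particular system.

    #include <stdint.h>

    /* Reads one byte from an x86 I/O port, an operation plain C has
       no construct for. The asm statement emits a single "inb"
       instruction; the "=a" constraint pins the result to register
       AL, and "Nd" allows the port number in DX or as an immediate. */
    static inline uint8_t inb(uint16_t port)
    {
        uint8_t value;
        __asm__ volatile ("inb %1, %0" : "=a"(value) : "Nd"(port));
        return value;
    }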
System bus A system bus is a single computer bus that connects the major components of a computer system, combining the functions of a data bus to carry information, an address bus to determine where it should be sent or read from, and a control bus to determine its operation. The technique was developed to reduce costs and improve modularity, and although popular in the 1970s and 1980s, more modern computers use a variety of separate buses adapted to more specific needs. The system-level bus (as distinct from a CPU's internal datapath buses) connects the CPU to memory and I/O devices. Typically a system-level bus is designed for use as a backplane.

Background scenario Many early computers were based on the First Draft of a Report on the EDVAC, published in 1945. In what became known as the Von Neumann architecture, a central control unit and arithmetic logic unit (ALU, which he called the central arithmetic part) were combined with computer memory and input and output functions to form a stored-program computer. The report presented a general organization and theoretical model of the computer, but not the implementation of that model. Soon designs integrated the control unit and ALU into what became known as the central processing unit (CPU).

Computers in the 1950s and 1960s were generally constructed in an ad-hoc fashion; for example, the CPU, memory, and input/output units were each one or more cabinets connected by cables. Engineers used the common techniques of standardized bundles of wires, and extended the concept as backplanes were used to hold printed circuit boards in these early machines. The name "bus" was already used for "bus bars" that carried electrical power to the various parts of electric machines, including early mechanical calculators. The advent of integrated circuits vastly reduced the size of each computer unit, and buses became more standardized. Standard modules could be interconnected in more uniform ways and were easier to develop and maintain.

Description To provide even more modularity with reduced cost, memory and I/O buses (and the required control and power buses) were sometimes combined into a single unified system bus. Modularity and cost became important as computers became small enough to fit in a single cabinet (and customers expected similar price reductions). Digital Equipment Corporation (DEC) further reduced cost for mass-produced minicomputers, and mapped I/O into the memory bus, so that the devices appeared to be memory locations. This was implemented in the Unibus of the PDP-11 around 1969, eliminating the need for a separate I/O bus. Even computers such as the PDP-8 without memory-mapped I/O were soon implemented with a system bus, which allowed modules to be plugged into any slot. Some authors called this a new streamlined "model" of computer architecture.

Many early microcomputers (with a CPU generally on a single integrated circuit) were built with a single system bus, starting with the S-100 bus in the Altair 8800 computer system in about 1975. The IBM PC used the Industry Standard Architecture (ISA) bus as its system bus in 1981. The passive backplanes of early models were replaced with the standard of putting the CPU and RAM on a motherboard, with only optional daughterboards or expansion cards in system bus slots. The Multibus became a standard of the Institute of Electrical and Electronics Engineers as IEEE standard 796 in 1983. Sun Microsystems developed the SBus in 1989 to support smaller expansion cards.
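From software, memory-mapped I/O of the kind the Unibus popularized still looks the same today: a device register is simply an address on the bus. The following C sketch illustrates the idea; the base address and register layout are hypothetical, invented for illustration, and the volatile qualifier tells the compiler that every access must actually reach the bus rather than be optimized away.

    #include <stdint.h>

    /* Hypothetical serial port whose registers are mapped into the
       processor's address space; the base address and layout are
       made up for this example.                                   */
    #define UART_BASE 0x40000000u

    typedef struct {
        volatile uint32_t status;   /* bit 0: transmitter ready    */
        volatile uint32_t txdata;   /* write one byte here to send */
    } uart_regs_t;

    static uart_regs_t *const uart = (uart_regs_t *)UART_BASE;

    /* An ordinary store reaches the device because the bus's address
       decoder routes this address range to the device, not to RAM. */
    void uart_putc(char c)
    {
        while ((uart->status & 1u) == 0)
            ;                       /* spin until the device is ready */
        uart->txdata = (uint8_t)c;
    }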
The easiest way to implement symmetric multiprocessing was to plug more than one CPU into the shared system bus, an approach used through the 1980s. However, the shared bus quickly became the bottleneck, and more sophisticated connection techniques were explored.

Even in very simple systems, at various times the data bus is driven by the program memory, by RAM, and by I/O devices. To prevent bus contention on the data bus, only one device drives the data bus at any one instant. In very simple systems, only the data bus is required to be a bidirectional bus. In very simple systems, the memory address register always drives the address bus, the control unit always drives the control bus, and an address decoder selects which particular device is allowed to drive the data bus during a given bus cycle. In very simple systems, every instruction cycle starts with a READ memory cycle, in which program memory drives the instruction onto the data bus while the instruction register latches that instruction from the data bus. Some instructions continue with a WRITE memory cycle, in which the memory data register drives data onto the data bus into the chosen RAM or I/O device. Other instructions continue with another READ memory cycle, in which the chosen RAM, program memory, or I/O device drives data onto the data bus while the memory data register latches that data from the data bus.

More complex systems have a multi-master bus: not only do they have many devices that each drive the data bus, but they also have many bus masters that each drive the address bus. In bus-snooping systems the address bus, as well as the data bus, is required to be bidirectional, often implemented as a three-state bus. To prevent bus contention on the address bus, a bus arbiter selects which particular bus master is allowed to drive the address bus during a given bus cycle.

Dual Independent Bus As CPU design evolved into using faster local buses and slower peripheral buses, Intel adopted the dual independent bus (DIB) terminology, using the external front-side bus to the main system memory and the internal back-side bus between one or more CPUs and the CPU caches. This was introduced in the Pentium Pro and Pentium II products in the mid to late 1990s. The primary bus for communicating data between the CPU, main memory, and input and output devices is called the front-side bus; the back-side bus accesses the level 2 cache.

Since 2005/2006, in architectures where four processors share a chipset, the DIB is composed of two buses, each shared between two CPUs. The theoretical bandwidth is doubled compared with a shared front-side bus, up to 12.8 GB/s in the best case. However, the snoop information needed to guarantee the cache coherence of shared data located in different caches has to be broadcast, reducing the available bandwidth. To mitigate this limitation, a snoop filter was inserted in the chipset in order to cache the snoop information.

Modern personal and server computers use higher-performance interconnection technologies such as HyperTransport and Intel QuickPath Interconnect, while the system bus architecture continues to be used on simpler embedded microprocessors. The system bus can even be internal to a single integrated circuit, producing a system-on-a-chip; examples include AMBA, CoreConnect, and Wishbone.

See also Bus (computing) External Bus Interface Expansion bus
System image In computing, a system image is a serialized copy of the entire state of a computer system, stored in some non-volatile form such as a file. A system is said to be capable of using system images if it can be shut down and later restored to exactly the same state. In such cases, system images can be used for backup. Hibernation is an example that uses an image of the entire machine's RAM.

Disk images If a system has all its state written to a disk, then a system image can be produced by simply copying that disk to a file elsewhere, often with disk cloning applications. On many systems a complete system image cannot be created by a disk cloning program running within that system, because information can be held outside of disks and volatile memory, for example in non-volatile memory like boot ROMs.

Process images A process image is a copy of a given process's state at a given point in time. It is often used to create persistence within an otherwise volatile system. A common example is a database management system (DBMS): most DBMSs can store the state of their databases to a file before being closed down (see database dump), and can then be restarted later with the information in the database intact, proceeding as though the software had never stopped. Another example is the hibernate feature of many operating systems: the state of all RAM is stored to disk, the computer is brought into an energy-saving mode, and later restored to normal operation. (A toy sketch of this dump-and-restore pattern appears at the end of this entry.) Some emulators provide a facility to save an image of the system being emulated; in video gaming this is often referred to as a savestate. Another use is code mobility: a mobile agent can migrate between machines by having its state saved, then copying the data to another machine and restarting there.

Programming language support Some programming languages provide a command to take a system image of a program. This is normally a standard feature in Smalltalk (inspired by FLEX) and Lisp, among other languages, and development in these languages is often quite different from that in many other programming languages. For example, in Lisp the programmer may load packages or other code into a running Lisp implementation using the read-eval-print loop, which usually compiles the programs. Data is loaded into the running Lisp system. The programmer may then dump a system image containing that pre-compiled, and possibly customized, code along with all loaded application data. Often this image is an executable, and can be run on other machines. This system image can be the form in which executable programs are distributed; this method has often been used by programs (such as TeX and Emacs) largely implemented in Lisp, Smalltalk, or idiosyncratic languages to avoid spending time repeating the same initialization work every time they start up.

Similarly, Lisp Machines were booted from Lisp images, called Worlds. The World contains the complete operating system, its applications, and its data in a single file. It was also possible to save incremental Worlds, which contain only the changes from some base World. Before saving the World, the Lisp Machine operating system could optimize the contents of memory (better memory layout, compacting data structures, sorting data, and so on).

Although its purpose is different, a "system image" is often similar in structure to a core dump.

See also Disk image ISO image

External links CryoPID — A Process Freezer for Linux
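The process-image idea mentioned above can be reduced to a toy illustration in C: keep all of a program's state in one structure, dump it to a file on exit, and reload it on startup. This is only a sketch; the file name image.bin is arbitrary, and real checkpointing systems such as hibernation or CryoPID must also capture the stack, heap, registers, and open file descriptors, which this example deliberately ignores.

    #include <stdio.h>

    /* Toy "process image": all of the program's state lives in one
       struct, so dumping and restoring that struct gives persistence
       across runs.                                                 */
    struct state {
        long counter;
        double accumulator;
    };

    int main(void)
    {
        struct state st = {0, 0.0};

        /* Restore the previous image, if one exists. */
        FILE *f = fopen("image.bin", "rb");
        if (f) {
            if (fread(&st, sizeof st, 1, f) != 1) {
                st.counter = 0;          /* corrupt image: start fresh */
                st.accumulator = 0.0;
            }
            fclose(f);
        }

        /* ... do some work, mutating the state ... */
        st.counter++;
        st.accumulator += 0.5;
        printf("run %ld, accumulator %.1f\n", st.counter, st.accumulator);

        /* Dump the new image before exiting. */
        f = fopen("image.bin", "wb");
        if (f) {
            fwrite(&st, sizeof st, 1, f);
            fclose(f);
        }
        return 0;
    }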
EmuTOS EmuTOS is a replacement for TOS (the operating system of the Atari ST and its successors), released as free software. It is mainly intended to be used with Atari emulators and clones, such as Hatari or FireBee. EmuTOS provides support for more modern hardware and avoids the use of the old, proprietary TOS, which is usually difficult to obtain. Features and compatibility Unlike the original TOS, the latest EmuTOS can work (sometimes with limited support) on all Atari hardware, even on some Amiga computers, and has support for features not available before: ColdFire CPU, IDE, FAT partitions and emulators' "Native Features" support. Support is lacking for some deprecated OS APIs, though all Line-A API functions are included. By design, EmuTOS lacks support for non-documented OS features. It has some support for the Atari Falcon sound matrix, including DSP support since version 1.1, and while VDI supports 1-, 2-, 4- and 8-bit interleaved graphics modes, support for Atari Falcon (or Amiga) 16-bit resolutions is completely missing. Therefore, certain old games, demos and applications, and also some Falcon-specific software may not work. Releases Release 0.9.1: support for Firebee evaluation boards, 256 colours display for VIDEL systems and XBIOS DMA sound functions. EmuCON2 shell with TAB completion, and renaming of folders was added. A full-featured desktop is now included also with the smallest 192k ROM version. Release 0.9.2 (and its bugfix release 0.9.3): support for SD/MMC Cards, the external IDE connector and poweroff functions on the Firebee platform. CompactFlash can be used, IDE media handling, FAT partition and media change detection were enhanced. Fixes and improvements for EmuTOS-RAM booting, fVDI compatibility and general VDI speed, ACSI and XHDI support (see Atari TOS). Release 0.9.4: compiled with -O2 by default for better performance (except for 192k version), use less RAM and add new variant for ColdFire Evaluation Boards with BaS_gcc ("BIOS"). Desktop can now display text files and move files/folders with Control key. Release 0.9.5: fix issues with STeem emulator hard disk emulation, add Alt+arrow mouse emulation, Pexec mode 7 support, dual keyboard support, user can specify boot partition at startup, recovery from exceptions in user programs, stack initialization on Amiga, translated text object alignment improvements, support for all line-A functions completed. Release 0.9.6: fixes for real TT hardware and full VDI support for Atari TT video and all resolutions. Enable MIDI input, add EmuCON 'mode' command and support for etv_term() function. Many fixes. Release 0.9.7: support for extended MBR partitions, MonSTer board, Eiffel on CAN bus on ColdFire EVB and Apollo Core 68080. FreeMiNT support on non-Atari hardware. Desktop 'Install devices', 'Install icon' and 'Remove desktop icon' features. Standalone version of EmuCON2. Release 1.1: add support for colour icons, colour windows, Falcon DSP, interrupt-driven I/O for MFP and TT-MFP serial ports, improve Nova video card support in several areas, online manual for EmuTOS, support for Hungarian & Turkish languages See also MiNT Atari TOS References External links EmuTOS project - internationalized GPL version of TOS ROMs (based on open-sourced GEM sources Caldera bought from Novell in 1996 along with DR-DOS) EmuTOS source code moved from SourceForge to GitHub after 0.9.7 release Atari ST software Disk operating systems Free software operating systems Atari operating systems GEM software Hobbyist operating systems
BSS BSS may stand for: Computing and telecommunications .bss ("Block Started by Symbol"), in compilers and linkers Backup sync share, an emerging category of software products Base station subsystem, in mobile telephone networks Basic Service Set, the basic building block of a wireless local area network (WLAN) Boeing Satellite Systems, see Boeing Satellite Development Center Blum–Shub–Smale machine, a model of computation Broadcasting Satellite Service, in television Broadcasting System of San-in, Japanese TV station broadcast Business support system, components used by Telecom Service Providers Entertainment Best Selling Secrets, a sitcom BSS 01, a dedicated first-generation home video game console Brain Salad Surgery, a 1973 Emerson, Lake & Palmer album Brave Saint Saturn, an American Christian rock band Broken Social Scene, a Canadian indie rock band Buraka Som Sistema, an electronic dance music project from Portugal Beyond Scared Straight, an A&E television series based on the 1978 film Scared Straight! British Strong Style, a professional wrestling group Media Bangladesh Sangbad Sangstha, the official news agency of Bangladesh Budavári Schönherz Stúdió, an online television and radio station of BUTE Medicine Bismuth subsalicylate, the active ingredient in several medications Bernard–Soulier syndrome, a bleeding disorder Bristol stool scale, a medical aid designed to classify the form of human faeces Balanced salt solution British Sleep Society, a charity that represents sleep health and sleep medicine Organizations Bevara Sverige Svenskt, a Swedish racist movement Biciklisticki Savez Srbije, the cycling federation of Serbia Botanical Society of Scotland, the national learning society for botanists of Scotland Schools Bayridge Secondary School, Canada Bayview Secondary School, Canada Beaconhouse School System, Pakistan Bishop Strachan School, Canada Blessed Sacrament School (disambiguation) Bramalea Secondary School, Canada Brighton Secondary School, Australia Bombay Scottish School, Mahim, India Other uses British Supersport Championship Bachelor of Social Science, an academic degree in social science awarded by a university Bang senseless, a gene of drosophila melanogaster Basic Surgical Skills, a mandatory 3-day practical course provided by the Royal College of Surgeons for all trainee surgeons in the UK and Ireland Baudhayana Shrauta Sutra, a Hindu text Blind signal separation, a method for the separation of a set of signals in math and statistics Blue Shirts Society, a Fascist clique and secret police or para-military force in the Republic of China between 1931 and 1938 Broad Street railway station (England) Broad Street Subway, alternative name for the Broad Street Line, a rapid transit line in Philadelphia BSS Industrial, a British group of companies in the engineering sector
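Of the computing senses above, .bss is the easiest to see concretely: zero-initialized or uninitialized objects with static storage duration are placed in the .bss section, which occupies no space in the object file. A small C illustration (variable names are arbitrary):

```c
#include <stdio.h>

static char zeroed_buffer[1024 * 1024];  /* uninitialized: goes to .bss */
static char initialized[] = "hello";     /* initialized: goes to .data  */

int main(void)
{
    /* .bss contributes only a size field to the binary; the loader
     * allocates and zero-fills it at startup, so the 1 MB buffer
     * does not enlarge the executable on disk. */
    printf("%zu bytes in .bss, first byte = %d\n",
           sizeof zeroed_buffer, zeroed_buffer[0]);
    printf("%s lives in .data\n", initialized);
    return 0;
}
```

On ELF systems, running `size` on the compiled binary shows the bss column grow with the buffer while the file itself stays small.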
Bootloader A bootloader, also spelled boot loader or called boot manager and bootstrap loader, is a computer program that is responsible for booting a computer. When a computer is turned off, its software, including operating systems, application code, and data, remains stored on non-volatile memory. When the computer is powered on, it typically does not have an operating system or its loader in random-access memory (RAM). The computer first executes a relatively small program stored in read-only memory (ROM, and later EEPROM, NOR flash) along with some needed data, to initialize RAM (especially on x86 systems), and to access the nonvolatile device (usually a block device, e.g. NAND flash) or devices from which the operating system programs and data can be loaded into RAM. Some earlier computer systems, upon receiving a boot signal from a human operator or a peripheral device, may load a very small number of fixed instructions into memory at a specific location, initialize at least one CPU, and then point the CPU to the instructions and start their execution. These instructions typically start an input operation from some peripheral device (which may be switch-selectable by the operator). Other systems may send hardware commands directly to peripheral devices or I/O controllers that cause an extremely simple input operation (such as "read sector zero of the system device into memory starting at location 1000") to be carried out, effectively loading a small number of boot loader instructions into memory; a completion signal from the I/O device may then be used to start execution of the instructions by the CPU. Smaller computers often use less flexible but more automatic boot loader mechanisms to ensure that the computer starts quickly and with a predetermined software configuration. In many desktop computers, for example, the bootstrapping process begins with the CPU executing software contained in ROM (for example, the BIOS of an IBM PC or an IBM PC compatible) at a predefined address (some CPUs, including the Intel x86 series, are designed to execute this software after reset without outside help). This software contains rudimentary functionality to search for devices eligible to participate in booting, and to load a small program from a special section (most commonly the boot sector) of the most promising device, typically starting at a fixed entry point such as the start of the sector. First-stage boot loader Boot loaders may face peculiar constraints, especially in size; for instance, on the earlier IBM PC and compatibles, a boot sector should typically work in only 32 KB (later relaxed to 64 KB) of system memory and only use instructions supported by the original 8088/8086 processors. The first stage of PC boot loaders (FSBL, first-stage boot loader) located on fixed disks and removable drives must fit into the first 446 bytes of the Master boot record in order to leave room for the default 64-byte partition table with four partition entries and the two-byte boot signature, which the BIOS requires for a proper boot loader — or even less, when additional features like more than four partition entries (up to 16 with 16 bytes each), a disk signature (6 bytes), a disk timestamp (6 bytes), an Advanced Active Partition (18 bytes) or special multi-boot loaders have to be supported as well in some environments. 
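The 446-byte code area, 64-byte partition table and two-byte boot signature just described map directly onto the classic MBR layout. Here is a hedged C sketch of that on-disk structure, plus the signature check a BIOS performs on sector zero (GCC/Clang packing syntax; field names are illustrative, and a disk-image file stands in for the raw device):

```c
#include <stdio.h>
#include <stdint.h>

/* Classic MBR layout: 446 bytes of first-stage loader code, four
 * 16-byte partition entries, and the two-byte boot signature. */
struct partition_entry {
    uint8_t  status;         /* 0x80 = active (bootable) */
    uint8_t  chs_first[3];   /* CHS address of first sector */
    uint8_t  type;           /* partition type, e.g. 0x07 = NTFS/HPFS */
    uint8_t  chs_last[3];    /* CHS address of last sector */
    uint32_t lba_first;      /* LBA of first sector */
    uint32_t sector_count;   /* number of sectors */
} __attribute__((packed));

struct mbr {
    uint8_t  bootstrap[446];          /* first-stage boot loader code */
    struct partition_entry parts[4];  /* 64-byte partition table */
    uint16_t signature;               /* bytes 0x55 0xAA on disk */
} __attribute__((packed));

_Static_assert(sizeof(struct mbr) == 512, "MBR must be one sector");

int main(void)
{
    struct mbr sector0;
    /* Reading /dev/sda directly would need elevated privileges; a
     * disk-image file behaves identically for experimentation. */
    FILE *f = fopen("disk.img", "rb");
    if (!f || fread(&sector0, sizeof sector0, 1, f) != 1) {
        perror("read sector 0");
        return 1;
    }
    fclose(f);

    if (sector0.signature == 0xAA55)  /* assumes a little-endian host */
        printf("boot signature present: BIOS would try this device\n");
    else
        printf("no boot signature: BIOS would skip this device\n");

    for (int i = 0; i < 4; i++)
        if (sector0.parts[i].status == 0x80)
            printf("partition %d is active, type 0x%02X\n",
                   i, sector0.parts[i].type);
    return 0;
}
```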
In floppy and superfloppy volume boot records, up to 59 bytes are occupied by the extended BIOS parameter block on FAT12 and FAT16 volumes since DOS 4.0, whereas the FAT32 EBPB introduced with DOS 7.1 requires 87 bytes, leaving only 423 bytes for the boot loader when assuming a sector size of 512 bytes. Microsoft boot sectors therefore traditionally imposed certain restrictions on the boot process; for example, the boot file had to be located at a fixed position in the root directory of the file system and stored as consecutive sectors, conditions taken care of by the SYS command and slightly relaxed in later versions of DOS. The boot loader was then able to load the first three sectors of the file into memory, which happened to contain another embedded boot loader able to load the remainder of the file into memory. When Microsoft added LBA and FAT32 support, they switched to a boot loader spanning two physical sectors and using 386 instructions, for size reasons. At the same time, other vendors managed to squeeze much more functionality into a single boot sector without relaxing the original constraints of minimal available memory (32 KB) and processor support (8088/8086). For example, DR-DOS boot sectors are able to locate the boot file in the FAT12, FAT16 and FAT32 file system, and load it into memory as a whole via CHS or LBA, even if the file is not stored in a fixed location and in consecutive sectors. BIOS and UEFI not only load the operating system from a non-volatile device, they also initialize system hardware for the operating system. Examples of first-stage bootloaders include BIOS, coreboot, Libreboot and Das U-Boot. Second-stage boot loader Second-stage boot loaders, such as GNU GRUB, rEFInd, BOOTMGR, Syslinux, NTLDR or iBoot, are not themselves operating systems, but are able to load an operating system properly and transfer execution to it; the operating system subsequently initializes itself and may load extra device drivers. The second-stage boot loader does not need drivers for its own operation, but may instead use generic storage access methods provided by system firmware such as the BIOS or Open Firmware, though typically with restricted hardware functionality and lower performance. Many boot loaders can be configured to give the user multiple booting choices. These choices can include different operating systems (for dual or multi-booting from different partitions or drives), different versions of the same operating system (in case a new version has unexpected problems), different operating system loading options (e.g., booting into a rescue or safe mode), and some standalone programs that can function without an operating system, such as memory testers (e.g., memtest86+), a basic shell (as in GNU GRUB), or even games (see List of PC Booter games). Some boot loaders can also load other boot loaders; for example, GRUB loads BOOTMGR instead of loading Windows directly. Usually, a default choice is preselected with a time delay during which a user can press a key to change the choice; after this delay, the default choice is automatically run so normal booting can occur without interaction. The boot process can be considered complete when the computer is ready to interact with the user, or the operating system is capable of running system programs or application programs. Many embedded systems must boot immediately. For example, waiting a minute for a digital television or a GPS navigation device to start is generally unacceptable. 
Therefore, such devices have software systems in ROM or flash memory so the device can begin functioning immediately; little or no loading is necessary, because the loading can be precomputed and stored on the ROM when the device is made. Large and complex systems may have boot procedures that proceed in multiple phases until finally the operating system and other programs are loaded and ready to execute. Because operating systems are designed as if they never start or stop, a boot loader might load the operating system, configure itself as a mere process within that system, and then irrevocably transfer control to the operating system. The boot loader then terminates normally as any other process would. Network booting Most computers are also capable of booting over a computer network. In this scenario, the operating system is stored on the disk of a server, and certain parts of it are transferred to the client using a simple protocol such as the Trivial File Transfer Protocol (TFTP). After these parts have been transferred, the operating system takes over the control of the booting process. As with the second-stage boot loader, network booting begins by using generic network access methods provided by the network interface's boot ROM, which typically contains a Preboot Execution Environment (PXE) image. No drivers are required, but the system functionality is limited until the operating system kernel and drivers are transferred and started. As a result, once the ROM-based booting has completed it is entirely possible to network boot into an operating system that itself does not have the ability to use the network interface. See also Comparison of boot loaders Notes References External links Bootloader - OSDev Wiki Boot loaders
PWB/UNIX The Programmer's Workbench (PWB/UNIX) is an early, now discontinued, version of the Unix operating system that was created in the Bell Labs Computer Science Research Group of AT&T. Its stated goal was to provide a time-sharing working environment for large groups of programmers writing software for larger batch-processing computers. Prior to 1973, Unix development at AT&T was a project of a small group of researchers in Department 1127 of Bell Labs. As the usefulness of Unix in other departments of Bell Labs became evident, the company decided to develop a version of Unix tailored to support programmers in production work, not just research. The Programmer's Workbench was started in 1973, by Evan Ivie and Rudd Canaday, to support a computer center for a 1000-employee Bell Labs division, which would be the largest Unix site for several years. PWB/UNIX was to provide tools for teams of programmers to manage their source code and collaborate on projects with other team members. It also introduced several stability improvements beyond Research Unix, and broadened usage of the Research nroff and troff text formatters via efforts with Bell Labs typing pools that led to the -mm macros. While PWB users managed their source code on PDP-11 Unix systems, programs were often written to run on other legacy operating systems. For this reason, PWB included software for submitting jobs to IBM System/370, UNIVAC 1100 series, and XDS Sigma 5 computers. In 1977 PWB supported a user community of about 1100 users in the Business Information Systems Programs (BISP) group of Bell Labs. Two major releases of Programmer's Workbench were produced. PWB/UNIX 1.0, released July 1, 1977, was based on Version 6 Unix; PWB 2.0 was based on Version 7 Unix. The operating system was advertised by Bell System Software as late as 1981, and edition 1.0 was still on an AT&T price list for educational institutions in 1984. Most of PWB/UNIX was later incorporated in the commercial UNIX System III and UNIX System V releases. Features Notable firsts in PWB include: The Source Code Control System, the first revision control system, written by Marc J. Rochkind The remote job entry batch-submission system The PWB shell, written by John R. Mashey, which preceded Steve Bourne's Bourne shell The restricted shell (rsh), an option of the PWB shell, used to create widely available logins for status-checking and trouble-reporting, made safe by restricting commands The troff -mm (memorandum) macro package, written by John R. Mashey and Dale W. Smith Utilities like find, cpio, expr, all three written by Dick Haight, xargs, egrep and fgrep yacc and lex, which, though not written specifically for PWB, were available outside of Bell Labs for the first time in the PWB distribution See also Research Unix Writer's Workbench ("WWB") References External links Unix ad mentioning PWB, from a 1981 issue of Datamation (on Dennis Ritchie's homepage) PWB distributions, from the Ancient UNIX Archive Unix history Unix variants Bell Labs Unices
NTFS New Technology File System (NTFS) is a proprietary journaling file system developed by Microsoft. Starting with Windows NT 3.1, it is the default file system of the Windows NT family. It superseded File Allocation Table (FAT) as the preferred filesystem on Windows and is supported in Linux and BSD as well. NTFS reading and writing support is provided using a free and open-source kernel implementation known as NTFS3 in Linux and the NTFS-3G driver in BSD. Windows can convert FAT32/16/12 into NTFS without the need to rewrite all files. NTFS uses several files, typically hidden from the user, to store metadata about other files stored on the drive, which can help improve speed and performance when reading data. Unlike FAT and High Performance File System (HPFS), NTFS supports access control lists (ACLs), filesystem encryption, transparent compression, sparse files and file system journaling. NTFS also supports shadow copy to allow backups of a system while it is running, but the functionality of the shadow copies varies between different versions of Windows. History In the mid-1980s, Microsoft and IBM formed a joint project to create the next generation of graphical operating system; the result was OS/2 and HPFS. Because Microsoft disagreed with IBM on many important issues, they eventually separated; OS/2 remained an IBM project and Microsoft worked to develop Windows NT and NTFS. The HPFS file system for OS/2 contained several important new features. When Microsoft created their new operating system, they "borrowed" many of these concepts for NTFS. The original NTFS developers were Tom Miller, Gary Kimura, Brian Andrew, and David Goebel. Probably as a result of this common ancestry, HPFS and NTFS use the same disk partition identification type code (07). Using the same Partition ID Record Number is highly unusual, since there were dozens of unused code numbers available, and other major file systems have their own codes. For example, FAT has more than nine (one each for FAT12, FAT16, FAT32, etc.). Algorithms identifying the file system in a partition type 07 must perform additional checks to distinguish between HPFS and NTFS. Versions Microsoft has released five versions of NTFS. The version number (e.g. v5.0 in Windows 2000) is based on the operating system version; it should not be confused with the NTFS version number (v3.1 since Windows XP). Although subsequent versions of Windows added new file system-related features, they did not change NTFS itself. For example, Windows Vista implemented NTFS symbolic links, Transactional NTFS, partition shrinking, and self-healing. NTFS symbolic links are a new feature in the file system; all the others are new operating system features that make use of NTFS features already in place. Scalability NTFS is optimized for 4 KB clusters, but supports a maximum cluster size of 2MB (earlier implementations support up to 64KB). The maximum NTFS volume size that the specification can support is 2^64 - 1 clusters, but not all implementations achieve this theoretical maximum, as discussed below. The maximum NTFS volume size implemented in Windows XP Professional is 2^32 - 1 clusters, partly due to partition table limitations. For example, using 64KB clusters, the maximum size Windows XP NTFS volume is 256TB minus 64KB. Using the default cluster size of 4KB, the maximum NTFS volume size is 16TB minus 4KB. Both of these are vastly higher than the 128GB limit in Windows XP SP1. 
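The volume-size ceilings quoted above are simple products of the implemented cluster count and the cluster size. A quick check in C, using the 2^32 - 1 cluster limit from the text:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Windows XP implements at most 2^32 - 1 clusters per NTFS volume. */
    uint64_t max_clusters = (1ULL << 32) - 1;

    uint64_t with_4k  = max_clusters * 4  * 1024;  /* default cluster  */
    uint64_t with_64k = max_clusters * 64 * 1024;  /* largest classic  */

    printf("4 KB clusters : %llu bytes (= 16 TB - 4 KB)\n",
           (unsigned long long)with_4k);
    printf("64 KB clusters: %llu bytes (= 256 TB - 64 KB)\n",
           (unsigned long long)with_64k);
    return 0;
}
```

The two printed values match the 16TB-minus-4KB and 256TB-minus-64KB figures in the text, since (2^32 - 1) x cluster size is one cluster short of a power-of-two total.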
Because partition tables on master boot record (MBR) disks support only partition sizes up to 2TB, multiple GUID Partition Table (GPT or "dynamic") volumes must be combined to create a single NTFS volume larger than 2TB. Booting from a GPT volume to a Windows environment in a Microsoft-supported way requires a system with Unified Extensible Firmware Interface (UEFI) and 64-bit support. The NTFS maximum theoretical limit on the size of individual files is 16EB (2^64 bytes) minus 1KB, which totals 18,446,744,073,709,550,592 bytes. With Windows 10 version 1709 and Windows Server 2019, the maximum implemented file size is 8PB minus 2MB, or 9,007,199,252,643,840 bytes. Interoperability Windows While the different NTFS versions are for the most part fully forward- and backward-compatible, there are technical considerations for mounting newer NTFS volumes in older versions of Microsoft Windows. This affects dual-booting, and external portable hard drives. For example, attempting to use an NTFS partition with "Previous Versions" (Volume Shadow Copy) on an operating system that does not support it will result in the contents of those previous versions being lost. A Windows command-line utility called convert.exe can convert supporting file systems to NTFS, including HPFS (only on Windows NT 3.1, 3.5, and 3.51), FAT16 and FAT32 (on Windows 2000 and later). FreeBSD FreeBSD 3.2, released in May 1999, included read-only NTFS support written by Semen Ustimenko. This implementation was ported to NetBSD by Christos Zoulas and Jaromir Dolecek and released with NetBSD 1.5 in December 2000. The FreeBSD implementation of NTFS was also ported to OpenBSD by Julien Bordet and offers native read-only NTFS support by default on i386 and amd64 platforms as of version 4.9, released 1 May 2011. Linux Linux kernel versions 2.1.74 and later include a driver written by Martin von Löwis which has the ability to read NTFS partitions; kernel versions 2.5.11 and later contain a new driver written by Anton Altaparmakov (University of Cambridge) and Richard Russon which supports reading files. The ability to write to files was introduced with kernel version 2.6.15 in 2006, which allows users to write to existing files but does not allow the creation of new ones. Paragon's NTFS driver (see below) has been merged into kernel version 5.15, and it supports read/write on normal, compressed and sparse files, as well as journal replaying. NTFS-3G is a free GPL-licensed FUSE implementation of NTFS that was initially developed as a Linux kernel driver by Szabolcs Szakacsits. It was re-written as a FUSE program to work on other systems that FUSE supports like macOS, FreeBSD, NetBSD, OpenBSD, Solaris, QNX, and Haiku and allows reading and writing to NTFS partitions. A performance-enhanced commercial version of NTFS-3G, called "Tuxera NTFS for Mac", is also available from the NTFS-3G developers. Captive NTFS, a 'wrapping' driver that uses Windows' own driver, exists for Linux. It was built as a Filesystem in Userspace (FUSE) program and released under the GPL, but work on Captive NTFS ceased in 2006. Linux kernel versions 5.15 onwards carry NTFS3, a fully functional NTFS read-write driver which works on NTFS versions up to 3.1 and is maintained primarily by the Paragon Software Group. Mac OS Mac OS X 10.3 included Ustimenko's read-only implementation of NTFS from FreeBSD. Then in 2006 Apple hired Anton Altaparmakov to write a new NTFS implementation for Mac OS X 10.6. 
Native NTFS write support is included in 10.6 and later, but is not activated by default, although workarounds do exist to enable the functionality. However, user reports indicate the functionality is unstable and tends to cause kernel panics. Paragon Software Group sells a read-write driver named NTFS for Mac OS X, which is also included on some models of Seagate hard drives. OS/2 The NetDrive package for OS/2 (and derivatives such as eComStation and ArcaOS) supports a plugin which allows read and write access to NTFS volumes. DOS There is a free-for-personal-use read/write driver for MS-DOS by Avira called "NTFS4DOS". Ahead Software developed an "NTFSREAD" driver (version 1.200) for DR-DOS 7.0x between 2002 and 2004. It was part of their Nero Burning ROM software. Security NTFS uses access control lists and user-level encryption to help secure user data. Access control lists (ACLs) In NTFS, each file or folder is assigned a security descriptor that defines its owner and contains two access control lists (ACLs). The first ACL, called discretionary access control list (DACL), defines exactly what type of interactions (e.g. reading, writing, executing or deleting) are allowed or forbidden by which user or groups of users. For example, files in a folder may be read and executed by all users but modified only by a user holding administrative privileges. Windows Vista adds mandatory access control information to DACLs. DACLs are the primary focus of User Account Control in Windows Vista and later. The second ACL, called system access control list (SACL), defines which interactions with the file or folder are to be audited and whether they should be logged when the activity is successful, failed or both. For example, auditing can be enabled on sensitive files of a company, so that its managers get to know when someone tries to delete them or make a copy of them, and whether he or she succeeds. Encryption Encrypting File System (EFS) provides user-transparent encryption of any file or folder on an NTFS volume. EFS works in conjunction with the EFS service, Microsoft's CryptoAPI and the EFS File System Run-Time Library (FSRTL). EFS works by encrypting a file with a bulk symmetric key (also known as the File Encryption Key, or FEK), which is used because it takes far less time to encrypt and decrypt large amounts of data than an asymmetric key cipher would. The symmetric key that is used to encrypt the file is then encrypted with a public key that is associated with the user who encrypted the file, and this encrypted data is stored in an alternate data stream of the encrypted file. To decrypt the file, the file system uses the private key of the user to decrypt the symmetric key that is stored in the data stream. It then uses the symmetric key to decrypt the file. Because this is done at the file system level, it is transparent to the user. Also, in case of a user losing access to their key, support for additional decryption keys has been built into the EFS system, so that a recovery agent can still access the files if needed. NTFS-provided encryption and NTFS-provided compression are mutually exclusive; however, NTFS can be used for one and a third-party tool for the other. Support for EFS is not available in Basic, Home, and MediaCenter versions of Windows, and must be activated after installation of Professional, Ultimate, and Server versions of Windows or by using enterprise deployment tools within Windows domains. 
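EFS's combination of a bulk symmetric FEK with a per-user public key is classic envelope (hybrid) encryption. The C sketch below shows only that data flow; the XOR "ciphers" are insecure stand-ins so the example runs end to end, and none of the names correspond to Windows APIs.

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-in for real ciphers -- XOR only, NOT secure; it exists
 * solely so the envelope-encryption data flow below runs end to end. */
static void xor_cipher(const uint8_t *key, size_t klen,
                       const uint8_t *in, size_t n, uint8_t *out)
{
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] ^ key[i % klen];
}

int main(void)
{
    const uint8_t user_key[16] = "user-public-key";  /* placeholder    */
    uint8_t fek[16] = "random-file-key";             /* fresh per file */

    const char *plain = "file contents";
    uint8_t cipher[64], wrapped_fek[16];
    size_t n = strlen(plain);

    /* 1. Bulk-encrypt the file with the symmetric FEK (the fast step). */
    xor_cipher(fek, sizeof fek, (const uint8_t *)plain, n, cipher);
    /* 2. Wrap the FEK with the user's (public) key; EFS stores this
     *    wrapped key in an alternate data stream of the file.         */
    xor_cipher(user_key, sizeof user_key, fek, sizeof fek, wrapped_fek);

    /* Decryption: unwrap the FEK with the private key, then decrypt. */
    uint8_t fek2[16], back[64];
    xor_cipher(user_key, sizeof user_key, wrapped_fek, sizeof fek2, fek2);
    xor_cipher(fek2, sizeof fek2, cipher, n, back);
    printf("round trip: %.*s\n", (int)n, back);
    return 0;
}
```

The design point mirrors the text: the expensive asymmetric operation touches only the 16-byte key, never the (potentially huge) file contents.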
Features Journaling NTFS is a journaling file system and uses the NTFS Log ($LogFile) to record metadata changes to the volume. Journaling is a feature that FAT does not provide; it is critical for NTFS to ensure that its complex internal data structures remain consistent in case of system crashes or data moves performed by the defragmentation API, and it allows easy rollback of uncommitted changes to these critical data structures when the volume is remounted. Notably affected structures are the volume allocation bitmap, modifications to MFT records such as moves of some variable-length attributes stored in MFT records and attribute lists, and indices for directories and security descriptors. The $LogFile format has evolved through several versions. The incompatibility of the $LogFile versions implemented by Windows 8.1 and Windows 10 prevents Windows 8 (and earlier versions of Windows) from recognizing version 2.0 of the $LogFile. Backward compatibility is provided by downgrading the $LogFile to version 1.1 when an NTFS volume is cleanly dismounted. It is again upgraded to version 2.0 when mounting on a compatible version of Windows. However, when hibernating to disk in the logoff state (a.k.a. Hybrid Boot or Fast Boot, which is enabled by default), mounted file systems are not dismounted, and thus the $LogFiles of any active file systems are not downgraded to version 1.1. The inability to process version 2.0 of the $LogFile by versions of Windows older than 8.1 results in an unnecessary invocation of the CHKDSK disk repair utility. This is particularly a concern in a multi-boot scenario involving pre- and post-8.1 versions of Windows, or when frequently moving a storage device between older and newer versions. A Windows Registry setting exists to prevent the automatic upgrade of the $LogFile to the newer version. The problem can also be dealt with by disabling Hybrid Boot. The USN Journal (Update Sequence Number Journal) is a system management feature that records (in $Extend\$UsnJrnl) changes to files, streams and directories on the volume, as well as their various attributes and security settings. The journal is made available for applications to track changes to the volume. This journal can be enabled or disabled on non-system volumes. Hard links The hard link feature allows different file names to directly refer to the same file contents. Hard links may link only to files in the same volume, because each volume has its own MFT. Hard links were originally included to support the POSIX subsystem in Windows NT. Although hard links use the same MFT record (inode), which records file metadata such as file size, modification date, and attributes, NTFS also caches this data in the directory entry as a performance enhancement. This means that when listing the contents of a directory using the FindFirstFile/FindNextFile family of APIs (equivalent to the POSIX opendir/readdir APIs), you will also receive this cached information, in addition to the name and inode. However, you may not see up-to-date information, as this information is only guaranteed to be updated when a file is closed, and then only for the directory from which the file was opened. This means that where a file has multiple names via hard links, updating the file via one name does not update the cached data associated with the other name. You can always obtain up-to-date data using GetFileInformationByHandle (which is the true equivalent of the POSIX stat function). 
This can be done using a handle which has no access to the file itself (passing zero to CreateFile for dwDesiredAccess), and closing this handle has the incidental effect of updating the cached information. Windows uses hard links to support short (8.3) filenames in NTFS. Operating system support is needed because there are legacy applications that can work only with 8.3 filenames, but support can be disabled. In this case, an additional filename record and directory entry is added, but both 8.3 and long file name are linked and updated together, unlike a regular hard link. The NTFS file system has a limit of 1024 hard links on a file. Alternate data stream (ADS) Alternate data streams allow more than one data stream to be associated with a filename (a fork), using the format "filename:streamname" (e.g., "text.txt:extrastream"). NTFS streams were introduced in Windows NT 3.1, to enable Services for Macintosh (SFM) to store resource forks. Although current versions of Windows Server no longer include SFM, third-party Apple Filing Protocol (AFP) products (such as GroupLogic's ExtremeZ-IP) still use this feature of the file system. Very small ADSs (named "Zone.Identifier") are added by Internet Explorer and recently by other browsers to mark files downloaded from external sites as possibly unsafe to run; the local shell would then require user confirmation before opening them. When the user indicates that they no longer want this confirmation dialog, this ADS is deleted. Alternate streams are not listed in Windows Explorer, and their size is not included in the file's size. When the file is copied or moved to another file system without ADS support, the user is warned that alternate data streams cannot be preserved. No such warning is typically provided if the file is attached to an e-mail, or uploaded to a website. Thus, using alternate streams for critical data may cause problems. Microsoft provides a tool called Streams to view streams on a selected volume. Starting with Windows PowerShell 3.0, it is possible to manage ADS natively with six cmdlets: Add-Content, Clear-Content, Get-Content, Get-Item, Remove-Item, Set-Content. Malware has used alternate data streams to hide code. As a result, malware scanners and other special tools now check for alternate data streams. File compression Compression is enabled on a per-folder or per-file basis, and files may be compressed or decompressed individually (via changing the advanced attributes). When compression is set on a folder, any files moved or saved to that folder will be automatically compressed. Files are compressed using the LZNT1 algorithm (a variant of LZ77). Since Windows 10, Microsoft has introduced additional algorithms, namely XPRESS4K/8K/16K and LZX. Both algorithms are based on LZ77 with Huffman entropy coding, which LZNT1 lacked. These algorithms were taken from the Windows Imaging Format. They are mainly used for the new CompactOS feature, which compresses the entire system partition using one of these algorithms. They can also be turned on manually per file with a command-line flag. When used on files, the CompactOS algorithm avoids fragmentation by writing compressed data in contiguously allocated chunks. Files are compressed in 16-cluster chunks. With 4 KB clusters, files are compressed in 64 KB chunks. The compression algorithms in NTFS are designed to support cluster sizes of up to 4 KB. When the cluster size is greater than 4 KB on an NTFS volume, NTFS compression is not available. 
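Per-file compression is toggled through the documented FSCTL_SET_COMPRESSION control code. A hedged Win32 sketch follows (the path is an example; this assumes an NTFS volume with a cluster size of 4 KB or less):

```c
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    /* Open the file with read/write access, as the FSCTL requires. */
    HANDLE h = CreateFileA("C:\\temp\\big.log",
                           GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "open failed: %lu\n", GetLastError());
        return 1;
    }

    USHORT format = COMPRESSION_FORMAT_DEFAULT;   /* LZNT1 */
    DWORD returned = 0;
    if (!DeviceIoControl(h, FSCTL_SET_COMPRESSION,
                         &format, sizeof format,
                         NULL, 0, &returned, NULL))
        fprintf(stderr, "FSCTL_SET_COMPRESSION failed: %lu\n",
                GetLastError());
    else
        puts("file is now stored compressed");

    CloseHandle(h);
    return 0;
}
```

Passing COMPRESSION_FORMAT_NONE instead decompresses the file; setting the same control code on a directory handle makes new files in that folder compressed by default, matching the per-folder behavior described above.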
Advantages Users of fast multi-core processors will find improvements in application speed by compressing their applications and data, as well as a reduction in space used. Note that SSDs with SandForce controllers already compress data themselves; even so, since less data is transferred, there is a reduction in I/Os. Compression works best with files that have repetitive content, are seldom written, are usually accessed sequentially, and are not themselves compressed. Single-user systems with limited hard disk space can benefit from NTFS compression for small files, from 4KB to 64KB or more, depending on compressibility. Files smaller than approximately 900 bytes are stored within the directory entry of the MFT. Disadvantages Large compressible files become highly fragmented since every chunk smaller than 64KB becomes a fragment. Flash memory, such as SSD drives, does not have the head-movement delays and high access times of mechanical hard disk drives, so fragmentation carries only a smaller penalty. Maximum Compression Size According to research by Microsoft's NTFS Development team, 50–60GB is a reasonable maximum size for a compressed file on an NTFS volume with a 4KB (default) cluster (block) size. This reasonable maximum size decreases sharply for volumes with smaller cluster sizes. If compression reduces 64KB of data to 60KB or less, NTFS treats the unneeded 4KB pages like empty sparse file clusters; they are not written. This allows for reasonable random-access times, as the OS merely has to follow the chain of fragments. Boot failures If system files that are needed at boot time (such as drivers, NTLDR, winload.exe, or BOOTMGR) are compressed, the system may fail to boot correctly, because decompression filters are not yet loaded. Later editions of Windows do not allow important system files to be compressed. Sparse files Sparse files are files interspersed with empty segments for which no actual storage space is used. To applications, the file looks like an ordinary file with empty regions seen as regions filled with zeros. A sparse file does not necessarily include sparse zero areas; the "sparse file" attribute just means that the file is allowed to have them. Database applications, for instance, may use sparse files. As with compressed files, the actual sizes of sparse files are not taken into account when determining quota limits. Volume Shadow Copy The Volume Shadow Copy Service (VSS) keeps historical versions of files and folders on NTFS volumes by copying old, newly overwritten data to a shadow copy via a copy-on-write technique. The user may later request an earlier version to be recovered. This also allows data backup programs to archive files currently in use by the file system. Windows Vista also introduced persistent shadow copies for use with System Restore and Previous Versions features. Persistent shadow copies, however, are deleted when an older operating system mounts that NTFS volume. This happens because the older operating system does not understand the newer format of persistent shadow copies. Transactions As of Windows Vista, applications can use Transactional NTFS (TxF) to group multiple changes to files together into a single transaction. The transaction will guarantee that either all of the changes happen, or none of them do, and that no application outside the transaction will see the changes until they are committed. It uses similar techniques to those used for Volume Shadow Copies (i.e. 
copy-on-write) to ensure that overwritten data can be safely rolled back, and a CLFS log to mark the transactions that have still not been committed, or those that have been committed but still not fully applied (in case of system crash during a commit by one of the participants). Transactional NTFS does not restrict transactions to just the local NTFS volume, but also includes other transactional data or operations in other locations such as data stored in separate volumes, the local registry, or SQL databases, or the current states of system services or remote services. These transactions are coordinated network-wide with all participants using a specific service, the DTC, to ensure that all participants will receive the same commit state, and to transport the changes that have been validated by any participant (so that the others can invalidate their local caches for old data or roll back their ongoing uncommitted changes). Transactional NTFS allows, for example, the creation of network-wide consistent distributed file systems, including with their local live or offline caches. Microsoft now advises against using TxF: "Microsoft strongly recommends developers utilize alternative means" since "TxF may not be available in future versions of Microsoft Windows". Quotas Disk quotas were introduced in NTFS v3. They allow the administrator of a computer that runs a version of Windows that supports NTFS to set a threshold of disk space that users may use. It also allows administrators to keep track of how much disk space each user is using. An administrator may specify a certain level of disk space that a user may use before they receive a warning, and then deny access to the user once they hit their upper limit of space. Disk quotas do not take into account NTFS's transparent file compression, should this be enabled. Applications that query the amount of free space will also see the amount of free space left to the user who has a quota applied to them. Reparse points Introduced in NTFS v3, NTFS reparse points are used by associating a reparse tag in the user space attribute of a file or directory. Microsoft includes several default tags, including symbolic links, directory junction points and volume mount points. When the Object Manager parses a file system name lookup and encounters a reparse attribute, it will reparse the name lookup, passing the user-controlled reparse data to every file system filter driver that is loaded into Windows. Each filter driver examines the reparse data to see whether it is associated with that reparse point, and if that filter driver determines a match, then it intercepts the file system request and performs its special functionality. Limitations Resizing Starting with Windows Vista, Microsoft added the built-in ability to shrink or expand a partition. However, this ability does not relocate page file fragments or files that have been marked as unmovable, so shrinking a volume will often require relocating or disabling any page file, the index of Windows Search, and any Shadow Copy used by System Restore. Various third-party tools are capable of resizing NTFS partitions. OneDrive Since 2017, Microsoft requires the OneDrive file structure to reside on an NTFS disk. 
This is because the OneDrive Files On-Demand feature uses NTFS reparse points to link files and folders that are stored in OneDrive to the local filesystem, making the file or folder unusable with any previous version of Windows, with any other NTFS file system driver, or with any file system and backup utilities not updated to support it. Structure NTFS is made up of several components including: a partition boot sector (PBS) that holds boot information; the master file table that stores a record of all files and folders in the filesystem; a series of metafiles that help structure metadata more efficiently; data streams and locking mechanisms. Internally, NTFS uses B-trees to index file system data. A file system journal is used to guarantee the integrity of the file system metadata but not individual files' content. Systems using NTFS are known to have improved reliability compared to FAT file systems. NTFS allows any sequence of 16-bit values for name encoding (e.g. file names, stream names or index names) except 0x0000. This means UTF-16 code units are supported, but the file system does not check whether a sequence is valid UTF-16 (it allows any sequence of short values, not restricted to those in the Unicode standard). In the Win32 namespace, names are case-insensitive, whereas in the POSIX namespace they are case-sensitive. File names are limited to 255 UTF-16 code units. Certain names are reserved in the volume root directory and cannot be used for files. These are $MFT, $MFTMirr, $LogFile, $Volume, $AttrDef, . (dot), $Bitmap, $Boot, $BadClus, $Secure, $UpCase, and $Extend. . (dot) and $Extend are both directories; the others are files. The NT kernel limits full paths to 32,767 UTF-16 code units. There are some additional restrictions on code points and file names. Partition Boot Sector (PBS) This boot sector format is roughly based upon the earlier FAT filesystem, but the fields are in different locations. Some of these fields, especially the "sectors per track", "number of heads" and "hidden sectors" fields, may contain dummy values on drives where they either do not make sense or are not determinable. The OS first looks at the 8 bytes at 0x30 to find the cluster number of the $MFT, then multiplies that number by the number of sectors per cluster (1 byte found at 0x0D). This value is the sector offset (LBA) to the $MFT, which is described below. Master File Table In NTFS, all file, directory and metafile data—file name, creation date, access permissions (by the use of access control lists), and size—are stored as metadata in the Master File Table (MFT). This abstract approach allowed easy addition of file system features during Windows NT's development—an example is the addition of fields for indexing used by the Active Directory software. This also enables fast file search software to locate named local files and folders included in the MFT very quickly, without requiring any other index. The MFT structure supports algorithms which minimize disk fragmentation. A directory entry consists of a filename and a "file ID" (analogous to the inode number), which is the record number representing the file in the Master File Table. The file ID also contains a reuse count to detect stale references. While this strongly resembles the W_FID of Files-11, other NTFS structures radically differ. A partial copy of the MFT, called the MFT mirror, is stored to be used in case of corruption. If the first record of the MFT is corrupted, NTFS reads the second record to find the MFT mirror file. 
Locations for both files are stored in the boot sector. Metafiles NTFS contains several files that define and organize the file system. In all respects, most of these files are structured like any other user file ($Volume being the most peculiar), but are not of direct interest to file system clients. These metafiles define files, back up critical file system data, buffer file system changes, manage free space allocation, satisfy BIOS expectations, track bad allocation units, and store security and disk space usage information. All content is in an unnamed data stream, unless otherwise indicated. These metafiles are treated specially by Windows, handled directly by the NTFS.SYS driver, and are difficult to view directly: special purpose-built tools are needed. As of Windows 7, the NTFS driver completely prohibits user access, resulting in a BSoD whenever an attempt to execute a metadata file is made. One such tool is nfi.exe ("NTFS File Sector Information Utility"), which is freely distributed as part of the Microsoft "OEM Support Tools". For example, to obtain information on the "$MFT" Master File Table segment, the following command is used: nfi.exe c:\$MFT Another way to bypass the restriction is to use 7-Zip's file manager and go to the low-level NTFS path \\.\X:\ (where X:\ stands for any drive/partition). Here, 3 new folders will appear: $EXTEND, [DELETED] (a pseudo-folder that 7-Zip uses to attach files deleted from the file system to view), and [SYSTEM] (another pseudo-folder that contains all the NTFS metadata files). This trick can be used from removable devices (USB flash drives, external hard drives, SD Cards, etc.) inside Windows, but doing this on the active partition requires offline access (namely WinRE). Attribute lists, attributes, and streams For each file (or directory) described in the MFT record, there is a linear repository of stream descriptors (also named attributes), packed together in one or more MFT records (containing the so-called attributes list), with extra padding to fill the fixed 1 KB size of every MFT record, and that fully describes the effective streams associated with that file. Each attribute has an attribute type (a fixed-size integer mapping to an attribute definition in file $AttrDef), an optional attribute name (for example, used as the name for an alternate data stream), and a value, represented in a sequence of bytes. For NTFS, the standard data of files, the alternate data streams, or the index data for directories are stored as attributes. According to $AttrDef, some attributes can be either resident or non-resident. The $DATA attribute, which contains file data, is such an example. When the attribute is resident (which is represented by a flag), its value is stored directly in the MFT record. Otherwise, clusters are allocated for the data, and the cluster location information is stored as data runs in the attribute. For each file in the MFT, the attributes, identified by the pair (attribute type, attribute name), must be unique. Additionally, NTFS has some ordering constraints for these attributes. There is a predefined null attribute type, used to indicate the end of the list of attributes in one MFT record. It must be present as the last attribute in the record (all other storage space available after it will be ignored and just consists of padding bytes to match the record size in the MFT). Some attribute types are required and must be present in each MFT record, except unused records that are just indicated by null attribute types. 
This is the case for the $STANDARD_INFORMATION attribute that is stored as a fixed-size record and contains the timestamps and other basic single-bit attributes (compatible with those managed by FAT in DOS or Windows 9x). Some attribute types cannot have a name and must remain anonymous. This is the case for the standard attributes, or for the preferred NTFS "filename" attribute type, or the "short filename" attribute type, when it is also present (for compatibility with DOS-like applications, see below). It is also possible for a file to contain only a short filename, in which case it will be the preferred one, as listed in the Windows Explorer. The filename attributes stored in the attribute list do not make the file immediately accessible through the hierarchical file system. In fact, all the filenames must be indexed separately in at least one other directory on the same volume. There it must have its own MFT record and its own security descriptors and attributes that reference the MFT record number for this file. This allows the same file or directory to be "hardlinked" several times from several containers on the same volume, possibly with distinct filenames. The default data stream of a regular file is a stream of type $DATA but with an anonymous name, and the ADSs are similar but must be named. On the other hand, the default data stream of directories has a distinct type, but are not anonymous: they have an attribute name ("$I30" in NTFS 3+) that reflects its indexing format. All attributes of a given file may be displayed by using the nfi.exe ("NTFS File Sector Information Utility") that is freely distributed as part of the Microsoft "OEM Support Tools". Windows system calls may handle alternate data streams. Depending on the operating system, utility and remote file system, a file transfer might silently strip data streams. A safe way of copying or moving files is to use the BackupRead and BackupWrite system calls, which allow programs to enumerate streams, to verify whether each stream should be written to the destination volume and to knowingly skip unwanted streams. Resident vs. non-resident attributes To optimize the storage and reduce the I/O overhead for the very common case of attributes with very small associated value, NTFS prefers to place the value within the attribute itself (if the size of the attribute does not then exceed the maximum size of an MFT record), instead of using the MFT record space to list clusters containing the data; in that case, the attribute will not store the data directly but will just store an allocation map (in the form of data runs) pointing to the actual data stored elsewhere on the volume. When the value can be accessed directly from within the attribute, it is called "resident data" (by computer forensics workers). The amount of data that fits is highly dependent on the file's characteristics, but 700 to 800 bytes is common in single-stream files with non-lengthy filenames and no ACLs. Some attributes (such as the preferred filename, the basic file attributes) cannot be made non-resident. For non-resident attributes, their allocation map must fit within MFT records. Encrypted-by-NTFS, sparse data streams, or compressed data streams cannot be made resident. The format of the allocation map for non-resident attributes depends on its capability of supporting sparse data storage. 
In the current implementation of NTFS, once a non-resident data stream has been marked and converted as sparse, it cannot be changed back to non-sparse data, so it cannot become resident again, unless this data is fully truncated, discarding the sparse allocation map completely. When a non-resident attribute is so fragmented that its effective allocation map cannot fit entirely within one MFT record, NTFS stores the attribute in multiple records. The first one among them is called the base record, while the others are called extension records. NTFS creates a special attribute $ATTRIBUTE_LIST to store information mapping different parts of the long attribute to the MFT records, which means the allocation map may be split into multiple records. The $ATTRIBUTE_LIST itself can also be non-resident, but its own allocation map must fit within one MFT record. When there are too many attributes for a file (including ADSs, extended attributes, or security descriptors), so that they cannot all fit within the MFT record, extension records may also be used to store the other attributes, using the same format as the one used in the base MFT record, but without the space constraints of one MFT record. The allocation map is stored in a form of data runs with compressed encoding. Each data run represents a contiguous group of clusters that store the attribute value. For files on a multi-GB volume, each entry can be encoded as 5 to 7 bytes, which means a 1 KB MFT record can store about 100 such data runs. However, as the $ATTRIBUTE_LIST also has a size limit, it is dangerous to have more than 1 million fragments of a single file on an NTFS volume, which also implies that it is in general not a good idea to use NTFS compression on a file larger than 10GB. The NTFS file system driver will sometimes attempt to relocate the data of some of the attributes that can be made non-resident into the clusters, and will also attempt to relocate the data stored in clusters back to the attribute inside the MFT record, based on priority and preferred ordering rules, and size constraints. Since resident files do not directly occupy clusters ("allocation units"), it is possible for an NTFS volume to contain more files on a volume than there are clusters. For example, NTFS formats a 74.5GB partition with 19,543,064 clusters of 4KB. Subtracting system files (a 64MB log file, a 2,442,888-byte Bitmap file, and about 25 clusters of fixed overhead) leaves 19,526,158 clusters free for files and indices. Since there are four MFT records per cluster, this volume theoretically could hold almost 4 × 19,526,158 = 78,104,632 resident files. Opportunistic locks Opportunistic file locks (oplocks) allow clients to alter their buffering strategy for a given file or stream in order to increase performance and reduce network use. Oplocks apply to the given open stream of a file and do not affect oplocks on a different stream. Oplocks can be used to transparently access files in the background. A network client may avoid writing information into a file on a remote server if no other process is accessing the data, or it may buffer read-ahead data if no other process is writing data. Windows supports four different types of oplocks: Level 2 (or shared) oplock: multiple readers, no writers (i.e. read caching). Level 1 (or exclusive) oplock: exclusive access with arbitrary buffering (i.e. read and write caching). Batch oplock (also exclusive): a stream is opened on the server, but closed on the client machine (i.e. 
read, write and handle caching). Filter oplock (also exclusive): applications and file system filters can "back out" when others try to access the same stream (i.e. read and write caching) (since Windows 2000) Opportunistic locks have been enhanced in Windows 7 and Windows Server 2008 R2 with per-client oplock keys. Time Windows NT and its descendants keep internal timestamps as UTC and make the appropriate conversions for display purposes; all NTFS timestamps are in UTC. For historical reasons, the versions of Windows that do not support NTFS all keep time internally as local zone time, and therefore so do all file systems – other than NTFS – that are supported by current versions of Windows. This means that when files are copied or moved between NTFS and non-NTFS partitions, the OS needs to convert timestamps on the fly. But if some files are moved when daylight saving time (DST) is in effect, and other files are moved when standard time is in effect, there can be some ambiguities in the conversions. As a result, especially shortly after one of the days on which local zone time changes, users may observe that some files have timestamps that are incorrect by one hour. Due to the differences in implementation of DST in different jurisdictions, this can result in a potential timestamp error of up to 4 hours in any given 12 months. See also Comparison of file systems NTFSDOS ntfsresize WinFS (a canceled Microsoft filesystem) ReFS, a newer Microsoft filesystem References Further reading Compression file systems Windows disk file systems 1993 software
Operating System (OS)
439
IPadOS iPadOS is a mobile operating system developed by Apple Inc. for its iPad line of tablet computers. It is a rebranded variant of iOS, the operating system used by Apple's iPhones, renamed to reflect the diverging features of the two product lines, particularly the iPad's multitasking capabilities and support for keyboard use. It was introduced as iPadOS 13 in 2019 at the company's Worldwide Developers Conference, reflecting its status as the successor to iOS 12 for the iPad. iPadOS was released to the public on September 24, 2019. The current public release is iPadOS 15.3.1, released on February 10, 2022. History The first iPad was released in 2010 and ran iPhone OS 3.2, which added support for the larger device to the operating system, previously used only on the iPhone and iPod Touch. This shared operating system was rebranded as "iOS" with the release of iOS 4. The operating system initially had rough feature parity across the iPhone, iPod Touch, and iPad, with variations in user interface depending on screen size, and minor differences in the selection of apps included. However, over time, the variant of iOS for the iPad incorporated a growing set of differentiating features, such as picture-in-picture, the ability to display multiple running apps simultaneously (both introduced with iOS 9 in 2015), drag and drop, and a dock that more closely resembled the one in macOS than the one on the iPhone (added in 2017 with iOS 11). Standard iPad apps were increasingly designed to support the optional use of a physical keyboard. To emphasize the different feature set available on the iPad, and to signal their intention to develop the platforms in divergent directions, at WWDC 2019 Apple announced that the variant of iOS that runs on the iPad would be rebranded as "iPadOS". The new naming strategy began with iPadOS 13.1, in 2019. On June 22, 2020, at WWDC 2020, Apple announced iPadOS 14, with compact designs for search, Siri, and calls, improved app designs, handwriting recognition, better AR features, enhanced privacy protections, and app widgets. iPadOS 14 was released to the public on September 16, 2020. On June 7, 2021, at WWDC 2021, iPadOS 15 was announced with widgets on the Home Screen and the App Library, the same features that came to the iPhone with iOS 14 in 2020. The update also brought stricter privacy measures to Safari, such as IP address blocking so that websites cannot see a visitor's IP address. iPadOS 15 was released to the public on September 20, 2021. Features Many features of iPadOS are also available on iOS; however, iPadOS contains some features that are not available in iOS and lacks some features that are available in iOS. iPadOS 13 Home Screen Unlike previous versions of iOS, the icon grid displays up to five rows and six columns of apps, regardless of whether the device is in portrait or landscape orientation. The first page of the home screen can be configured to show a column of widgets from applications for easy access. Spotlight Search is no longer part of the widgets but can still be accessed by swiping down from the center of the home screen or pressing Command + Space on a connected keyboard. Multitasking iPadOS features a multitasking system more capable than the one in iOS, with features like Slide Over and Split View that make it possible to use multiple applications simultaneously. Double-clicking the Home Button or swiping up from the bottom of the screen and pausing will display all currently active spaces.
Each space can feature a single app, or a Split View featuring two apps. The user can also swipe left or right on the Home Indicator to go between spaces at any time, or swipe left or right with four fingers. While using an app, swiping up slightly from the bottom edge of the screen will summon the Dock, where apps stored within can be dragged to different areas of the current space to be opened in either Split View or Slide Over. Dragging an app to the left or right edge of the screen will create a Split View, which allows both apps to be used side by side. The size of the two apps in Split View can be adjusted by dragging a pill-shaped icon in the center of the vertical divider, and dragging the divider all the way to one side of the screen closes the respective app. If the user drags an app from the dock over the current app, it will create a floating window called Slide Over, which can be dragged to either the left or right side of the screen. A Slide Over window can be hidden by swiping it off the right side of the screen, and swiping left from the right edge of the screen will restore it. Slide Over apps can also be cycled between by swiping left or right on the Home Indicator in the Slide Over window, and pulling up on it will open an app switcher for Slide Over windows. A pill-shaped icon at the top of apps in Split View or Slide Over allows them to be switched in and out of Split View and Slide Over. The user can now have several instances of a single app open at once. A new App Exposé mode has been added which allows the user to see all of the instances of an app. In many applications, a notable exception being YouTube, videos can be shrunk down into a picture-in-picture window so the user can continue watching them while using other apps. This window can be resized by pinching and spreading, and can be docked to any of the four corners of the screen. It can also be hidden by swiping it off the side of the screen, where an arrow at the edge denotes where the video is hidden; swiping the arrow brings it back onscreen. Safari Safari now shows desktop versions of websites by default, includes a download manager, and has 30 new keyboard shortcuts if an external keyboard is connected. Sidecar Sidecar allows an iPad to function as a second monitor for macOS, named in reference to motorcycle sidecars. When using Sidecar, the Apple Pencil can be used to emulate a graphics tablet for applications like Photoshop. This feature is only supported on iPads that support the Apple Pencil. However, earlier versions of iPadOS 13 allowed all iPads compatible with iPadOS 13 to work with Sidecar. Storage iPadOS allows external storage, such as USB flash drives, portable hard drives, and solid state drives, to be connected to an iPad via the Files app. iPad Pros from the 3rd generation onward connect over USB-C, while the Lightning camera connection kit can also be used to connect external drives to earlier iPads. Mouse and trackpad support Mouse and trackpad support was added in version 13.4. iPadOS 14 Scribble Introduced in iPadOS 14, Scribble converts text handwritten with an Apple Pencil into typed text in most text fields. iPadOS 15 Widgets Beginning with iPadOS 15, widgets can be placed on the home screen. Translate Beginning with iPadOS 15, Translate is available. The feature was announced on June 7, 2021 at WWDC 2021. Translation works with 11 languages.
References External links iOS Reference Library at the Apple Developer site IPad Apple Inc. operating systems Mach (kernel) Mobile operating systems Products introduced in 2019 Tablet operating systems
Operating System (OS)
440
Data General RDOS The Data General RDOS (Real-time Disk Operating System) was a real-time operating system released in 1970. The software was only sold bundled with the company's popular Nova and Eclipse minicomputers. Overview RDOS was capable of multitasking, with the ability to run up to 32 so-called "tasks" (similar to the modern term threads) simultaneously on each of two grounds (foreground and background) within a 64 KB memory space. Later versions of RDOS were compatible with Data General's 16-bit Eclipse minicomputer line. A cut-down version of RDOS, without real-time background and foreground capability but still capable of running multiple threads and multi-user Data General Business Basic, was called Data General Diskette Operating System (DG-DOS or now—somewhat confusingly—simply DOS); another related operating system was RTOS, a Real-Time Operating System for diskless environments. RDOS on microNOVA-based "Micro Products" micro-minicomputers was sometimes called DG/RDOS. RDOS was superseded in the early 1980s by Data General's AOS family of operating systems, including AOS/VS and MP/AOS (MP/OS on smaller systems). Commands The following list of commands is supported by the RDOS/DOS CLI. ALGOL APPEND ASM BASIC BATCH BOOT BPUNCH BUILD CCONT CDIR CHAIN CHATR CHLAT CLEAR CLG COPY CPART CRAND CREATE DEB DELETE DIR DISK DUMP EDIT ENDLOG ENPAT EQUIV EXFG FDUMP FGND FILCOM FLOAD FORT FORTRAN FPRINT GDIR GMEM GSYS GTOD INIT LDIR LFE LINK LIST LOAD LOG MAC MCABOOT MDIR MEDIT MESSAGE MKABS MKSAVE MOVE NSPEED OEDIT OVLDR PATCH POP PRINT PUNCH RDOSSORT RELEASE RENAME REPLACE REV RLDR SAVE SDAY SEDIT SMEM SPDIS SPEBL SPEED SPKILL STOD SYSGEN TPRINT TUOFF TUON TYPE VFU XFER Antitrust lawsuit In the late 1970s Data General was sued (under the Sherman and Clayton antitrust acts) by competitors for its practice of bundling RDOS with the Data General Nova or Eclipse minicomputer. After Data General introduced the Data General Nova, a company called Digidyne wanted to run RDOS on its own hardware clone. Data General refused to license the software and claimed its "bundling rights". In 1985, courts including the United States Court of Appeals for the Ninth Circuit ruled against Data General in a case called Digidyne v. Data General. The Supreme Court of the United States declined to hear Data General's appeal, although Justices White and Blackmun would have heard it. The precedent set by the lower courts eventually forced Data General to license the operating system, because restricting the software to only Data General's hardware was an illegal tying arrangement. In 1999, Data General was taken over by EMC Corporation. References External links RDOS documentation at the Computer History Museum RDOS 7.50 User Parameters definition SimuLogics' ReNOVAte - Emulator to run NOVA/Eclipse Software on DOS / WindowsNT / UN*X / VMS Data General Disk operating systems Real-time operating systems
Operating System (OS)
441
BLIS/COBOL BLIS/COBOL is a discontinued operating system that was written in COBOL. It is the only such system to gain reasonably wide acceptance. It was optimised to compile business applications written in COBOL. BLIS was available on a range of Data General Nova and Data General Eclipse 16-bit minicomputers. It was marketed by Information Processing, Inc. (IPI), who regularly exhibited the product at the National Computer Conference in the 1970s and 80s. It was priced between US$830 and $10,000 depending on the number of supported users and features. In 1977, IPI boasted over 100 operational installations of the system worldwide. By 1985, a version for the IBM PC existed called PC-BLIS. Originally, most operating systems were written in assembly language for a particular processor or family of processors. Non-assembler operating systems were comparatively slow, but were easier for revision and repair. One of the reasons for the C programming language's low-level features, which resemble assembly language in some ways, is an early intent to use it for writing operating systems. Similar goals led to IBM's development of PL/S. The high-level nature of COBOL, which created some problems for operating system development, was partially addressed in BLIS, since it was deliberately optimized for COBOL. References Discontinued operating systems COBOL Assembly language software
Operating System (OS)
442
OSS OSS or Oss may refer to: Places Oss, a city and municipality in the Netherlands Osh Airport, IATA code OSS People with the name Oss (surname), a surname Arts and entertainment O.S.S. (film), a 1946 World War II spy film about Office of Strategic Services agents O.S.S. (TV series), a British spy series which aired in 1957 in the UK and the US Open Source Shakespeare, a non-commercial website with texts and statistics on Shakespeare's plays Old Syriac Sinaiticus, a Bible manuscript Organization of Super Spies, a fictional organization in the Spy Kids franchise Education ÖSS (Öğrenci Seçme Sınavı), a former university entrance exam in Turkey Options Secondary School, Chula Vista, California Otto Stern School for Integrated Doctoral Education, Frankfurt am Main, Germany Outram Secondary School, Singapore Organizations Observatoire du Sahara et du Sahel, dedicated to fighting desertification and drought; based in Tunis, Tunisia Office for Science and Society, Science Education from Montreal's McGill University Office of Strategic Services, World War II forerunner of the Central Intelligence Agency Office of the Supervising Scientist, an Australian Government body under the Supervising Scientist Offshore Super Series, an offshore powerboat racing organization Open Spaces Society, a UK registered charity championing public paths and open spaces Operations Support Squadron, a United States Air Force support squadron Optimized Systems Software, a former software company Science and technology Ohio Sky Survey Optical SteadyShot, a lens-based image stabilization system by Sony Optimal Stereo Sound, another name for the Jecklin Disk recording technique Oriented spindle stop, a type of spindle motion used within some G-code cycles Ovary Sparing Spay (OSS) Overspeed Sensor System (OSS), part of the Train Protection & Warning System for railroad trains Computer software and hardware OpenSearchServer, search engine software Open Sound System, a standard interface for making and capturing sound in Unix operating systems Open-source software, software with its source code made freely available Operations support systems, computers used by telecommunications service providers to administer and maintain network systems Other uses OSS Fighters, a Romania-based kickboxing promotion Order of St. Sava, a Serbian decoration Ossetic language code See also AAS (disambiguation) Hoz (disambiguation) OS (disambiguation)
Operating System (OS)
443
OpenStreetMap OpenStreetMap (OSM) is a collaborative project to create a free editable geographic database of the world. The geodata underlying the maps is considered the primary output of the project. The creation and growth of OSM has been motivated by restrictions on the use or availability of map data across much of the world, and the advent of inexpensive portable satellite navigation devices. Created by Steve Coast in the UK in 2004, it was inspired by the success of Wikipedia and the predominance of proprietary map data in the UK and elsewhere. Since then, it has grown to over two million registered users. Users may collect data using manual surveys, GPS devices, aerial photography, and other free sources, or use their own local knowledge of the area. This crowdsourced data is then made available under the Open Database License. The site is supported by the OpenStreetMap Foundation, a non-profit organisation registered in England and Wales. The data from OSM can be used in various ways, including production of paper and electronic maps, geocoding of addresses and place names, and route planning. Prominent users include Facebook, Apple, Microsoft, Amazon Logistics, Uber, Craigslist, Snapchat, OsmAnd, Wikimedia Maps, Maps.me, MapQuest Open, JMP statistical software, and Foursquare. Many users of GPS devices use OSM data to replace the built-in map data on their devices. OpenStreetMap data has been favourably compared with proprietary data sources, although data quality varies across the world. History Steve Coast founded the project in 2004, initially focusing on mapping the United Kingdom. In the UK and elsewhere, government-run and tax-funded projects like the Ordnance Survey created massive datasets but failed to distribute them freely and widely. According to Directions Magazine, the first contribution, made in the British city of London in 2005, was thought to be a road. In April 2006, the OpenStreetMap Foundation was established to encourage the growth, development and distribution of free geospatial data and provide geospatial data for anybody to use and share. In December 2006, Yahoo! confirmed that OpenStreetMap could use its aerial photography as a backdrop for map production. In April 2007, Automotive Navigation Data (AND) donated a complete road data set for the Netherlands and trunk road data for India and China to the project, and by July 2007, when the first international OSM conference, The State of the Map, was held, there were 9,000 registered users. Sponsors of the event included Google, Yahoo! and Multimap. In October 2007, OpenStreetMap completed the import of a US Census TIGER road dataset. In December 2007, Oxford University became the first major organisation to use OpenStreetMap data on their main website. Ways to import and export data have continued to grow – by 2008, the project had developed tools to export OpenStreetMap data to power portable GPS units, replacing their existing proprietary and out-of-date maps. In March 2008, two founders announced that they had received venture capital funding of €2.4 million for CloudMade, a commercial company that uses OpenStreetMap data. In November 2010, Bing changed their licence to allow use of their satellite imagery for making maps. In 2012, the launch of pricing for Google Maps led several prominent websites to switch from that service to OpenStreetMap and other competitors.
Chief among these were Foursquare and Craigslist, which adopted OpenStreetMap, and Apple, which ended a contract with Google and launched a self-built mapping platform using TomTom and OpenStreetMap data. In 2017, DigitalGlobe started providing satellite imagery to aid OpenStreetMap contributions. In June 2021, the OpenStreetMap Foundation announced plans to move from the United Kingdom to a country in the European Union, citing Brexit as the inciting factor. According to the organisation, there are several reasons for the move, including "the failure of the UK and EU to agree on mutual recognition of database rights", the rising difficulties in "banking, finance and using PayPal in the UK" and the loss of .eu domains. Map production Map data is collected from scratch by volunteers performing systematic ground surveys using tools such as a handheld GPS unit, a notebook, digital camera, or a voice recorder. The data is then entered into the OpenStreetMap database using a number of software tools, including JOSM and Merkaartor. Mapathon competition events are also held by the OpenStreetMap team and by non-profit organisations and local governments to map a particular area. The availability of aerial photography and other data from commercial and government sources has added important sources of data for manual editing and automated imports. Special processes are in place to handle automated imports and avoid legal and technical problems. Software for editing maps Editing of maps can be done using the default web browser editor called iD, an HTML5 application using D3.js and written by Mapbox, which was originally financed by the Knight Foundation. JOSM, Potlatch, and Merkaartor are more powerful desktop editing applications that are better suited for advanced users. Vespucci is the primary full-featured editor for Android; it was released in 2009. StreetComplete is an Android app launched in 2016, which allows users without any OpenStreetMap knowledge to answer simple quests for existing data in OpenStreetMap, and thus contribute data. Maps.me and OsmAnd, two offline map mobile applications available for Android and iOS, both include limited OSM data editors. Go Map!! is an iOS app that lets users create and edit information in OpenStreetMap. Pushpin is another iOS app that lets users add POIs on the go. Contributors The project has a geographically diverse user base, due to its emphasis on local knowledge and the "on-the-ground" situation in the process of data collection. Many early contributors were cyclists who survey with and for bicyclists, charting cycle routes and navigable trails. Others are GIS professionals who contribute data with Esri tools. Contributors are predominantly men, with only 3–5% being women. By August 2008, shortly after the second The State of the Map conference was held, there were over 50,000 registered contributors; by March 2009, there were 100,000, and by the end of 2009 the figure was nearly 200,000. In April 2012, OpenStreetMap cleared 600,000 registered contributors. On 6 January 2013, OpenStreetMap reached one million registered users. Around 30% of users have contributed at least one point to the OpenStreetMap database. Surveys and personal knowledge Ground surveys are performed by a mapper, on foot, bicycle, or in a car, motorcycle, or boat. Map data was typically recorded on a GPS unit. In late 2006, Yahoo! made their aerial imagery available to OSM contributors for tracing, which simplified mapping of readily visible and identifiable features.
The project still makes use of GPS traces from volunteers, which are used to delineate features that are more difficult to identify and classify, such as footpaths, as well as providing ground truth for aerial imagery alignment. Once the data has been collected, it is entered into the database by uploading it onto the project's website together with appropriate attribute data. As collecting and uploading data may be separated from editing objects, contribution to the project is possible without using a GPS unit. Some committed contributors adopt the task of mapping whole towns and cities, or organising mapping parties to gather the support of others to complete a map area. A large number of less active users contribute corrections and small additions to the map. Street-level image data In addition to several different sets of satellite image backgrounds available to OSM editors, data from several street-level image platforms are available as map data photo overlays: Bing Streetside 360° image tracks, and the open and crowdsourced Mapillary and KartaView platforms, which generally carry smartphone and other windshield-mounted camera images. Additionally, a Mapillary traffic sign data layer can be enabled; it is the product of user-submitted images. Government data Some government agencies have released official data on appropriate licences. This includes the United States, where works of the federal government are placed in the public domain. Globally, OSM initially used the Prototype Global Shoreline from NOAA. Because it was oversimplified and crude, it has been mainly replaced by other government sources or manual tracing. In the United States, most roads originate from TIGER from the Census Bureau. Geographic names were initially sourced from the Geographic Names Information System, and some areas contain water features from the National Hydrography Dataset. In the UK, some Ordnance Survey OpenData is imported. In Canada, Natural Resources Canada's CanVec vector data and GeoBase provide landcover and streets. Out-of-copyright maps can be good sources of information about features that do not change frequently. Copyright periods vary, but in the UK Crown copyright expires after 50 years and hence Ordnance Survey maps up to the 1960s can legally be used. A complete set of UK 1 inch/mile maps from the late 1940s and early 1950s has been collected, scanned, and made available online as a resource for contributors. Route planning In February 2015, OpenStreetMap added route planning functionality to the map on its official website. The routing uses external services, namely OSRM, GraphHopper and MapQuest. There are other routing providers and applications listed in the official Routing wiki. Usage Software for viewing maps Web browser Data provided by the OpenStreetMap project can be viewed in a web browser with JavaScript support via Hypertext Transfer Protocol (HTTP) on its official website. The basic map views offered are: Standard, Cycle map, Transport map and Humanitarian. Map display and category options are available using OpenStreetBrowser. OsmAnd OsmAnd is free software for Android and iOS mobile devices that can use offline vector data from OSM. It also supports layering OSM vector data with prerendered raster map tiles from OpenStreetMap and other sources. Locus Map Locus Map is an Android mobile app, available in free and premium versions, that can use offline vector data from OSM. It also supports layering OSM vector data with prerendered raster map tiles from OpenStreetMap and other sources.
Maps.me Maps.me is free software for Android and iOS mobile devices that provides offline maps based on OSM data. Organic Maps Organic Maps is a mobile map and navigation app with a focus on privacy. It is free software for Android and iOS mobile devices, and provides offline maps based on OSM data. GNOME Maps GNOME Maps is a graphical front-end written in JavaScript and introduced in GNOME 3.10. It provides a mechanism to find the user's location with the help of GeoClue, finds directions via GraphHopper, and can deliver a list of results in answer to queries. Marble Marble is a KDE virtual globe application which received support for OpenStreetMap. FoxtrotGPS FoxtrotGPS is a GTK+-based map viewer that is especially suited to touch input. It is available in the SHR or Debian repositories. The website OpenStreetMap.org provides a slippy map interface based on the Leaflet JavaScript library (and formerly built on OpenLayers), displaying map tiles rendered by the Mapnik rendering engine, and tiles from other sources including OpenCycleMap.org. Custom maps can also be generated from OSM data through various software including Jawg Maps, Mapnik, Mapbox Studio, and Mapzen's Tangram. OpenStreetMap maintains lists of available online and offline routing engines, such as the Open Source Routing Machine. OSM data is popular with routing researchers, and is also available to open-source projects and companies to build routing applications (or for any other purpose). Humanitarian aid The 2010 Haiti earthquake established a model for non-governmental organisations (NGOs) to collaborate with international organisations. OpenStreetMap and Crisis Commons volunteers used available satellite imagery to map the roads, buildings and refugee camps of Port-au-Prince in just two days, building "the most complete digital map of Haiti's roads". The resulting data and maps have been used by several organisations providing relief aid, such as the World Bank, the European Commission Joint Research Centre, the Office for the Coordination of Humanitarian Affairs, UNOSAT and others. NGOs, like the Humanitarian OpenStreetMap Team and others, have worked with donors like the United States Agency for International Development (USAID) to map other parts of Haiti and parts of many other countries, both to create map data for places that were blank, and to engage and build the capacity of local people. After Haiti, the OpenStreetMap community continued mapping to support humanitarian organisations for various crises and disasters. After the Northern Mali conflict (January 2013), Typhoon Haiyan in the Philippines (November 2013), and the Ebola virus epidemic in West Africa (March 2014), the OpenStreetMap community showed it can play a significant role in supporting humanitarian organisations. The Humanitarian OpenStreetMap Team acts as an interface between the OpenStreetMap community and the humanitarian organisations. Along with post-disaster work, the Humanitarian OpenStreetMap Team has worked to build better risk models and grow local OpenStreetMap communities in multiple countries, including Uganda, Senegal and the Democratic Republic of the Congo, in partnership with the Red Cross, Médecins Sans Frontières, the World Bank, and other humanitarian groups. Scientific research OpenStreetMap data has been used in scientific studies. For example, road data was used for research on remaining roadless areas and in the creation of the annual Forest Landscape Integrity Index.
Another example is the analysis of OpenStreetMap data for the determination of spatial development and socio-economic factors in a developing country. "State of the Map" annual conference Since 2007, the OSM community has held an annual, international conference called State of the Map. Recent editions have been: 2020 (July 4–5; the event was originally planned to take place in Cape Town, South Africa, but was turned into an online conference due to the COVID-19 pandemic); 2021 (July 9–11; again, the event was held as an online conference due to the COVID-19 pandemic. The event had both a private section with panels and workshops, only accessible to people who had purchased an online ticket, and a public one where most of the talks took place. Additionally, people who had purchased a ticket had the opportunity to ask questions of the speakers and the sponsors); and 2022 (in-person and online). There are also various national, regional and continental SotM conferences, such as SotM U.S., SotM Baltics, SotM Asia and SotM Africa. Legal aspects Licensing terms OpenStreetMap data was originally published under the Creative Commons Attribution-ShareAlike licence (CC BY-SA) with the intention of promoting free use and redistribution of the data. In September 2012, the licence was changed to the Open Database Licence (ODbL) published by Open Data Commons (ODC) in order to more specifically define its bearing on data rather than representation. As part of this relicensing process, some of the map data was removed from the public distribution. This included all data contributed by members who did not agree to the new licensing terms, as well as all subsequent edits to those affected objects. It also included any data contributed based on input data that was not compatible with the new terms. Estimates suggested that over 97% of data would be retained globally, but certain regions would be affected more than others, such as Australia, where 24 to 84% of objects would be retained, depending on the type of object. Ultimately, more than 99% of the data was retained, with Australia and Poland being the countries most severely affected by the change. All data added to the project needs to have a licence compatible with the Open Database Licence. This can include out-of-copyright information, public domain data or other licences. Contributors agree to a set of terms which require compatibility with the current licence. This may involve examining licences for government data to establish whether it is compatible. Software used in the production and presentation of OpenStreetMap data is available from many different projects, and each may have its own licensing. The application that users access to edit maps and view changelogs is powered by Ruby on Rails. The application also uses PostgreSQL for storage of user data and edit metadata. The default map is rendered by Mapnik, stored in PostGIS, and powered by an Apache module called mod_tile. Certain parts of the software, such as the map editor Potlatch2, have been made available as public domain. Commercial data contributions Some OpenStreetMap data is supplied by companies that choose to freely license either actual street data or satellite imagery sources from which OSM contributors can trace roads and features. Notably, Automotive Navigation Data provided a complete road data set for the Netherlands and details of trunk roads in China and India. In December 2006, Yahoo!
confirmed that OpenStreetMap was able to make use of their vertical aerial imagery, and this photography was available within the editing software as an overlay. Contributors could create their vector-based maps as a derived work, released with a free and open licence, until the shutdown of the Yahoo! Maps API on 13 September 2011. In November 2010, Microsoft announced that the OpenStreetMap community could use Bing vertical aerial imagery as a backdrop in its editors. For a period from 2009 to 2011, NearMap Pty Ltd made their high-resolution PhotoMaps (of major Australian cities, plus some rural Australian areas) available for deriving OpenStreetMap data under a CC BY-SA licence. In June 2018, the Microsoft Bing team announced a major contribution of 125 million U.S. building footprints to the project – four times the number contributed by users and government data imports. Operation While OpenStreetMap aims to be a central data source, its map rendering and aesthetics are meant to be only one of many options, some of which highlight different elements of the map or emphasise design and performance. Data format OpenStreetMap uses a topological data structure, with four core elements (also known as data primitives): Nodes are points with a geographic position, stored as coordinates (pairs of a latitude and a longitude) according to WGS 84. Outside of their usage in ways, they are used to represent map features without a size, such as points of interest or mountain peaks. Ways are ordered lists of nodes, representing a polyline, or possibly a polygon if they form a closed loop. They are used both for representing linear features such as streets and rivers, and areas, like forests, parks, parking areas and lakes. Relations are ordered lists of nodes, ways and relations (together called "members"), where each member can optionally have a "role" (a string). Relations are used for representing the relationship of existing nodes and ways. Examples include turn restrictions on roads, routes that span several existing ways (for instance, a long-distance motorway), and areas with holes. Tags are key-value pairs (both arbitrary strings). They are used to store metadata about the map objects (such as their type, their name and their physical properties). Tags are not free-standing, but are always attached to an object: to a node, a way or a relation. A recommended ontology of map features (the meaning of tags) is maintained on a wiki. New tagging schemes can always be proposed by a popular vote on a written proposal in the OpenStreetMap wiki; however, there is no requirement to follow this process. There are over 89 million different kinds of tags in use as of June 2017. Data storage The OSM data primitives are stored and processed in different formats. The main copy of the OSM data is stored in OSM's main database. The main database is a PostgreSQL database with the PostGIS extension, which has one table for each data primitive, with individual objects stored as rows. All edits happen in this database, and all other formats are created from it. For data transfer, several database dumps are created, which are available for download. The complete dump is called planet.osm. These dumps exist in two formats, one using XML and one using the Protocol Buffer Binary Format (PBF). Popular services A variety of popular services incorporate some sort of geolocation or map-based component.
Notable services using OSM for this include: Amazon uses OpenStreetMap for navigation and has a team that revises the map based on feedback from drivers. Apple Inc. unexpectedly created an OpenStreetMap-based map for iPhoto for iOS, and launched the maps without properly citing the data source, though this was corrected in version 1.0.1. OpenStreetMap is one of the many cited sources for Apple's custom maps in iOS 6, though the majority of map data is provided by TomTom. As of February 2021, Apple was the most prolific corporate editor, responsible for 80% of edits to existing roads. Petal Maps is a free mobile map application developed by Huawei. According to its copyright statement, OpenStreetMap is one of its map data sources. Craigslist switched to OpenStreetMap in 2012, rendering their own tiles based on the data. Ballardia (a games developer) launched World of the Living Dead: Resurrection in October 2013, which incorporated OpenStreetMap into its game engine, along with census information, to create a browser-based game mapping over 14,000 square kilometres of greater Los Angeles with survival strategy gameplay. Its previous incarnation had used Google Maps, which had proven incapable of supporting high volumes of players, so during 2013 they shut down the Google Maps version and ported the game to OSM. Facebook uses the map directly in its website/mobile app (depending on the zoom level, the area and the device), with a rendering style designed by Stamen Design as of 2021. Facebook has also used AI technology to detect roads absent from OpenStreetMap but visible in aerial imagery ("mapwith.ai" / "Map with AI"), and has developed an OpenStreetMap editing tool ("RapiD") for adding these roads to OpenStreetMap. The "Daylight Map Distribution" is a snapshot of OpenStreetMap data created by Facebook that claims to be clean of vandalism. Flickr uses OpenStreetMap data for various cities around the world, including Baghdad, Beijing, Kabul, Santiago, Sydney and Tokyo. In 2012, the maps switched to use Nokia data primarily, with OSM being used in areas where the commercial provider lacked performance. Foursquare started using OpenStreetMap via Mapbox's rendering and infrastructure of OSM. Geotab uses OpenStreetMap data in their vehicle tracking software platform, MyGeotab. Hasbro, the toy company behind the real estate-themed board game Monopoly, launched Monopoly City Streets, a massively multiplayer online game which allowed players to "buy" streets all over the world. The game used map tiles from Google Maps and the Google Maps API to display the game board, but the underlying street data was obtained from OpenStreetMap. The online game was a limited-time offering; its servers were shut down at the end of January 2010. Komoot, a route planning service for running, cycling and hiking, uses OpenStreetMap data. Mapbox is a provider of custom online maps for websites and applications. MapQuest announced a service based on OpenStreetMap in 2010, which eventually became MapQuest Open. Mapy.cz is based on OpenStreetMap and extends it by allowing users to upload photos, by making web searches by categories like travel tips, restaurants and accommodation, and by featuring 3D views, aerial views, historical photos and a haptic mode for blind people. It has apps for both Android and iOS with offline maps. Moovit uses maps based on OpenStreetMap in their free mobile application for public transit navigation.
Niantic switched to OSM-based maps from Google Maps on 1 December 2017 for their games Ingress and Pokémon Go. Nominatim (from the Latin, 'by name') is a tool to search OSM data by name and address (geocoding) and then to generate synthetic addresses of OSM points (reverse geocoding). OpenTopoMap renders topographic maps based on OSM data and on SRTM data. Snapchat's June 2017 update introduced its Snap Map with data from Mapbox, OpenStreetMap, and DigitalGlobe. Strava switched to OpenStreetMap, rendered and hosted by Mapbox, from Google Maps in July 2015. Tableau has integrated OSM for all their mapping needs. It has been integrated in all of their products. TCDD Taşımacılık uses OpenStreetMap as a location map on passenger seats on YHTs. Tesla's Smart Summon feature, released widely in the US in October 2019, uses OSM data to navigate vehicles autonomously in private parking areas (without a safety driver). Wahoo uses OpenStreetMap for mapping and giving turn-by-turn navigation in their ELEMNT cycling computers. Webots uses OpenStreetMap data to create virtual environments for autonomous vehicle simulations. Gurtam uses OpenStreetMap data in their GPS tracking software platform, Wialon. Wikimedia projects use OpenStreetMap as a locator map for cities and travel points of interest. Wikipedia uses OpenStreetMap data to render custom maps used by articles. Many languages are included in the WIWOSM project (Wikipedia Where in OSM), which aims to show OSM objects on a slippy map, directly visible on the article page. Sister projects Several open collaborative mapping projects integrate with the OpenStreetMap database or are otherwise affiliated with the OpenStreetMap project: OpenHistoricalMap is a world historical map based on the OpenStreetMap software platform. OpenSeaMap is a world nautical chart built as a mashup of OpenStreetMap, crowdsourced water depth tracks, and third-party weather and bathymetric data. Wheelmap.org is a portal for mapping, browsing, and reviewing wheelchair-accessible places. See also Building information modeling Collaborative mapping Comparison of web map services Counter-mapping Neogeography Turn-by-turn navigation Volunteered geographic information Other collaborative mapping projects HERE Map Creator Google Map Maker Wikimapia Yandex.Map editor Mobile applications OsmAnd Karta GPS MAPS.ME Street map References Further reading External links 2004 establishments in the United Kingdom British websites Internet properties established in 2004 Wikis about geography Social information processing Open data Web mapping
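As a concrete illustration of the node, way, relation and tag primitives described under Data format above, here is a minimal Python sketch; the class and field names are illustrative only, not the project's actual schema or API.

# Minimal model of OpenStreetMap's four data primitives.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Tags = Dict[str, str]  # arbitrary key-value strings attached to an object

@dataclass
class Node:                 # a point: latitude/longitude per WGS 84
    id: int
    lat: float
    lon: float
    tags: Tags = field(default_factory=dict)

@dataclass
class Way:                  # an ordered list of nodes: a polyline, or a
    id: int                 # polygon if the first and last node coincide
    node_ids: List[int]
    tags: Tags = field(default_factory=dict)

@dataclass
class Relation:             # ordered members (node/way/relation) with roles
    id: int
    members: List[Tuple[str, int, str]]  # (member type, referenced id, role)
    tags: Tags = field(default_factory=dict)

# A tiny park outline: four corner nodes closed into a way and tagged.
corners = [Node(i, 52.0 + i * 1e-4, 13.0) for i in range(4)]
park = Way(100, [n.id for n in corners] + [corners[0].id], {"leisure": "park"})
print(park)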
Operating System (OS)
444
Harmony (operating system) Harmony is an experimental computer operating system (OS) developed at the National Research Council Canada in Ottawa. It is a second-generation message passing system that was also used as the basis for several research projects, including robotics sensing and graphical workstation development. Harmony was actively developed throughout the 1980s and into the mid-1990s. History Harmony was a successor to the Thoth system developed at the University of Waterloo. Work on Harmony began at roughly the same time as that on the Verex kernel developed at the University of British Columbia. David Cheriton was involved in both Thoth and Verex, and would later go on to develop the V System at Stanford University. Harmony's principal developers included W. Morven Gentleman, Stephen A. MacKay, Darlene A. Stewart, and Marceli Wein. Early ports of the system existed for a variety of Motorola 68000-based computers, including ones using the VMEbus and Multibus backplanes and in particular the Multibus-based Chorus multiprocessor system at Waterloo. Other hosts included the Atari 520 or 1040 ST. A port also existed for the Digital Equipment Corporation VAX. Harmony achieved formal verification in 1995. Features Harmony was designed as a real-time operating system (RTOS) for robot control. It is a multitasking, multiprocessing system. It is not multi-user. Harmony provided a runtime system (environment) only; development took place on a separate system, originally an Apple Macintosh. For each processor in the system, an image is created that combines Harmony with the one multitask program for that processor at link time, an exception being a case where the kernel is programmed into a read-only memory (ROM). Although the term did not appear in the original papers, Harmony was later referred to as a microkernel. A key in Harmony is its use of the term task, which in Harmony is defined as the "unit of sequential and synchronous execution" and "the unit of resource ownership". It is likened to a subroutine, but one that must be explicitly created and which runs independently of the task that created it. Programs are made up of a number of tasks. A task is bound to a given processor, which may be different from that of the instantiating task and which may host many tasks. All system resources are owned and managed by tasks. Intertask communication is provided mostly by synchronous message passing and four associated primitives. Shared memory is also supported. Destruction of a task closes all of its connections. Input/output uses a data stream model. Harmony is connection-oriented in that tasks that communicate with each other often maintain state information about each other. In contrast with some other distributed systems, connections in Harmony are inexpensive. Applications and tools An advanced debugger called Melody was developed for Harmony at the Advanced Real-Time Toolset Laboratory at Carleton University. It was later commercialized as Remedy. The Harmony kernel underpinned the Actra project — a multiprocessing, multitasking Smalltalk. Harmony was used in the multitasking, multiprocessor Adagio robotics simulation workstation. Concepts from both Harmony and Adagio influenced the design of the Smalltalk-based Eva event driven user interface builder. Harmony was used as the underlying OS for several experimental robotic systems. Commercial Harmony was commercialized by the Taurus Computer Products division of Canadian industrial computer company Dy4. 
When Dy4 closed down their software division, four of Taurus' former developers founded Precise Software Technologies and continued developing the OS as Precise/MPX, the predecessor to their later Precise/MQX product. Another commercial operating system derived from Harmony is the Unison OS from Rowebot Research Inc. References Further reading Real-time operating systems National Research Council (Canada) Microkernel-based operating systems Robot operating systems Operating system families
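The synchronous message passing described above can be sketched in a few lines. Harmony's actual four primitives are not named in this article, so the send/receive/reply trio below is an assumption modelled on Thoth-family systems, written in Python purely for illustration:

# Toy illustration of synchronous intertask message passing: the sender
# blocks until the receiver replies, as in Harmony-style kernels.
# The primitive names here are assumptions, not Harmony's real API.
import queue
import threading

class Task:
    def __init__(self):
        self._inbox = queue.Queue()

    def send(self, dest: "Task", msg):
        # Synchronous send: block until the destination task replies.
        reply_box = queue.Queue()
        dest._inbox.put((msg, reply_box))
        return reply_box.get()

    def receive(self):
        # Block until a message arrives; return it plus a reply handle.
        return self._inbox.get()

    def reply(self, reply_box, answer):
        reply_box.put(answer)

server, client = Task(), Task()

def server_loop():
    msg, handle = server.receive()
    server.reply(handle, msg.upper())

threading.Thread(target=server_loop, daemon=True).start()
print(client.send(server, "ping"))   # prints "PING" once the server replies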
Operating System (OS)
445
PowerLinux PowerLinux is the combination of a Linux-based operating system (OS) running on PowerPC- or Power ISA-based computers from IBM. The term is often used alongside Linux on Power, and is also the name of several Linux-only IBM Power Systems. IBM and Linux In the late 1990s, IBM began considering the Linux operating system. In 2000, IBM announced it would promote Linux. In 2001, IBM invested $1 billion to back the Linux movement, embracing it as an operating system for IBM servers and software. Within a decade, Linux could be found in virtually every IBM business, geography and workload, and continues to be deeply embedded in IBM hardware, software, services and internal development. A survey released by the Linux Foundation in April 2012 showed IBM as the fifth-leading commercial contributor over the preceding seven years, with more than 600 developers involved in more than 100 open-source projects. IBM established the Linux Technology Center (LTC) in 1999 to combine its software developers interested in Linux and other open-source software into a single organization. The LTC collaborated with the Linux community to make Linux run optimally on PowerPC, x86, and, more recently, the Cell Broadband Engine. Developers in the LTC contribute to various open-source projects as well as projects focused on enabling Linux to use new hardware functions on IBM platforms. Linux has run on IBM POWER systems since 2001, when a team created a new 64-bit port of the Linux kernel to allow the OS to run on PowerPC processors. The first system to fully support the 64-bit Linux kernel was IBM's POWER5, created in 2004. It was followed by POWER6 in 2007 and the current POWER7-based systems in 2010. PowerLinux Servers Linux was first ported to POWER in June 2000. Since then PowerLinux has been used in a number of supercomputers, including MareNostrum (2004) and Roadrunner (2008). Beginning in April 2012, IBM introduced three POWER7 processor-based Linux-specific systems for big data analytics, industry applications and open-source infrastructure services such as Web serving, email and social media collaboration services. The IBM PowerLinux 7R1 and IBM PowerLinux 7R2 systems are one- and two-socket rack-mount servers that support 8 or 16 POWER7 microprocessor cores, at 3.55 GHz (7R1) or a choice of 3.55 and 3.3 GHz (7R2), with a maximum of 128 GB of memory on the 7R1 or 256 GB on the 7R2, configured with 8, 16 and 32 GB dual inline memory modules (DIMMs). Both systems run Linux operating systems, Red Hat Enterprise Linux or SUSE Linux Enterprise Server, and include a built-in PowerVM for PowerLinux hypervisor that supports up to 10 VMs per core and 160 VMs per server. The IBM PowerLinux 7R4 is a POWER7+ processor-based system in a 5U package with two or four sockets and 16 or 32 cores. It can accommodate up to 1 TB of 1066 MHz DDR3 memory and supports Active Memory Sharing. PowerVM for Linux dynamically adjusts system resources to partitions based on workload demands, across up to 640 VMs per server (20 micropartitions per core). In a study on systems and architecture for big data, IBM Research found that a 10-node Hadoop cluster of PowerLinux 7R2 nodes with POWER7+ processors, running InfoSphere BigInsights software, can sort through a terabyte of data in less than 8 minutes.
IBM also introduced the IBM Flex System p24L Compute Node, a Linux-specific two-socket compute node for the recently announced IBM PureFlex System, which contains 12 or 16 POWER7 microprocessor cores, up to 256 GB of memory, the option of Red Hat Enterprise Linux or SUSE Linux Enterprise Server operating systems, and built-in PowerVM for PowerLinux. In addition to these specific products, Linux is capable of running on any Power series hardware. PowerLinux versus Linux/x86 The April 2012 releases by IBM of PowerLinux were designed specifically to run the Linux OS on the company's POWER7-based systems. Unlike servers built on the Intel Xeon processor, an x86 design with two threads per core, the POWER7 processor provides four threads per core. POWER-based servers are virtualized to provide 60 to 80 percent utilization, compared to a typical 40-percent rate for x86 processors. The PowerVM virtualization program has a Common Criteria Evaluation Assurance Level of 4+, with zero security vulnerabilities reported, as well as unlimited memory use. About PowerVM virtualization Power-based IBM systems have built-in virtualization capabilities derived from mainframe technology. On System p, this virtualization package is referred to as PowerVM. PowerVM includes virtualization capabilities such as micro-partitioning, active memory sharing, active memory deduplication, a virtual I/O server for virtual networks and storage, and live partition mobility. Systems PowerLinux runs on: AmigaOne AmigaOne X1000 Cell blade server from Mercury Computer Systems IBM Power Systems JS43, JS23, JS20, JS21, QS20, QS21, QS22 Blade Center Linux on the PlayStation 3 Pegasos Sam440ep Sam460ex References External links Enterprise Linux on IBM Power Systems Linux on Power users and kernel devel mailing lists Diagnostic aids, productivity tools, and installation and developer toolkits Linux information for IBM systems Linux at IBM Developer Linux IBM software Power ISA Linux distributions Linux distributions
Operating System (OS)
446
Systems Programming Language Systems Programming Language, often shortened to SPL but sometimes known as SPL/3000, was a procedurally-oriented programming language written by Hewlett-Packard for the HP 3000 minicomputer line and first introduced in 1972. SPL was used to write the HP 3000's primary operating system, Multi-Programming Executive (MPE). Similar languages on other platforms were generically referred to as system programming languages, confusing matters. Originally known as the Alpha Systems Programming Language, named for the development project that produced the 3000 series, SPL was designed to take advantage of the Alpha's stack-based processor design. It is patterned on ESPOL, a similar ALGOL-derived language used by the Burroughs B5000 mainframe systems, which also influenced a number of 1960s languages like PL360 and JOVIAL. Through the mid-1970s, the success of the HP systems produced a number of SPL offshoots. Examples include ZSPL for the Zilog Z80 processor, and Micro-SPL for the Xerox Alto. The latter inspired Action! for the Atari 8-bit family, which was fairly successful; it followed Pascal syntax more closely, losing some of SPL's idiosyncrasies. SPL was widely used during the lifetime of the original integrated circuit-based versions of the HP 3000 platform. In the 1980s, the HP 3000 and MPE were reimplemented in an emulator running on the PA-RISC-based HP 9000 platforms. HP promoted Pascal as the favored system language on PA-RISC and did not provide an SPL compiler. This caused code maintenance concerns, and third-party SPL compilers were introduced to fill this need. History Hewlett-Packard introduced their first minicomputers, the HP 2100 series, in 1967. The machines had originally been designed by an external team working for Union Carbide and were intended mainly for industrial embedded control uses, not the wider data processing market. HP saw this as a natural fit with their existing instrumentation business and initially pitched it to those users. In spite of this, HP found that the machine's price/performance ratio was making it increasingly successful in the business market. During this period, the concept of time sharing was becoming popular, especially as core memory costs fell and systems began to ship with more memory. In 1968, HP introduced a bundled system using two 2100-series machines running HP Time-Shared BASIC, which provided a complete operating system as well as the BASIC programming language. These two-machine systems, collectively known as HP 2000s, were an immediate success. HP BASIC was highly influential for many years, and its syntax can be seen in a number of microcomputer BASICs, including Palo Alto TinyBASIC, Integer BASIC, North Star BASIC, Atari BASIC, and others. Designers at HP began to wonder "If we can produce a time-sharing system this good using a junky computer like the 2116, think what we could accomplish if we designed our own computer." To this end, in 1968 the company began putting together a larger team to design a new mid-sized architecture. New team members included those who had worked on Burroughs and IBM mainframe systems, and the resulting concepts bore a strong resemblance to the highly successful Burroughs B5000 system. The B5000 used a stack machine processor that made multiprogramming simpler to implement, and this same architecture was also selected for the new HP concept. Two implementations were considered: a 32-bit mainframe-scale machine known as Omega, and a 16-bit design known as Alpha.
Almost all effort was on the Omega, but in June 1970, Omega was canceled. This led to an extensive redesign of Alpha to differentiate it from the 2100 series, and it eventually emerged with plans for an even more aggressive operating system design. Omega had been intended to run in batch mode and use a smaller computer, the "front end", to process interactions with the user. This was the same operating concept as the 2000 series. However, yet another 2000-style system would not be enough for Alpha, and the decision was made to have a single operating system for batch, interactive and even real-time operation. To make this work, it needed an advanced computer bus design with extensive direct memory access (DMA) and required an advanced operating system (OS) to provide quick responses to user actions. The B5000 was also unique, for its time, in that its operating system and core utilities were all programmed in a high-level language, ESPOL. ESPOL was a derivative of the ALGOL language tuned to work on the B5000, a concept that was highly influential in the 1960s and led to new languages like JOVIAL, PL/360 and BCPL. The HP team decided they would also use an ALGOL-derived language for their operating systems work. HP's similar language was initially known as the Alpha Systems Programming Language. Alpha took several years to develop before emerging in 1972 as the HP 3000. The machine was on the market for only a few months before it was clear it simply wasn't working right, and HP was forced to recall all 3000s already sold. It was reintroduced in late 1973 with most of its problems having been fixed. A major upgrade to the entire system, the CX machine, and MPE-C to run on it, reformed its image, and the 3000 went on to be another major success during the second half of the 1970s. This success made SPL almost as widespread as the 2000 series' BASIC, and like that language, SPL resulted in a number of versions for other platforms. Notable among them was Micro-SPL, a version written for the Xerox Alto workstation. This machine had originally used BCPL as its primary language, but dissatisfaction with its performance led Henry Baker to design a non-recursive language that he implemented with Clinton Parker in 1979. Parker would then further modify Micro-SPL to produce Action! for the Atari 8-bit family in 1983. HP reimplemented the HP 3000 system on the PA-RISC chipset, running a new version of the operating system known as MPE/iX. MPE/iX had two modes: in "native mode" it ran applications that had been recompiled for the PA-RISC using newer Pascal compilers, while under "compatible mode" it could run all existing software via emulation. HP did not supply a native-mode compiler for MPE/iX, so it was not an easy process to move existing software to the new platform. To fill the need, Allegro Consultants wrote an SPL-compatible language named "SPLash!" that could compile to original HP 3000 code to run within the emulator, or to native mode. This offered a porting pathway for existing SPL software. Language Basic syntax SPL generally follows ALGOL 60 syntax conventions, and will be familiar to anyone with experience in ALGOL or its descendants, like Pascal and Modula-2. Like those languages, program statements can span multiple physical lines and end with a semicolon. Comments are denoted with the COMMENT keyword, or by surrounding the comment text in << and >>. Statements are grouped into blocks using BEGIN and END, although, as in Pascal, the END of a program must be followed by a period.
The program as a whole is surrounded by BEGIN and END. (with the trailing period), similar to Pascal, but lacking a PROGRAM keyword or similar statement at the top. The reason for this is that SPL allows any block of code to be used as a program on its own, or compiled into another program to act as a library. The creation of code as a program or subprogram was not part of the language itself; it was handled instead by placing a compiler directive at the top of the file.

The language used the INTRINSIC keyword to allow external code to be called directly by giving it a local name. For instance, a machine-language library exposing a function to ring the console bell could be imported to an SPL program with an INTRINSIC declaration, after which the bell could be operated by using that name as if it were a native command.

In contrast to Pascal, where PROCEDURE and FUNCTION were separate concepts, SPL uses a more C-like approach, where any PROCEDURE can be prefixed with a type to turn it into a function. In keeping with the syntax of other ALGOL-like languages, the types of the parameters were listed after the name, not as part of it. For instance:

INTEGER PROCEDURE FACT(N); VALUE N; INTEGER N;

declares a function FACT that takes a parameter N, passed by value, that is an integer. The leading INTEGER indicates that the procedure itself returns an integer value.

Although frowned upon, ALGOL and Pascal allowed code to be labeled using a leading name ending with a colon, which could then be used as the target of loops and GOTO statements. One minor difference is that SPL required the label names to be declared in the variable section using the LABEL keyword. SPL added to this concept with the ENTRY statement, which allowed these labels to be further defined as "entry points" that could be accessed from the command line. Labels named in the entry statement(s) were exposed to the operating system and could be called from the RUN command. For instance, one could write a program containing string functions to convert to uppercase or lowercase, and then provide ENTRY points for these two, each invokable directly via the RUN command.

Data types

Where SPL differs most noticeably from ALGOL is that its data types are very machine-specific, based on the 3000's 16-bit big-endian word format. INTEGER is a 16-bit signed type, with 15 bits of value and the least significant bit as the sign. DOUBLE is a 32-bit integer, not a floating-point type. REAL is a 32-bit floating-point value with 22 bits for the mantissa and 9 for the exponent, while LONG is a 64-bit floating-point value with 54 bits of mantissa and 9 bits of exponent. BYTE is used for character processing, consisting of a 16-bit machine word holding two 8-bit characters. LOGICAL is a boolean type that stores a single bit in the most significant bit. There is no equivalent of a PACKED modifier as found in Pascal, so LOGICAL is somewhat wasteful of memory.

Like C, data is weakly typed, memory locations and variable storage are intermixed concepts, and one can access values directly through their locations. For instance, the code:

INTEGER A,B,C;
LOGICAL D=A+2;

defines three 16-bit integer variables, A, B and C, and then a LOGICAL, also a 16-bit value. The =, as in Pascal, means "is equivalent to", not "gets the value of", which uses := in ALGOL-like languages. So the second line states "declare a variable D that is in the same memory location as A+2", which in this case is also the location of the variable C. This allows the same value to be read as an integer via C or as a logical through D.
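For readers coming from modern languages, a rough C analogue of this overlay-style declaration may help. This is a sketch only, not SPL: the consecutive layout of A, B and C is made explicit here with an array, since a C compiler does not guarantee the declaration-order layout that SPL relied on.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Three 16-bit integers laid out consecutively, standing in for
       SPL's A, B and C, which were allocated in declaration order. */
    int16_t vars[3] = {0, 0, 0};           /* vars[0]=A, vars[1]=B, vars[2]=C */

    /* "LOGICAL D = A + 2": D names the word two slots past A,
       i.e. the same storage as C. In C we model that with a pointer. */
    uint16_t *d = (uint16_t *)&vars[2];

    vars[2] = -1;                          /* write C as a signed integer */
    printf("C as INTEGER: %d\n", vars[2]); /* prints -1 */
    printf("D as LOGICAL: 0x%04X\n", *d);  /* same bits, read unsigned: 0xFFFF */
    return 0;
}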
SPL's equivalence syntax may seem odd to modern readers, for whom memory is generally a black box, but it has a number of important uses in systems programming, where particular memory locations hold values from the underlying hardware. In particular, it allows one to define a variable that points to the front of a table of values, and then declare additional variables that point to individual values within the table. If the table location changes, only a single value has to change, the initial address, and all of the individual variables will automatically follow in their proper relative offsets.

Pointers were declared by adding the POINTER modifier to any variable declaration, and the memory location of a variable was taken with the @ operator. Thus a declaration like INTEGER POINTER P:=@A declares a pointer whose value contains the address of the variable A, not the value of A. @ can be used on either side of an assignment: P:=A puts the value of A into P, likely resulting in a dangling pointer; P:=@A makes P point to A; while @P:=A puts the value of A into the location currently pointed to by P. In a similar fashion, SPL includes C-like array support, in which the index variable is a number-of-words offset from the memory location set for the initial variable. Unlike C, SPL only provided one-dimensional arrays, and used parentheses as opposed to brackets.

Variables could also be declared EXTERNAL, in which case no local memory was set aside for them and the storage was assumed to be declared in another library. This mirrors the extern keyword in C.

Literals can be specified with various suffixes, and those without a suffix are assumed to be INTEGER; suffixes select the other numeric types, so the same digits could instead be read as a DOUBLE, a REAL or a LONG. String constants were delimited by double quotes, and double quotes within a line were escaped with a second double quote. Variable declarations could use constants to define an initial value, as in INTEGER A:=0. Note the use of the assign-to := rather than the is-a =. Additionally, SPL had a DEFINE keyword that allowed a string of text to be defined as a variable, with any instances of that variable in the code replaced by the literal string during compiles. This is similar to the #define keyword in C.

Memory segmentation

As was common in the era, the HP 3000 used a byte-oriented segmented memory model in which an address was a single 16-bit word, allowing code to access up to 65,536 bytes (or as they termed it, "half-words"). To allow larger amounts of memory to be accessed, a virtual memory system was used. When memory was accessed, the 16-bit address was prefixed with one of two 8-bit segment values, one for the program code (PB) and another for variable data. The result was a 24-bit address. Thus, while each program had access to a total of 128 kB at any one time, it could swap the segments to access a full 16 MB memory space.

SPL included a variety of support systems to allow programs to be easily segmented and then make that segmentation relatively invisible in the code. The primary mechanism was a compiler directive that defined which segment the following code should be placed in. Code went into a default segment unless the programmer added any number of additional named segments to organize the code into blocks.

Other features

SPL included a "bit-extraction" feature that allowed simplified bit fiddling. Any bit, or string of bits, in a word could be accessed with a bit-range syntax giving the start and end bit positions, x and y, numbered from 0 to 15. Extracting bits 8 through 15, for example, returned the lower byte of the word storing A. This format could be used to split and merge bits as needed.
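A small C sketch of the same idea, written with explicit shifts and masks. The helper name is invented for illustration, and bit 0 is taken as the most significant bit, matching the HP 3000's numbering; the exact SPL spelling of the bit-range notation is not reproduced here.

#include <stdio.h>
#include <stdint.h>

/* Extract the bit field of a 16-bit word running from bit position x
   to bit position y inclusive, where bit 0 is the most significant bit. */
static uint16_t bits(uint16_t word, int x, int y) {
    int width = y - x + 1;
    int shift = 15 - y;      /* distance of the field from the low end */
    return (uint16_t)((word >> shift) & ((1u << width) - 1));
}

int main(void) {
    uint16_t a = 0x1234;
    printf("bits 8..15 = 0x%02X\n", bits(a, 8, 15));  /* lower byte: 0x34 */
    printf("bits 0..7  = 0x%02X\n", bits(a, 0, 7));   /* upper byte: 0x12 */
    return 0;
}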
Operations were also provided for shifts and rotates, and these could be applied to any variable as part of an assignment.

Example

This simple program, from the 1984 version of the reference manual, shows most of the features of the SPL language. The program as a whole is delimited between the BEGIN and the final END.. It begins with the definition of a series of global variables, A, B and C, defines a single procedure, and then calls it twenty times. Note that the procedure does not have a BEGIN and END of its own because it contains only one line of actual code; the INTEGER X,Y,Z; is not considered part of the code itself, as it indicates the type of the three parameters being passed in on the line above and is considered part of that line.

BEGIN
INTEGER A:=0, B, C:=1;
PROCEDURE N(X,Y,Z);
   INTEGER X,Y,Z;
X:=X*(Y+Z);
FOR B:=1 UNTIL 20 DO
   N(A,B,C);
END.
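As a cross-check of the semantics, here is a rough C rendering of the example. Treating X as passed by reference is an inference from the FACT example earlier, where only a parameter marked VALUE was passed by value; that reading is an assumption, flagged in the comments.

#include <stdio.h>

/* C rendering of PROCEDURE N(X,Y,Z). X is passed via pointer to mimic
   SPL's by-reference parameters (no VALUE marker in the original);
   Y and Z are passed by value since the procedure never modifies them. */
static void n(int *x, int y, int z) {
    *x = *x * (y + z);
}

int main(void) {
    int a = 0, c = 1;
    for (int b = 1; b <= 20; b++)
        n(&a, b, c);
    printf("A = %d\n", a);  /* remains 0: zero times anything is zero */
    return 0;
}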
Kronos (computer)

Kronos is a series of 32-bit processor boards, and the workstations based on them, built on a proprietary hardware architecture developed in the mid-1980s in Akademgorodok, a research city in Siberia, by the Kronos Research Group (KRG) within the Modular Asynchronous Developable Systems (MARS) project at the Novosibirsk Computing Center, Siberian branch of the Academy of Sciences of the Soviet Union.

History

In 1984, the Kronos Research Group (KRG) was founded by four students of the Novosibirsk State University, two from the mathematics department (Dmitry "Leo" Kuznetsov, Alex Nedoria) and two from the physics department (Eugene Tarasov, Vladimir Vasekin). At that time, the main objective was to build home computers for the KRG members. In 1985, the group joined the Russian fifth-generation computer project START, in which Kronos became a platform for developing multiprocessor reconfigurable Modular Asynchronous Developable Systems (MARS), and played a lead role in developing the first Russian full 32-bit workstation and its software. Over seven years (1984–1991) the group designed and implemented:
- Kronos 2.1 and 2.2 – 32-bit processor boards for the DEC LSI-11
- Kronos 2.5 – a 32-bit processor board for Labtam computers
- Kronos 2.6 – a 32-bit workstation

The project START finished in 1988. During the post-START years (1988–1991), several Russian industrial organizations expressed interest in continuing the Kronos development, and some had been involved in facilitating the construction of Kronos and MARS prototypes, including the design of a Kronos-on-chip. However, changing funding levels and the chaotic economic situation during perestroika kept those plans from being realized.

Architecture

The Kronos instruction set architecture was based on Niklaus Wirth's Modula-2 workstation Lilith, developed at the Swiss Federal Institute of Technology (ETH Zurich) in Zurich, Switzerland, which in turn was inspired by the Xerox Alto developed at Xerox PARC. The Modula-2-based Kronos was quite amenable to the basic principles of MARS, as Modula-2 is fundamentally modular, allowing programs to be partitioned into units with relatively well-defined interfaces. These interfaces supported separate compiling of modules, and the separation of module specifications from their implementations.

The primary difference between Lilith and Kronos was that the processor of Lilith was 16-bit, while Kronos was 32-bit and incorporated several extensions to the instruction set to accommodate the inter-processor communication needed in MARS. Kronos satisfied many aspects of the reduced instruction set computer (RISC) design, although it was not pure RISC: the evaluation stack was used to evaluate expressions and to hold parameters for procedure calls. Since most executed instructions were encoded in a single byte, the object code for Kronos was very compact.

Although Kronos was a proprietary processor, it was well suited to applications which were sensitive to high programmability rather than to software compatibility. For example, embedded control systems require fast and reliable design of new original applications for controlling unique objects and processes. Modula-2 was then a perfect language for this purpose, and Kronos was a perfect processor to effectively run the Modula-2 software.
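Modula-2's split between a definition module (the visible interface) and an implementation module has a loose analogue in C's header/source separation. The following sketch, with names invented for illustration and nothing specific to Kronos, shows the same separate-compilation idea: clients see only the interface, and the implementation can be compiled independently.

/* counter.h -- the "definition module": only the interface is visible
   to clients, so the implementation can be compiled separately. */
#ifndef COUNTER_H
#define COUNTER_H
void counter_reset(void);
void counter_increment(void);
int  counter_value(void);
#endif

/* counter.c -- the "implementation module": hidden state plus the
   bodies of the procedures declared in the interface. */
#include "counter.h"
static int count;                 /* invisible outside this module */
void counter_reset(void)     { count = 0; }
void counter_increment(void) { count++; }
int  counter_value(void)     { return count; }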
Software

The Kronos software included:
- Versions of the proprietary operating system Excelsior
- Compilers for Modula-2, C, and Fortran
- CAD systems
- Other applications

Operating system

The Kronos workstation includes an operating system named Excelsior, developed by the Kronos Research Group (KRG). It is a single-user system based on Modula-2 modules. In design, it is similar to the OS Medos-2, developed for the Lilith workstation at ETH Zurich by Svend Erik Knudsen with advice from Niklaus Wirth.

External links
- History of the project (in Russian)
- The Kronos Research Group (recovered from the Internet Archive)
- A Brief History of Modula and Lilith
- Acquisition of a Kronos workstation and more by the National Museum of Science and Industry in London
- Historical source code from Kronos, a 198x USSR 32-bit workstation
- Emulator for the Kronos workstation (via the Internet Archive); runs on Windows NT, tested thereon successfully. Two logins are possible: sys or guest, both password-free.
- More documentation of Kronos (in Russian)
System resource

In computing, a system resource, or simply resource, is any physical or virtual component of limited availability within a computer system. All connected devices and internal system components are resources. Virtual system resources include files (concretely, file handles), network connections (concretely, network sockets), and memory areas.

Managing resources is referred to as resource management, and includes both preventing resource leaks (not releasing a resource when a process has finished using it) and dealing with resource contention (when multiple processes wish to access a limited resource); a short C sketch of leak-free resource handling follows this article. Computing resources are used in cloud computing to provide services through networks.

Major resource types
- Interrupt request (IRQ) lines
- Direct memory access (DMA) channels
- Port-mapped I/O
- Memory-mapped I/O
- Locks
- External devices
- External memory or objects, such as memory managed in native code, from Java; or objects in the Document Object Model (DOM), from JavaScript

General resources
- CPU, both time on a single CPU and use of multiple CPUs – see multitasking
- Random-access memory and virtual memory – see memory management
- Hard disk drives, including space generally, contiguous free space (such as for swap space), and use of multiple physical devices ("spindles"), since using multiple devices allows parallelism
- Cache space, including CPU cache and MMU cache (translation lookaside buffer)
- Network throughput
- Electrical power
- Input/output operations
- Randomness

Categories

Some resources, notably memory and storage space, have a notion of "location", and one can distinguish contiguous allocations from non-contiguous allocations: for example, allocating 1 GB of memory in a single block, versus allocating it in 1,024 blocks each of size 1 MB. The latter is known as fragmentation, and often severely impacts performance, so contiguous free space is a subcategory of the general resource of storage space.

One can also distinguish compressible resources from incompressible resources. Compressible resources, generally throughput ones such as CPU and network bandwidth, can be throttled benignly: the user will be slowed proportionally to the throttling, but will otherwise proceed normally. Other resources, generally storage ones such as memory, cannot be throttled without either causing failure (if a process cannot allocate enough memory, it typically cannot run) or severe performance degradation, such as due to thrashing (if a working set does not fit into memory and requires frequent paging, progress will slow significantly). The distinction is not always sharp; as mentioned, a paging system can allow main memory (primary storage) to be compressed (by paging to a hard drive (secondary storage)), and some systems allow discardable memory for caches, which is compressible without disastrous performance impact.

See also
- Computational resource
- Linear scheduling method
- Sequence step algorithm
- System monitor
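To make the leak-prevention point concrete, here is a minimal C sketch of disciplined acquire-and-release of file handles, one of the limited resources named above. The function name is invented for illustration; the key pattern is that every exit path releases whatever was acquired before it.

#include <stdio.h>
#include <stdlib.h>

/* Copy one file to another, releasing both handles on every path.
   Forgetting fclose() here would leak a file handle on each call:
   exactly the "resource leak" described above. */
static int copy_file(const char *src, const char *dst) {
    FILE *in = fopen(src, "rb");
    if (in == NULL)
        return -1;

    FILE *out = fopen(dst, "wb");
    if (out == NULL) {
        fclose(in);               /* release the first resource on failure */
        return -1;
    }

    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);

    fclose(out);                  /* release in reverse order of acquisition */
    fclose(in);
    return 0;
}

int main(int argc, char **argv) {
    if (argc != 3)
        return EXIT_FAILURE;
    return copy_file(argv[1], argv[2]) == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}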
Quantian

Quantian OS was a remastering of Knoppix/Debian for the computational sciences. The environment was a self-configuring, directly bootable CD/DVD that turned any PC or laptop (provided it could boot from CD-ROM/DVD) into a Linux workstation. Quantian also incorporated clusterKnoppix and added support for openMosix, including remote booting of light clients in an openMosix terminal-server context, permitting rapid setup of an SMP cluster computer.

Applications

Numerous software packages for everyday and scientific purposes come with Quantian. After installation, the total package volume is about 2.7 GB (for the detailed package list, see the list of all available packages). The packages for "home users" include:
- KDE, the default desktop environment, and its components
- The XMMS, Kaffeine and xine media players
- Internet access software, including the KPPP dialer, ISDN utilities and WLAN support
- The Mozilla, Mozilla Firefox and Konqueror web browsers
- K3b, for CD (and DVD) management
- The GIMP, an image-manipulation program
- Tools for data rescue and system repair
- Network analysis and administration tools
- OpenOffice.org
- Kile, LyX

Additionally, the scientific applications and programs in Quantian include:
- R, statistical computing software
- Octave, a Matlab clone
- Scilab, another Matlab clone
- GSL, the GNU Scientific Library
- The Maxima computer algebra system
- The Python programming language with SciPy
- The Fityk curve fitter
- Ghemical, for computational chemistry
- TeXmacs, for wysiwyg scientific editing
- The GRASS geographic information system
- The OpenDX and MayaVi data visualisation systems
- Gnuplot, a command-line driven interactive data and function plotting utility
- LabPlot, an application for plotting of data sets and functions

References
- Quantian home page
- While discontinued, available without support on this archiveOS page
Computer configuration

In communications or computer systems, the configuration of a system refers to the arrangement of each of its functional units, according to their nature, number and chief characteristics. Often, configuration pertains to the choice of hardware, software, firmware, and documentation. Along with its architecture, the configuration of a computer system affects both its function and performance.

See also
- Auto-configuration
- Configuration management – in multiple disciplines, a practice for managing change
- Software configuration management
- Configuration file – in software, a data resource used for program initialization
- Configure script (computing)
- Configurator
- Settings (Windows)

References
- Federal Standard 1037C

External links
- Elektra Initiative for Linux configurations
MacOS version history

The history of macOS, Apple's current Mac operating system formerly named Mac OS X until 2012 and then OS X until 2016, began with the company's project to replace its "classic" Mac OS. That system, up to and including its final release Mac OS 9, was a direct descendant of the operating system Apple had used in its Macintosh computers since their introduction in 1984. However, the current macOS is a Unix operating system built on technology that had been developed at NeXT from the 1980s until Apple purchased the company in early 1997.

Although it was originally marketed as simply "version 10" of the Mac OS (indicated by the Roman numeral "X"), it has a completely different codebase from Mac OS 9, as well as substantial changes to its user interface. The transition was a technologically and strategically significant one. To ease the transition, versions through 10.4 were able to run Mac OS 9 and its applications in a compatibility layer.

MacOS was first released in 1999 as Mac OS X Server 1.0, with a widely released desktop version, Mac OS X 10.0, following in March 2001. Since then, several more distinct desktop and server editions of macOS have been released. Starting with Mac OS X 10.7 Lion, macOS Server is no longer offered as a separate operating system; instead, server management tools are available for purchase as an add-on. Starting with the Intel build of Mac OS X 10.5 Leopard, most releases have been certified as Unix systems conforming to the Single Unix Specification.

Lion was sometimes referred to by Apple as "Mac OS X Lion" and sometimes referred to as "OS X Lion", without the "Mac"; Mountain Lion was consistently referred to as just "OS X Mountain Lion", with the "Mac" being completely dropped. The operating system was further renamed to "macOS" starting with macOS Sierra. macOS retained the major version number 10 throughout its development history until the release of macOS 11 Big Sur in 2020; releases of macOS have also been named after big cats (versions 10.0–10.8) or locations in California (10.9–present). A new macOS, Monterey, was announced during WWDC on June 7, 2021.

Development

Development outside Apple

After Apple removed Steve Jobs from management in 1985, he left the company and attempted to create the "next big thing", with funding from Ross Perot and himself. The result was the NeXT Computer. It was the first workstation to include a digital signal processor (DSP) and a high-capacity optical disc drive, and its hardware was advanced for its time, but it was expensive relative to the rapidly commoditizing workstation market and marred by design problems. The hardware was phased out in 1993; however, the company's object-oriented operating system NeXTSTEP had a more lasting legacy.

NeXTSTEP was based on the Mach kernel developed at CMU (Carnegie Mellon University) and BSD, an implementation of Unix dating back to the 1970s. It featured an object-oriented programming framework based on the Objective-C language. This environment is known today in the Mac world as Cocoa. It also supported the innovative Enterprise Objects Framework database access layer and WebObjects application server development environment, among other notable features. All but abandoning the idea of an operating system, NeXT managed to maintain a business selling WebObjects and consulting services, only ever making modest profits in its last few quarters as an independent company.
NeXTSTEP underwent an evolution into OPENSTEP, which separated the object layers from the operating system below, allowing it to run with less modification on other platforms. OPENSTEP was, for a short time, adopted by Sun and HP. However, by this point, a number of other companies, notably Apple, IBM, Microsoft, and even Sun itself, were claiming they would soon be releasing similar object-oriented operating systems and development tools of their own. Some of these efforts, such as Taligent, did not fully come to fruition; others, like Java, gained widespread adoption. On February 4, 1997, Apple Computer acquired NeXT for $427 million, and used OPENSTEP as the basis for Mac OS X, as it was called at the time. Traces of the NeXT software heritage can still be seen in macOS. For example, in the Cocoa development environment, the Objective-C library classes have "NS" prefixes, and the HISTORY section of the manual page for the defaults command in macOS straightforwardly states that the command "First appeared in NeXTStep."

Internal development

Meanwhile, Apple was facing commercial difficulties of its own. The decade-old Macintosh System Software had reached the limits of its single-user, co-operative multitasking architecture, and its once-innovative user interface was looking increasingly outdated. A massive development effort to replace it, known as Copland, was started in 1994, but was generally perceived outside Apple to be a hopeless case due to political infighting and conflicting goals. By 1996, Copland was nowhere near ready for release, and the project was eventually cancelled. Some elements of Copland were incorporated into Mac OS 8, released on July 26, 1997.

After considering the purchase of BeOS, a multimedia-enabled, multitasking OS designed for hardware similar to Apple's, the company decided instead to acquire NeXT and use OPENSTEP as the basis for their new OS. Avie Tevanian took over OS development, and Steve Jobs was brought on as a consultant. At first, the plan was to develop a new operating system based almost entirely on an updated version of OPENSTEP, with the addition of a virtual machine subsystem, known as the Blue Box, for running "classic" Macintosh applications. The result was known by the code name Rhapsody, slated for release in late 1998.

Apple expected that developers would port their software to the considerably more powerful OPENSTEP libraries once they learned of its power and flexibility. Instead, several major developers such as Adobe told Apple that this would never occur, and that they would rather leave the platform entirely. This "rejection" of Apple's plan was largely the result of a string of previous broken promises from Apple; after watching one "next OS" after another disappear and Apple's market share dwindle, developers were not interested in doing much work on the platform at all, let alone a re-write.

Changed direction under Jobs

Apple's financial losses continued and the board of directors lost confidence in CEO Gil Amelio, asking him to resign. The board asked Steve Jobs to lead the company on an interim basis, essentially giving him carte blanche to make changes to return the company to profitability. When Jobs announced at the Worldwide Developers Conference that what developers really wanted was a modern version of the Mac OS, and Apple was going to deliver it, he was met with applause. Over the next two years, a major effort was applied to porting the original Macintosh APIs to Unix libraries known as Carbon.
Mac OS applications could be ported to Carbon without the need for a complete re-write, making them operate as native applications on the new operating system. Meanwhile, applications written using the older toolkits would be supported using the "Classic" Mac OS 9 environment. Support for C, C++, Objective-C, Java, and Python was added, furthering developer comfort with the new platform.

During this time, the lower layers of the operating system (the Mach kernel and the BSD layers on top of it) were re-packaged and released under the Apple Public Source License. They became known as Darwin. The Darwin kernel provides a stable and flexible operating system, which takes advantage of the contributions of programmers and independent open-source projects outside Apple; however, it sees little use outside the Macintosh community. During this period, the Java programming language had increased in popularity, and an effort was started to improve Mac Java support. This consisted of porting a high-speed Java virtual machine to the platform, and exposing macOS-specific "Cocoa" APIs to the Java language.

The first release of the new OS, Mac OS X Server 1.0, used a modified version of the Mac OS GUI, but all client versions starting with Mac OS X Developer Preview 3 used a new theme known as Aqua. Aqua was a substantial departure from the Mac OS 9 interface, which had evolved with little change from that of the original Macintosh operating system: it incorporated full-color scalable graphics, anti-aliasing of text and graphics, simulated shading and highlights, transparency and shadows, and animation. A new feature was the Dock, an application launcher which took advantage of these capabilities. Despite this, Mac OS X maintained a substantial degree of consistency with the traditional Mac OS interface and Apple's own Human Interface Guidelines, with its pull-down menu at the top of the screen, familiar keyboard shortcuts, and support for a single-button mouse. The development of Aqua was delayed somewhat by the switch from OpenStep's Display PostScript engine to one developed in-house that was free of any license restrictions, known as Quartz.

Releases

With the exception of Mac OS X Server 1.0 and the original public beta, the first several macOS versions were named after big cats. Prior to its release, version 10.0 was code-named "Cheetah" internally at Apple, and version 10.1 was code-named "Puma". After the code name "Jaguar" for version 10.2 received publicity in the media, Apple began openly using the names to promote the operating system: 10.3 was marketed as "Panther", 10.4 as "Tiger", 10.5 as "Leopard", 10.6 as "Snow Leopard", 10.7 as "Lion", and 10.8 as "Mountain Lion". "Panther", "Tiger", and "Leopard" were registered as trademarks. Apple registered "Lynx" and "Cougar", but these were allowed to lapse.

Apple instead used the names of iconic locations in California for subsequent releases: 10.9 Mavericks is named after Mavericks, a popular surfing destination; 10.10 Yosemite is named after Yosemite National Park; 10.11 El Capitan is named for the El Capitan rock formation in Yosemite National Park; 10.12 Sierra is named for the Sierra Nevada mountain range; and 10.13 High Sierra is named for the area around the High Sierra Camps.

Public Beta: "Kodiak"

On September 13, 2000, Apple released a $29.95 "preview" version of Mac OS X (internally codenamed Kodiak) in order to gain feedback from users.
It marked the first public availability of the Aqua interface, and Apple made many changes to the UI based on customer feedback. Mac OS X Public Beta expired and ceased to function in spring 2001.

Version 10.0: "Cheetah"

On March 24, 2001, Apple released Mac OS X 10.0 (internally codenamed Cheetah). The initial version was slow, incomplete, and had very few applications available at the time of its launch, mostly from independent developers. While many critics suggested that the operating system was not ready for mainstream adoption, they recognized the importance of its initial launch as a base on which to improve. Simply releasing Mac OS X was received by the Macintosh community as a great accomplishment, for attempts to completely overhaul the Mac OS had been underway since 1996, and delayed by countless setbacks. Following some bug fixes, kernel panics became much less frequent.

Version 10.1: "Puma"

Mac OS X 10.1 (internally codenamed Puma) was released on September 25, 2001. It had better performance and provided missing features, such as DVD playback. Apple released 10.1 as a free upgrade CD for 10.0 users, and also released an upgrade CD for Mac OS 9. On January 7, 2002, Apple announced that Mac OS X was to be the default operating system for all Macintosh products by the end of that month.

Version 10.2: "Jaguar"

On August 23, 2002, Apple followed up with Mac OS X 10.2 Jaguar, the first release to use its code name as part of the branding. It brought great raw performance improvements, a sleeker look, and many powerful user-interface enhancements (over 150, according to Apple), including Quartz Extreme for compositing graphics directly on an ATI Radeon or Nvidia GeForce2 MX AGP-based video card with at least 16 MB of VRAM, a system-wide repository for contact information in the new Address Book, and an instant messaging client named iChat. The Happy Mac, which had appeared during the Mac OS startup sequence for almost 18 years, was replaced with a large grey Apple logo with the introduction of Mac OS X 10.2.

Version 10.3: "Panther"

Mac OS X Panther was released on October 24, 2003. In addition to providing much improved performance, it also incorporated the most extensive update yet to the user interface. Panther included as many or more new features as Jaguar had the year before, including an updated Finder incorporating a brushed-metal interface, fast user switching, Exposé (a window manager), FileVault, Safari, iChat AV (which added videoconferencing features to iChat), improved Portable Document Format (PDF) rendering, and much greater Microsoft Windows interoperability. Support for some early G3 computers such as the Power Macintosh and PowerBook was discontinued.

Version 10.4: "Tiger"

Mac OS X Tiger was released on April 29, 2005. Apple stated that Tiger contained more than 200 new features. As with Panther, certain older machines were no longer supported; Tiger requires a Mac with a built-in FireWire port. Among the new features, Tiger introduced Spotlight, Dashboard, Smart Folders, an updated Mail program with Smart Mailboxes, QuickTime 7, Safari 2, Automator, VoiceOver, Core Image and Core Video. The initial release of the Apple TV used a modified version of Tiger with a different graphical interface and fewer applications and services. On January 10, 2006, Apple released the first Intel-based Macs along with the 10.4.4 update to Tiger.
This operating system functioned identically on the PowerPC-based Macs and the new Intel-based machines, with the exception of the Intel release dropping support for the Classic environment. Only PowerPC Macs can be booted from retail copies of the Tiger client DVD, but there is a Universal DVD of Tiger Server 10.4.7 (8K1079) that can boot both PowerPC and Intel Macs.

Version 10.5: "Leopard"

Mac OS X Leopard was released on October 26, 2007. Apple called it "the largest update of Mac OS X". Leopard supports both PowerPC- and Intel x86-based Macintosh computers; support for the G3 processor was dropped, and the G4 processor required a minimum clock rate of 867 MHz and at least 512 MB of RAM to be installed. The single DVD works for all supported Macs (including 64-bit machines). New features include a new look, an updated Finder, Time Machine, Spaces, Boot Camp pre-installed, full support for 64-bit applications (including graphical applications), new features in Mail and iChat, and a number of new security features. Leopard is an Open Brand UNIX 03 registered product on the Intel platform. It was also the first BSD-based OS to receive UNIX 03 certification. Leopard dropped support for the Classic Environment and all Classic applications, and was the final version of Mac OS X to support the PowerPC architecture.

Version 10.6: "Snow Leopard"

Mac OS X Snow Leopard was released on August 28, 2009, the last version to be available on disc. Rather than delivering big changes to the appearance and end-user functionality like the previous releases of Mac OS X, the development of Snow Leopard was deliberately focused on "under the hood" changes, increasing the performance, efficiency, and stability of the operating system. For most users, the most noticeable changes are these: the disk space that the operating system frees up after a clean installation compared to Mac OS X 10.5 Leopard, a more responsive Finder rewritten in Cocoa, faster Time Machine backups, more reliable and user-friendly disk ejects, a more powerful version of the Preview application, as well as a faster Safari web browser. An update introduced support for the Mac App Store, Apple's digital distribution platform for macOS applications and subsequent macOS upgrades. Snow Leopard only supports machines with Intel CPUs, requires at least 1 GB of RAM, and drops default support for applications built for the PowerPC architecture (Rosetta can be installed as an additional component to retain support for PowerPC-only applications).

Version 10.7: "Lion"

Mac OS X Lion was released on July 20, 2011. It brought developments made in Apple's iOS, such as an easily navigable display of installed applications (Launchpad) and a greater use of multi-touch gestures, to the Mac. This release removed Rosetta, making it incapable of running PowerPC applications. It dropped support for 32-bit Intel processors and requires 2 GB of memory. Changes made to the GUI (graphical user interface) include the Launchpad (similar to the home screen of iOS devices), auto-hiding scrollbars that only appear when they are being used, and Mission Control, which unifies Exposé, Spaces, Dashboard, and full-screen applications within a single interface. Apple also made changes to applications: they resume in the same state as they were before they were closed (similar to iOS). Documents auto-save by default.

Version 10.8: "Mountain Lion"

OS X Mountain Lion was released on July 25, 2012.
It incorporates some features seen in iOS 5, which include Game Center, support for iMessage in the new Messages messaging application, and Reminders as a to-do list app separate from iCal (which is renamed Calendar, like the iOS app). It also includes support for storing iWork documents in iCloud. 2 GB of memory is required. Notification Center makes its debut in Mountain Lion as a desktop version of the feature found in iOS 5.0 and higher: application pop-ups are now concentrated in the corner of the screen, and the Center itself is pulled from the right side of the screen. Mountain Lion also includes more Chinese features, including support for Baidu as an option for the Safari search engine. Notes is added, as an application separate from Mail, syncing with its iOS counterpart through the iCloud service. Messages, an instant messaging software application, replaces iChat.

Version 10.9: "Mavericks"

OS X Mavericks was released on October 22, 2013, as a free update through the Mac App Store worldwide. It placed emphasis on battery life, Finder enhancements, other enhancements for power users, and continued iCloud integration, as well as bringing more of Apple's iOS apps to the OS X platform. iBooks and Apple Maps applications were added. Mavericks requires 2 GB of memory to operate. It is the first version named under Apple's then-new theme of places in California, dubbed Mavericks after the surfing location. Unlike previous versions of OS X, which had progressively decreasing prices since 10.6, 10.9 was available at no charge to all users of compatible systems running Snow Leopard (10.6) or later, beginning Apple's policy of free upgrades for life on its operating system and business software.

Version 10.10: "Yosemite"

OS X Yosemite was released to the general public on October 16, 2014, as a free update through the Mac App Store worldwide. It featured a major overhaul of the user interface, replacing skeuomorphism with flat graphic design and blurred translucency effects, following the aesthetic introduced with iOS 7. It introduced features called Continuity and Handoff, which allow for tighter integration between paired OS X and iOS devices: the user can handle phone calls or text messages on either their Mac or their iPhone, and edit the same Pages document on either their Mac or their iPad. A later update of the OS included Photos as a replacement for iPhoto and Aperture.

Version 10.11: "El Capitan"

OS X El Capitan was revealed on June 8, 2015, during the WWDC keynote speech. It was made available as a public beta in July and was released publicly on September 30, 2015. Apple described this release as containing "Refinements to the Mac Experience" and "Improvements to System Performance" rather than new features. Refinements include public transport built into the Maps application, GUI improvements to the Notes application, as well as the adoption of San Francisco as the system font. Metal, Apple's graphics API, debuted in this operating system, being available to "all Macs since 2012".

Version 10.12: "Sierra"

macOS Sierra was announced on June 13, 2016, during the WWDC keynote speech. The update brought Siri to macOS, featuring several Mac-specific features, like searching for files. It also allowed websites to support Apple Pay as a method of transferring payment, using either a nearby iOS device or Touch ID to authenticate.
iCloud also received several improvements, such as the ability to store a user's Desktop and Documents folders on iCloud so they could be synced with other Macs on the same Apple ID. It was released publicly on September 20, 2016.

Version 10.13: "High Sierra"

macOS High Sierra was announced on June 5, 2017, during the WWDC keynote speech. It was released on September 25, 2017. The release includes many under-the-hood improvements, including a switch to the Apple File System (APFS), the introduction of Metal 2, support for HEVC video, and improvements to VR support. In addition, numerous changes were made to standard applications, including Photos, Safari, Notes, and Spotlight.

Version 10.14: "Mojave"

macOS Mojave was announced on June 4, 2018, during the WWDC keynote speech. It was released on September 24, 2018. Some of the key new features were the Dark mode, Desktop stacks and Dynamic Desktop, which changes the desktop background image to correspond to the user's current time of day.

Version 10.15: "Catalina"

macOS Catalina was announced on June 3, 2019, during the WWDC keynote speech. It was released on October 7, 2019. It primarily focuses on updates to built-in apps, such as replacing iTunes with separate Music, Podcasts, and TV apps, redesigned Reminders and Books apps, and a new Find My app. It also features Sidecar, which allows the user to use an iPad as a second screen for their computer, or even to simulate a graphics tablet with an Apple Pencil. It is the first version of macOS not to support 32-bit applications. The Dashboard application was also removed in the update. Since macOS Catalina, iOS apps can run on macOS via Project Catalyst, but this requires each app to be made compatible, unlike the later Apple Silicon Macs, which can run iOS apps by default.

Version 11: "Big Sur"

macOS Big Sur was announced on June 22, 2020, during the WWDC keynote speech. It was released on November 12, 2020. The major version number changed for the first time since "Mac OS X" was released, making it macOS 11. It brings ARM support, new icons, GUI changes to the system, and other bug fixes.

Version 12: "Monterey"

macOS Monterey was announced on June 7, 2021, during the WWDC keynote speech. It was released on October 25, 2021. macOS Monterey introduces new features such as Universal Control, AirPlay to Mac, the Shortcuts application, and more. Universal Control allows users to use a single keyboard and mouse to move between devices. AirPlay now allows users to present and share almost anything. The Shortcuts app was also introduced to macOS, giving users access to galleries of pre-built shortcuts designed for Macs, a service brought from iOS. Users can now also set up their own shortcuts, among other things.

See also
- Timeline of Macintosh operating systems
- Timeline of macOS versions
- Macintosh operating systems
- Architecture of macOS
- List of macOS components
- iOS version history
Picotux

The picotux is a single-board computer launched in 2005, running Linux. There are several different kinds of picotux available, but the main one is the picotux 100. The picotux was released for availability on 18 May 2005. It measures 35 mm × 19 mm × 19 mm, just barely larger than an 8P8C modular connector.

Technology

The picotux 100 runs on a 55 MHz 32-bit ARM7 Netsilicon NS7520 processor, with 2 MB of flash memory (750 KB of which contains the OS) and 8 MB of SDRAM. The operating system is μClinux 2.4.27, big endian. BusyBox 1.0 is used as the main shell. The picotux system draws only 250 mA and runs at 3.3 V ±5%. Two communication interfaces are provided: 10/100 Mbit/s half/full-duplex Ethernet and a serial port running at up to 230,400 bit/s (a configuration sketch for such a port follows this article). Five additional lines can be used for either general input/output or serial handshaking.

External links
- Picotux.com

See also
- Microcontroller
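As referenced above, here is a minimal C sketch of configuring a Linux serial port for 230,400 bit/s, 8N1, the picotux 100's maximum serial rate. The device path is an assumption and would depend on the actual system; the calls themselves are standard POSIX/Linux termios.

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Open and configure a serial port for 230,400 bit/s, 8 data bits,
   no parity, one stop bit. Returns the file descriptor, or -1 on error. */
int open_serial(const char *path) {
    int fd = open(path, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    if (tcgetattr(fd, &tio) != 0) {
        close(fd);
        return -1;
    }
    cfmakeraw(&tio);                   /* raw bytes: no echo, no line editing */
    cfsetispeed(&tio, B230400);
    cfsetospeed(&tio, B230400);
    tio.c_cflag &= ~(PARENB | CSTOPB); /* no parity, one stop bit */
    tio.c_cflag |= CS8 | CLOCAL | CREAD;

    if (tcsetattr(fd, TCSANOW, &tio) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}

int main(void) {
    int fd = open_serial("/dev/ttyS0");  /* hypothetical device node */
    if (fd >= 0)
        close(fd);
    return fd >= 0 ? 0 : 1;
}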
Windows shell

The Windows shell is the graphical user interface for the Microsoft Windows operating system. Its readily identifiable elements consist of the desktop, the taskbar, the Start menu, the task switcher and the AutoPlay feature. On some versions of Windows, it also includes Flip 3D and the charms. In Windows 10, the Windows Shell Experience Host interface drives visuals like the Start menu, Action Center, taskbar, and Task View/Timeline. However, the Windows shell also implements a shell namespace that enables computer programs running on Windows to access the computer's resources via the hierarchy of shell objects. "Desktop" is the top object of the hierarchy; below it there are a number of files and folders stored on the disk, as well as a number of special folders whose contents are either virtual or dynamically created. Recycle Bin, Libraries, Control Panel, This PC and Network are examples of such shell objects. The Windows shell, as it is known today, is an evolution of what began with Windows 95, released in 1995. It is intimately identified with File Explorer, a Windows component that can browse the whole shell namespace.

Features

Desktop

The Windows Desktop is a full-screen window rendered behind all other windows. It hosts the user's wallpaper and an array of computer icons representing:
- Files and folders: Users and software may store computer files and folders on the Windows desktop. Naturally, on a newly installed version of Windows, such items do not exist. Software installers commonly place files known as shortcuts on the desktop, allowing users to launch installed software. Users may store personal documents on the desktop.
- Special folders: Apart from ordinary files and folders, special folders (also known as "shell folders") may appear on the desktop. Unlike ordinary folders, special folders do not point to an absolute location on a hard disk drive. Rather, they may open a folder whose location differs from computer to computer (e.g. Documents), a virtual folder whose contents are an aggregate of several folders on disk (e.g. Recycle Bin or Libraries), or a folder window whose contents are not files but rather user interface elements rendered as icons for convenience (e.g. Network). They may even open windows that do not resemble a folder at all (e.g. Control Panel).

Windows Vista and Windows 7 (and the corresponding versions of Windows Server) allowed Windows Desktop Gadgets to appear on the desktop.

Taskbar

The Windows taskbar is a toolbar-like element that, by default, appears as a horizontal bar at the bottom of the desktop. It may be relocated to the top, left or right edges of the screen. Starting with Windows 98, its size can be changed. The taskbar can be configured to stay on top of all applications or to collapse and hide when it is not used. Depending on the version of operating system installed, the following elements may appear on the taskbar, respectively from left to right:
- Start button: Provides access to the Start menu. Removed in Windows 8 (but can be added back using third-party software) in favor of the Start charm (see below), only to be reinstated in Windows 8.1. Pictured as a Windows logo.
- Quick Links menu: Added in Windows 8 and Windows Server 2012. Invoked by right-clicking on the Start button, or by pressing the Windows key + X. Grants access to several frequently used features of Windows, such as accessing the desktop, Settings, Windows Command Processor, Windows PowerShell, and File Explorer.
- List of open windows: Along the length of the taskbar, open windows are represented by their corresponding program icons, and once pinned, they remain even after their respective windows are closed. Until Windows 7, the operating system displayed active windows as depressed buttons in this list. Starting with Windows 7, the icon for each open window is framed by a translucent box, and multiple open windows for the same program can be accessed by clicking the program's icon. When an open window's icon is hovered over with the mouse, a preview of the open window is shown above the icon. However, the taskbar can be changed to function more as it does with older versions of Windows. Starting from Windows 7, the open-window icons can be configured to show the program icon only, referred to as "combining taskbar buttons", or to give the program name alongside the program icon.
- Shortcuts: An update to Windows 95 and Windows NT 4 added a Quick Launch bar that can hold file, program, and action shortcuts, including by default the "show desktop" command. Windows 7 merged this area into the list of open windows by adding "pinning" and "jump list" features.
- Deskbands: Toolbars provided by Windows or other programs for easier access to those programs' functions.
- Notification area: Allows programs to display icons representing their status as well as pop-up notifications associated with those icons. By default, the Windows volume control, network status, Action Center, and date and time are displayed in this area. Windows 11 combines the notification center and clock/calendar into one menu.
- "Show desktop" button: Allows users to access their desktops. It was moved from the left of the taskbar, where it was a Quick Launch shortcut, to the rightmost side as its own dedicated hover button in Windows 7. Not initially visible in Windows 8. Once the mouse cursor hovers over it for a second, it makes all windows transparent as long as the pointer stays over the button, thus showing the desktop without switching to it (this feature requires Aero). Clicking the button dismisses all open windows and transfers the focus to the desktop; clicking it again before selecting any other window reverts the action. This feature is also available on Windows 8, 8.1, 10, and 11.
- Task View: A function in Windows 10 and 11 allowing the user to view and manage open windows and virtual desktops. The 1803 version includes the Timeline, adding the ability to view and open previously used apps over a certain period of time. Task View can be accessed by pressing the Task View button on the taskbar, or by pressing Windows key+Tab on the keyboard. Timeline was removed in Windows 11.
- Cortana and Search: Users can utilize Microsoft's Cortana virtual assistant, which enables internet searches, searches for apps and features on the PC, and searches for files and documents. Cortana can be accessed by clicking the search bar, pressing the microphone button, saying "Hey Cortana", or by pressing Windows key+C on the keyboard. Searches can also be initiated by pressing the search bar, or by pressing Windows key+Q on the keyboard.
- Action Center: Introduced in Windows 7, the Action Center gave notifications and tips on boosting computer performance and security. In Windows 10, the Action Center serves as a place for all notifications to reside, as well as the location of frequently used settings, such as screen brightness, wireless connectivity, VPNs, Bluetooth, projector connections, and wireless display connections.
Replacing the charms from Windows 8, the Windows 10 Action Center can be accessed by pressing the speech-bubble icon on the taskbar, by pressing Windows key+A on the keyboard, or, if using a touchscreen, by swiping from the right. In Windows 11, the Action Center was removed in favor of the Quick Settings menu and the notification center; Windows key+A now opens Quick Settings, while Windows key+N opens the notification center.
- Widgets: Windows 11 introduced a "Widgets" feature which replaces the functionality of live tiles seen in the Windows 8 and 10 Start menus. By signing in with a Microsoft account, the user can personalize the information they wish to see in the Widgets panel, including weather, news, sports, calendar events, etc. Widgets are not a replacement for the Desktop Gadgets found in Windows Vista and Windows 7.
- Quick Settings: A taskbar menu introduced in Windows 11 that unifies the functionality of Windows 10's Action Center and system tray icons. Network settings, battery, and sound settings can be accessed by clicking on the Quick Settings menu, as well as accessibility options, a Bluetooth toggle, screen brightness, Focus Assist, and other features. Media playback controls are now housed in the Quick Settings menu instead of a hovering menu as in Windows 10.

Task switching

The task switcher is a feature present in Windows 3.0 and all subsequent versions of Windows. It allows a user to cycle through existing application windows by holding down the Alt key and tapping the Tab key. Starting with Windows 95, as long as the Alt key is pressed, a list of active windows is displayed, allowing the user to cycle through the list by tapping the Tab key. An alternative to this form of switching is using the mouse to click on a visible portion of an inactive window. However, Alt+Tab may be used to switch out of a full-screen window. This is particularly useful in video games that lock, restrict or alter mouse interactions for the purpose of the game. Starting with Windows Vista, the Windows Desktop is included in the list and can be activated this way.

Windows 7 introduced Aero Flip (renamed Windows Flip in Windows 8). While the Alt+Tab combination is held down, Aero Flip causes only the contents of the selected window to be displayed; the remaining windows are replaced with transparent glass-like sheets that give an impression of where each inactive window is located.

Windows 8 introduced Metro-style apps, which did not appear when Alt+Tab was pressed. (They had to be switched with their own dedicated task switcher, activated through the Windows key+Tab combination.) Windows 8.1 extended Alt+Tab to manage the Metro-style apps as well.

Windows 10 and 11 have a unified task switcher called Task View, which manages not only application windows but virtual desktops as well.

Aero Flip 3D

Flip 3D is a supplemental task switcher. It was introduced with Windows Vista and removed in Windows 8. It is invoked by holding down the Windows key and tapping the Tab key. As long as the Windows key remains pressed, Windows displays all application windows, including the Desktop, in an isometric view, diagonally across the screen from the top left corner to the bottom right corner. The active window at the time of pressing the keys is placed in front of the others. This view is maintained while the Windows key is held down, and further taps of the Tab key cycle through the open windows, so that the user can preview them. When the Windows key is released, the Flip 3D view is dismissed and the selected window comes to the front and into focus.
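The window list that task switchers cycle through can be approximated with the Win32 API. The following C sketch enumerates visible top-level windows; it is an illustration only, since the real Alt+Tab list applies further filtering (window ownership, tool-window styles, cloaking) beyond what is shown here.

#include <windows.h>
#include <stdio.h>

/* Callback invoked once per top-level window; prints visible, titled ones. */
static BOOL CALLBACK on_window(HWND hwnd, LPARAM lparam) {
    char title[256];
    (void)lparam;
    if (IsWindowVisible(hwnd) &&
        GetWindowTextA(hwnd, title, sizeof title) > 0) {
        printf("%p  %s\n", (void *)hwnd, title);
    }
    return TRUE;   /* returning TRUE continues the enumeration */
}

int main(void) {
    EnumWindows(on_window, 0);
    return 0;
}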
Charms

Windows 8 added a bar containing a set of five shortcuts known as the "charms", invoked by moving the mouse cursor into the top or bottom right-hand corners of the screen, or by swiping from the right edge of a compatible touchpad or touch screen. This feature was retained in 8.1. Windows 10 removed the charms and moved the commands associated with them into the system menu of each application. For users with touch screens, swiping from the right of the touch screen now shows the Action Center.

Start menu

Starting with Windows 95, all versions of Windows feature a form of Start menu, usually by this very same name. Depending on the version of Windows, the menu provides the following functions:
- Launching applications: The menu's primary function is to present a list of shortcuts for installed software, allowing users to launch them. Windows 8 and 10 utilize tiles in the Start menu, allowing the user to display icons of different sizes and arrange icons as the user chooses. Microsoft Store Metro-style apps can utilize live tiles, which are used to add visual effects and provide, for example, notifications for a specific app, such as email notifications for Windows Mail.
- Invoking special folders: Until Windows 8, the Start menu was a means of invoking special folders such as Computer, Network, Control Panel, etc. In Windows 8 and Windows Server 2012, the only special folder that can be invoked from the Start screen is the desktop. Windows 10 restored this functionality.
- Searching: Starting with Windows Vista, searching for installed software, files and folders became a function of the Start menu. Windows 10 ended this tradition by moving the search into the taskbar.
- Managing power states: Logging off and shutting down have always been functions of the Start menu. In Windows 8, the shutdown function was moved out of the Start screen, only to be brought back in the Windows 8.1 Update (in April 2014) for devices with a sufficiently high screen resolution. Computer power states can also be managed by pressing Alt+F4 while focused on the desktop, or by pressing Ctrl+Alt+Del.

AutoPlay

AutoPlay is a feature introduced in Windows XP that examines newly inserted removable media for content and displays a dialog containing options related to the type and content of that media. The possible choices are provided by installed software: it is thus not to be confused with the related AutoRun feature, configured by a file on the media itself, although AutoRun is selectable as an AutoPlay option when both are enabled.

Relation with File Explorer

File Explorer is a Windows component that can browse the shell namespace. In other words, it can browse disks, files and folders as a file manager would, but it can also access Control Panel, dial-up network objects, and the other elements introduced above. In addition, the explorer.exe executable, which is responsible for launching File Explorer, is also responsible for launching the taskbar, the Start menu and part of the desktop. However, the task switcher, the charms, and AutoPlay operate even when all instances of the explorer.exe process are closed, and other computer programs can still access the shell namespace without it. Initially called Windows Explorer, its name was changed to File Explorer beginning with Windows 8, although the program name remains explorer.exe.

History

MS-DOS Executive

The first public demonstration of Windows, in 1983, had a simplistic shell called the Session Control Layer, which served as a constantly visible menu at the bottom of the screen.
Clicking on Run would display a list of programs that one could launch, and clicking on Session Control would display a list of programs already running so one could switch between them. Windows 1.0, shipped in November 1985, introduced MS-DOS Executive, a simple file manager that differentiated folders from files by showing their names in bold type. It lacked support for icons, which made the program somewhat faster than the file manager that came with Windows 3.0. Programs could be launched by double-clicking on them. Files could be filtered by executable type or by a user-selected wildcard, and the display mode could be toggled between full and compact descriptions. The file date column was not Y2K compliant. Windows 2.0 made no significant change to MS-DOS Executive.

Program Manager
Windows 3.0, introduced in May 1990, shipped with a new shell called Program Manager. Based on Microsoft's work with the OS/2 Desktop Manager, Program Manager sorted program shortcuts into groups. Unlike Desktop Manager, these groups were housed in a single window, in order to show off Microsoft's new Multiple Document Interface. Program Manager in Windows 3.1 introduced wrappable icon titles, along with the new Startup group, which Program Manager would check on launch, starting any programs contained within. Program Manager was also ported to Windows NT 3.1 and was retained through Windows NT 3.51.

Start menu
Windows 95 introduced a new shell. The desktop became an interactive area that could contain files (including file shortcuts), folders, and special folders such as My Computer, Network Neighborhood, and Recycle Bin. Windows Explorer, which replaced File Manager, opened both ordinary and special folders. The taskbar was introduced, which maintained buttons representing open windows, a digital clock, a notification area for background processes and their notifications, and the Start button, which invoked the Start menu. The Start menu contained links to settings, recently used files and, like its predecessor Program Manager, shortcuts and program groups. Program Manager was also included in Windows 95 for backward compatibility, in case the user disliked the new interface; it remained included with all versions of Windows up to and including Windows XP Service Pack 1. In SP2 and SP3, PROGMAN.EXE is just an icon library, and it was completely removed from Windows Vista in 2006. The new shell was also ported to Windows NT, initially released as the NewShell update for Windows NT 3.51 and then fully integrated into Windows NT 4.0.

Windows Desktop Update
In early 1996, Netscape announced that the next release of its browser, codenamed "Constellation", would completely integrate with Windows and add a new shell, codenamed "HomePort", which would present the same files and shortcuts no matter which machine a user logged into. Microsoft started working on a similar Internet Explorer release, codenamed "Nashville".
Internet Explorer 4.0 was redesigned and resulted in two products: the standalone Internet Explorer 4 and the Windows Desktop Update, which updated the shell with features such as Active Desktop, Active Channels, Web folders, desktop toolbars such as the Quick Launch bar, the ability to minimize windows by clicking their button on the taskbar, HTML-based folder customization, single-click launching, image thumbnails, folder infotips, web view in folders, Back and Forward navigation buttons, larger toolbar buttons with text labels, favorites, file attributes in Details view, and an address bar in Windows Explorer, among other features. It also introduced the My Documents shell folder. Later Windows releases, such as Windows 95C (OSR 2.5) and Windows 98, included Internet Explorer 4 and the features of the Windows Desktop Update already built in. Improvements were made in Windows 2000 and Windows ME, such as personalized menus, the ability to drag and sort menu items, a sort-by-name function in menus, cascading Start menu special folders, customizable toolbars for Explorer, auto-complete in the Windows Explorer address bar and Run box, display of file shortcut comments as tooltips, advanced file type association features, extensible columns in Details view (the IColumnProvider interface), icon overlays, a places bar in common dialogs, high-color notification area icons, and a search pane in Explorer.

Start menu and taskbar changes
Windows XP introduced a new Start menu, with shortcuts to shell locations on the right and a list of most frequently used applications on the left. It also grouped taskbar buttons from the same program if the taskbar got too crowded, and hid notification icons if they had not been used for a while. For the first time, Windows XP hid most of the shell folders from the desktop by default, leaving only the Recycle Bin (although the user could get them back if desired). Windows XP also introduced numerous other shell enhancements. In the early days of the Longhorn project, an experimental sidebar, with plugins similar to taskbar plugins and a notifications history, was built into the shell. However, when Longhorn was reset, the integrated sidebar was discarded in favor of a separate executable file, sidebar.exe, which provided Web-enabled gadgets, thus replacing Active Desktop. Windows Vista introduced a searchable Start menu and live taskbar previews to the Windows shell. It also introduced a redesigned Alt+Tab switcher with live previews, and Flip 3D, an application switcher that rotates through application windows in a fashion similar to a Rolodex when the user presses the Win+Tab key combination. Windows 7 added 'pinned' shortcuts and 'jump lists' to the taskbar, and automatically grouped program windows into one icon (although this could be disabled). Windows Server 2008 introduced the possibility of a Windows installation without the shell, which results in fewer processes loaded and running. Windows 8 removed Flip 3D in order to repurpose Win+Tab for displaying an application switcher sidebar containing live previews of active Windows Store apps for users without touchscreens. Windows 10 added support for more than one virtual desktop, managed through Task View, which groups active program windows into their own virtual desktops. It is possible to navigate through these desktops using Ctrl+Win+Left or Right arrow, or by clicking on an icon in the taskbar, and to create them with Ctrl+Win+D.
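Windows documents no API call for creating a virtual desktop (the public IVirtualDesktopManager COM interface only queries and moves windows between existing desktops), so scripts often synthesize the Ctrl+Win+D chord itself. Below is a minimal, illustrative Python sketch using ctypes; it assumes Windows 10 or later and uses the long-standing keybd_event function, which is still supported although SendInput is the modern replacement.

```python
import ctypes
import time

user32 = ctypes.windll.user32

VK_LWIN, VK_CONTROL, VK_D = 0x5B, 0x11, 0x44
KEYEVENTF_KEYUP = 0x0002

def press_chord(*vks):
    """Press the given virtual keys in order, then release in reverse order."""
    for vk in vks:
        user32.keybd_event(vk, 0, 0, 0)                # key down
    for vk in reversed(vks):
        user32.keybd_event(vk, 0, KEYEVENTF_KEYUP, 0)  # key up

if __name__ == "__main__":
    time.sleep(1)  # give the user a moment to release their own keys
    press_chord(VK_CONTROL, VK_LWIN, VK_D)  # Ctrl+Win+D: create a new virtual desktop
```

The same helper can drive the navigation shortcuts mentioned above, e.g. press_chord(VK_CONTROL, VK_LWIN, 0x27) for Ctrl+Win+Right (0x27 is VK_RIGHT).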
Win+Tab was repurposed to invoke an overview of all active windows and virtual desktops. Windows 10 also added Cortana to the Start menu, to provide interaction with the shell through voice commands. Newer versions of Windows 10 include recent Microsoft Edge tabs in the Alt+Tab menu; this can be disabled so that only open programs are shown, as in prior versions of the operating system.

Shell replacements
Windows supports the ability to replace the Windows shell with another program, and a number of third-party shells exist that can be used in place of the standard Windows shell.

See also
DOS Shell
Command Prompt

References

External links

Shell Graphical user interfaces
uNETix
uNETix is an early implementation of UNIX for IBM PC systems. It was not a "true" UNIX, but was written from scratch for the PC without using any code from System V.

Overview
uNETix only supported a single user. However, it maintained closer compatibility with standard versions of UNIX than early versions of QNX did. uNETix's multiple-window capability was possibly the first implementation of windowing in a Unix-like operating system. Up to 10 windows were supported, each of which could run independent tasks and could have individual foreground and background colors set with a special color command. Published by Lantech Systems, Inc., uNETix had a list price in 1984 of 130 USD, but was discounted and advertised at 99 USD. The minimum RAM requirement was 256 kB, but such a machine would only be able to support single-tasking; multitasking required 512 kB. It had an emulation environment for MS-DOS that could run DOS 1.1 programs in one window while UNIX programs ran in other windows. Its major weaknesses were slow speed and lack of hard disk support. uNETix came with a full assembly-language programming environment, and a C compiler was optional. Lantech claimed that the C compiler was the first available for the x86 architecture.

See also
Xenix
UNIX System V
AT&T UNIX PC

References

Computing platforms Discontinued operating systems Unix variants Lightweight Unix-like systems
Windows Vista editions
Windows Vista—a major release of the Microsoft Windows operating system—was available in six different product editions: Starter, Home Basic, Home Premium, Business, Enterprise, and Ultimate. On September 5, 2006, Microsoft announced the U.S. pricing for editions available through retail channels; the operating system was later made available to retail on January 30, 2007. Microsoft also made Windows Vista available for purchase and download from Windows Marketplace; it is the first version of Windows to be distributed through a digital distribution platform. Editions sold at retail were available in both Full and Upgrade versions and later included Service Pack 1 (SP1). Microsoft characterized the retail packaging for Windows Vista as "designed to be user-friendly, a small, hard, plastic container designed to protect the software inside for life-long use"; it opens sideways to reveal the Windows Vista DVD suspended in a clear plastic case. Windows Vista optical media use a holographic design with vibrant colors. With the exception of Windows Vista Starter, all editions support both IA-32 (32-bit) and x64 (64-bit) processor architectures. Microsoft ceased distribution of retail copies of Windows Vista in October 2010; OEM distribution of Windows Vista ended in October 2011.

Editions for personal computers
Much like its predecessor, Windows XP Starter Edition, Windows Vista Starter was available in emerging markets; it was sold across 139 developing countries in 70 different languages. Microsoft did not make it available in developed technology markets such as the United States, Canada, the European Union, Australia, New Zealand, or other high-income markets as defined by the World Bank. Windows Vista Starter has significant limitations: it disallows the concurrent operation of more than three programs (although an unlimited number of windows can be opened for each program, unlike in Windows XP Starter); disallows users from sharing files or printers over a home network (or sharing a connection with other computers); does not support Windows Media Player media streaming or sharing; displays a permanent watermark in the bottom right corner of the screen; and imposes a physical memory limit of 1 GB and a maximum of 120 GB of hard disk space. Peer-to-peer networking is also disabled, and there is no support for simultaneous SMB connections. Consumer-oriented features such as Games Explorer, Parental Controls, Windows Calendar, Windows Mail, Windows Movie Maker (without support for high-definition video), Windows Photo Gallery (without support for sharing photos or themed slideshows), Windows Speech Recognition, and Windows Sidebar are included. Windows Vista Starter is licensed to run only on PCs with AMD's Athlon XP, Duron, Sempron, and Geode processors, Intel's Celeron and Pentium III processors, and certain models of the Pentium 4. Windows Vista Starter can be installed from optical media, including media belonging to other editions of the operating system. Windows Vista Starter includes a different set of desktop wallpapers not found in other editions.

Similar to Windows XP Home Edition, the Home Basic edition targets budget-conscious users not requiring advanced multimedia support for home use. The Windows Aero graphical user interface with translucent glass and lighting effects is absent from this edition; however, desktop composition—albeit without Flip 3D or Live Thumbnails—is supported.
Home Basic does not include Windows DVD Maker or Windows Media Center (or support for Media Center Extenders). Premium games, including Chess Titans, InkBall, and Mahjong Titans, are not included. Windows HotStart is available. Home Basic supports one physical CPU (with multiple cores), and the 64-bit version supports up to 8 GB of RAM.

Containing all features of Home Basic and comparable to Windows XP Media Center Edition, Windows Vista Home Premium includes additional features dedicated to the home market segment. Full Windows Aero and desktop composition are available. Multimedia features include DVD burning with Windows DVD Maker, and HDTV and Xbox 360 support with Windows Media Center. Premium games (Chess Titans, InkBall, and Mahjong Titans) are available. Enhanced networking features include support for ad hoc networks and network projectors, and up to 10 simultaneous SMB connections (compared to 5 in Home Basic). Windows Meeting Space, while included in Home Basic, there only allowed users to join meetings; in Home Premium, users may either create new meetings or join existing ones. Home Premium also introduces Windows Mobility Center, Windows SideShow, and Windows Tablet PC and Touch features such as support for capacitive touchscreens, flick gestures, the Snipping Tool, and the Tablet PC Input Panel (which has been updated since Windows XP to include AutoComplete, as well as handwriting personalization and training features). Backup and Restore additionally supports backup schedules, backup to network devices, and incremental backups. Windows Vista Home Premium—like Home Basic—supports only one physical CPU, but with multiple cores. The 64-bit version supports up to 16 GB of RAM.

Comparable to Windows XP Professional, Windows Vista Business targets the business market. It includes all the features of Home Basic with the exception of Parental Controls, and it can join a Windows Server domain. It includes Encrypting File System, Internet Information Services, Offline Files, Remote Desktop, Rights Management Services, Shadow Copy, and Windows Fax and Scan. Backup and Restore also allows users to create disk images of operating system installations. Windows Vista Business supports up to two physical CPUs, and the 64-bit version supports up to 128 GB of RAM.

Windows Vista Enterprise targets the enterprise segment of the market and comprises a superset of the Business edition. Additional features include BitLocker, the Multilingual User Interface (MUI), and UNIX application support. Windows Vista Enterprise was not available through retail or OEM channels, but was instead distributed through Microsoft Software Assurance (SA), with license terms that conferred the right to operate up to four virtual machines with various Windows Vista editions installed, access to Virtual PC Express, and activation via volume licensing. Windows Vista Enterprise supports up to two physical CPUs, and the 64-bit version supports up to 128 GB of RAM.

Windows Vista Ultimate includes all features of the Home Premium and Business editions, as well as BitLocker and MUI; it also provides access to optional "Ultimate Extras". Windows Vista Ultimate supports up to two physical CPUs, and the 64-bit version supports up to 128 GB of RAM. Microsoft released two special-edition variants of Windows Vista Ultimate: Windows Vista Ultimate Signature Edition featured a unique production number alongside the signature of Bill Gates on the front of the packaging; the edition was limited to 25,000 copies.
Windows Vista Product Red was produced as part of the Product Red program, with a portion of sales supporting The Global Fund to Fight AIDS, Tuberculosis and Malaria. The edition was originally distributed as pre-loaded software on a line of Product Red-branded Dell PCs, but was later released at retail. Besides including an additional desktop theme with wallpapers and other content, it is otherwise identical to the main Windows Vista Ultimate SKU. Internally, Microsoft released a Windows Vista Handcrafted variant of the Windows Vista Ultimate SKU for employees involved with the development of Windows Vista; it features a custom box alongside a note to employees, but is otherwise identical to the Ultimate SKU.

Distribution
Users could purchase and download Windows Vista directly from Microsoft through the Windows Marketplace before the service's discontinuation. Optical media distributed through retail or through OEMs for Windows Vista are identical; Microsoft refers to this as "CD unification". Before Windows Vista, versions of Windows for OEMs and retail were maintained separately. All editions of Windows Vista—excluding Enterprise—are stored on the same optical media; a license key for the edition purchased determines which version on the disc is eligible for installation. To facilitate upgrades from a lower edition to a higher one (such as from Home Basic to Ultimate), Windows Vista includes Windows Anytime Upgrade. For computers with optical disc drives that supported CDs but not DVDs, Microsoft offered CDs for Windows Vista that could be purchased from its website. The company would later release alternative media for Windows Vista SP1. A Windows Vista Family Discount program enabled customers in the United States and Canada who purchased the Ultimate edition before June 30, 2007, to purchase additional licenses for Windows Vista Home Premium at a cost of $49.99 each. Microsoft sold these licenses online through its website. In addition, eligible students in qualifying regions had the option to purchase the upgrade version of the Home Premium edition at a reduced price. A similar offer was later available for Windows Vista Ultimate.

64-bit versions
To support x64 platforms such as the Intel Xeon, Intel Core 2, AMD Opteron, and AMD Athlon 64, Microsoft released x64 versions of every edition of Windows Vista except for the Starter edition. These editions can run 32-bit programs within the WOW64 subsystem. Most 32-bit programs run natively, though applications that rely on device drivers will not run unless those device drivers have been written for x64 platforms. Reviewers have reported that the x64 editions of Windows Vista outperform their IA-32 counterparts in benchmarks such as PassMark. All 32-bit editions of Windows Vista, excluding Starter, support up to 4 GB of RAM. The 64-bit edition of Home Basic supports 8 GB of RAM, Home Premium supports 16 GB, and Business, Enterprise, and Ultimate support 128 GB of RAM. All 64-bit versions of Microsoft operating systems impose a 16 TB limit on address space: processes created on the 64-bit editions of Windows Vista receive 8 TB of virtual address space for user mode, with a further 8 TB reserved for the kernel, for 16 TB in total.

Editions for specific markets
In March 2004, the European Commission fined Microsoft €497 million (about US$603 million) and ordered the company to provide a version of Windows without Windows Media Player.
The Commission concluded that Microsoft "broke European Union competition law by leveraging its near monopoly in the market for PC operating systems onto the markets for work group server operating systems and for media players." Microsoft reached an agreement with the Commission under which it would release a court-compliant version, Windows XP Edition N, that does not include the company's Windows Media Player but instead encourages users to download and install their preferred media player. Similarly, in December 2005, the Korean Fair Trade Commission ordered Microsoft to make available editions of Windows XP and Windows Server 2003 that do not contain Windows Media Player or Windows Messenger. Like the European Commission's ruling, this decision was based on the grounds that Microsoft had abused its dominant position in the market to push other products onto consumers. Unlike that decision, however, Microsoft was also forced to withdraw the non-compliant versions of Windows from the South Korean market. This decision resulted in Microsoft's releasing "K" and "KN" variants of the Home and Professional editions of Windows XP in August 2006. As a continuation of these requirements, Microsoft released "N" and "KN" variants of some editions of Windows Vista that exclude Windows Media Player, as well as "K" and "KN" editions that include links to third-party media player and instant messaging software. "N" editions of Windows Vista require third-party software (or a separate installation of Windows Media Player) to play audio CDs and other media formats such as MPEG-4.

Editions for embedded systems
Two additional editions of Windows Vista were released for use by developers of embedded devices. These editions are binary-identical to those available at retail, but are licensed exclusively for use in embedded devices.
Windows Vista Business for Embedded Systems: This edition mirrors the feature set of the Business edition of Windows Vista.
Windows Vista Ultimate for Embedded Systems: This edition mirrors the feature set of the Ultimate edition of Windows Vista. Accordingly, it includes capabilities not found in Vista Business for Embedded Systems, such as BitLocker Drive Encryption, the Subsystem for UNIX-based Applications, and Virtual PC Express.

Upgrading
Unlike previous versions of Windows, Windows Vista does not support compliance checking during installation; compliance checking previously allowed users to insert a disc as evidence that the operating system was being upgraded over a previous version, which would allow users to enter an upgrade license to perform a clean install. As a result, Upgrade versions of Windows Vista will not install unless a previous version of Windows is already installed on the machine to be upgraded. A workaround for this limitation was reported by Paul Thurrott: users should be able to perform a full installation of Windows Vista from Upgrade media by bypassing the prompt to enter a license during setup and then, once installed, reinstalling the operating system over the previous installation—this essentially allows users who purchased the Upgrade version to perform a full retail installation. While the workaround is indeed possible, Microsoft has cautioned that users who perform a full installation of the operating system through this method without a genuine license for a previous version would be in violation of the Windows Vista end-user license agreement.
Users can upgrade from Windows XP to Windows Vista, or upgrade from one edition of Windows Vista to another. However, upgrading from a 32-bit edition to a 64-bit edition, or downgrading from a 64-bit edition to a 32-bit edition, requires a clean install. In addition, not all potential upgrade combinations are supported. The following chart indicates the possible upgrade paths:
Notes:
Only Windows XP can be upgraded to Windows Vista; a clean install is required for PCs running Windows 2000 or earlier versions.
It is possible to upgrade from Windows XP Media Center Edition to Windows Vista Home Premium, but if the computer was joined to an Active Directory domain at the time of the upgrade, the computer will remain joined to the domain while no users will be able to log into the computer through the domain controller, because Windows Vista Home Premium does not support joining an Active Directory domain.

Comparison chart
Notes:
Home Basic, Business, and Enterprise editions are available in the South Korean and European markets as "KN" and "N" editions, respectively, which exclude Windows Media Player and the HD components of Windows Movie Maker.
All editions except Starter are available in the Korean market as "K" editions, which are sold in place of the standard editions of Windows Vista. Unlike the "KN" editions, the "K" editions do include Windows Media Player and its related components, and also include links to web sites which list third-party media player and instant messaging software.
Windows Vista Business N is available in the European market; by default, it does not include Windows Media Player and its related components, or Windows Movie Maker. Windows Movie Maker is not available in Windows Vista Business KN.
Windows Mobility Center is available on mobile PCs (notebook PCs, Tablet PCs, and Ultra-Mobile PCs) but not on desktop PCs. The rotate-screen functionality is offered only on Tablet PCs with an appropriate driver. Presentation settings in Windows Mobility Center are not available in Home Basic.
Premium Windows Vista games, including Chess Titans, InkBall, and Mahjong Titans, are available in Windows Vista Home Premium and Windows Vista Ultimate. They are also available as optional components in the Business and Enterprise editions, but are not installed by default.

See also
Windows Anytime Upgrade
Windows Ultimate Extras
Windows 2000 editions
Windows 7 editions

References

Editions and pricing
MS-DOS 4.0 (multitasking)
MS-DOS 4.0 was a multitasking release of MS-DOS, developed by Microsoft based on MS-DOS 2.0. Lack of interest from OEMs, particularly IBM (who had previously given Microsoft multitasking code on IBM PC DOS included with TopView), led to it being released only in a scaled-back form. It is sometimes referred to as European MS-DOS 4.0, as it was primarily used there. It should not be confused with PC DOS 4.00 or MS-DOS 4.01 and later, which did not contain the multitasking features.

History
Apricot Computers pre-announced "MS-DOS 4.0" in early 1986, and Microsoft demonstrated it in September of that year at a Paris trade show. However, only a few European OEMs, such as International Computers Limited (ICL), actually licensed releases of the software. In particular, IBM declined the product, concentrating instead on improvements to MS-DOS 3.x and its new joint development with Microsoft to produce OS/2. As a result, the project was scaled back, and only those features promised to particular OEMs were delivered. In September 1987, a version of multitasking MS-DOS 4.1 was reported to be in development for the ICL DRS Professional Workstation (PWS). No further releases were made once the contracts had been fulfilled. In July 1988, IBM announced "IBM DOS 4.0", an unrelated product continuing from DOS 3.3 and 3.4, leading to initial conjecture that Microsoft might release it under a different version number. However, Microsoft eventually released it as "MS-DOS 4.0", with MS-DOS 4.01 following quickly to fix issues many had reported.

Features
As well as minor improvements such as support for the New Executable file format, the key feature of the release was its support for preemptive multitasking. This did not use the protected mode available on 80386 processors, but allowed specially written programs to continue executing in a "background mode", in which they had no access to user input and output until returned to the foreground. The OS was reported to include a time-sliced scheduler and interprocess communication via pipes and shared memory. This limited form of multitasking was considered to be more useful in a server rather than a workstation environment, particularly coupled with MS-Net 2.0, which was released simultaneously. Other limitations of MS-DOS 3.0 remained, including the inability to use memory above 640 KB, and this contributed to the product's lack of adoption, particularly in light of the need to write programs specifically targeted at the new environment. INT 21h/AH=87h can be used to distinguish between the multitasking MS-DOS 4.x and the later MS-DOS/PC DOS 4.x releases. Microsoft president Jon Shirley described it as a "specialized version" and went as far as saying "maybe we shouldn't have called it DOS 4.0", although it is not clear whether this was always the intention, or whether a more enthusiastic response from OEMs would have resulted in it being the true successor to DOS 3.x. Microsoft's marketing positioned it as an additional option between DOS 3.x for workstations and Xenix for higher-end servers and multiuser systems.
External commands
MS-DOS Version 4.10.20 supports the following external commands: APPEND, ASSIGN, ATTRIB, BACKUP, CHKDSK, COMMAND, DEBUG, DETACH, DISKCOMP, DISKCOPY, EDLIN, EXE2BIN, FC, FDISK, FIND, FORMAT, GRAFTABL, GRAPHICS, GWBASIC, HEADPARK, INSTALLX, JOIN, LABEL, LINK4, MODE, MORE, MOUS, PERM0, PRINT, QUEUER, RECOVER, REPLACE, RESTORE, SETUP, SHARE, SORT, SUBST, SYS, TREE, XCOPY

See also
Concurrent DOS, Concurrent DOS 286, Concurrent DOS 386 - Concurrent CP/M-based multiuser multitasking OS with DOS emulator, since 1983
DOS Plus - Concurrent PC DOS-based multitasking OS with DOS emulator, since 1985
Novell DOS, OpenDOS, DR-DOS - successors of DOS Plus with preemptive multitasking in VDMs, since 1993
FlexOS - successor of Concurrent DOS 286, since 1986
4680 OS, 4690 OS - successors of FlexOS 286 and FlexOS 386, since 1986
Multiuser DOS - successor of Concurrent DOS 386, since 1991
REAL/32 - successor of Multiuser DOS, since 1995
PC-MOS/386 - multiuser multitasking DOS clone, since 1987
VM/386 - multiuser multitasking DOS environment, since 1987
TopView - DOS-based multitasking environment, since 1985
DESQview, DESQview/X - DOS-based multitasking environments, since 1985
Virtual DOS machine
Datapac Australasia

References

Further reading

1986 software Discontinued Microsoft operating systems Disk operating systems DOS variants Proprietary operating systems Assembly language software
OMAP
The OMAP (Open Multimedia Applications Platform) family, developed by Texas Instruments, was a series of image/video processors: proprietary systems on chips (SoCs) for portable and mobile multimedia applications. OMAP devices generally include a general-purpose ARM architecture processor core plus one or more specialized co-processors; earlier OMAP variants commonly featured a variant of the Texas Instruments TMS320 series digital signal processor. The platform was created after December 12, 2002, when STMicroelectronics and Texas Instruments jointly announced an initiative for Open Mobile Application Processor Interfaces (OMAPI), intended to be used with the 2.5G and 3G mobile phones that were going to be produced during 2003. (This initiative was later merged into a larger one and renamed the MIPI Alliance.) OMAP was Texas Instruments' implementation of this standard. (The STMicroelectronics implementation was named Nomadik.) OMAP enjoyed some success in the smartphone and tablet market until 2011, when it lost ground to Qualcomm Snapdragon. On September 26, 2012, Texas Instruments announced it would wind down its operations in smartphone- and tablet-oriented chips and instead focus on embedded platforms. On November 14, 2012, Texas Instruments announced it would cut 1,700 jobs due to the shift from mobile to embedded platforms. The last OMAP 5 chips were released in Q2 2013.

OMAP family
The OMAP family consists of three product groups classified by performance and intended application:
high-performance applications processors
basic multimedia applications processors
integrated modem and applications processors
Further, two main distribution channels exist, and not all parts are available in both channels. The genesis of the OMAP product line is in partnerships with cell phone vendors, and the main distribution channel involves sales directly to such wireless handset vendors. Parts developed to suit evolving cell phone requirements are flexible and powerful enough to support sales through less specialized catalog channels; some OMAP 1 parts, and many OMAP 3 parts, have catalog versions with different sales and support models. Parts that are obsolete from the perspective of handset vendors may still be needed to support products developed using catalog parts and distributor-based inventory management.

High-performance applications processors
These are parts originally intended for use as application processors in smartphones, with processors powerful enough to run significant operating systems (such as Linux, FreeBSD, Android or Symbian), support connectivity to personal computers, and support various audio and video applications.

OMAP 1
The OMAP 1 family started with a TI-enhanced ARM925 core (ARM925T), and then changed to a standard ARM926 core. It included many variants, most easily distinguished by manufacturing technology (130 nm except for the OMAP171x series), CPU, peripheral set, and distribution channel (direct to large handset vendors, or through catalog-based distributors). As of March 2009, OMAP1710 family chips were still available to handset vendors. Products using OMAP 1 processors include hundreds of cell phone models and the Nokia 770 Internet Tablet.
OMAP1510 – 168 MHz ARM925T (TI-enhanced) + C55x DSP
OMAP161x – 204 MHz ARM926EJ-S + C55x DSP, 130 nm technology
OMAP162x – 204 MHz ARM926EJ-S + C55x DSP + 2 MB internal SRAM, 130 nm technology
OMAP171x – 220 MHz ARM926EJ-S + C55x DSP, low-voltage 90 nm technology
OMAP5910 – catalog-availability version of the OMAP1510
OMAP5912 – catalog-availability version of the OMAP1621 (or OMAP1611b in older versions)

OMAP 2
These parts were only marketed to handset vendors. Products using them include both Internet tablets and mobile phones:
OMAP2431 – 330 MHz ARM1136 + 220 MHz C64x DSP
OMAP2430 – 330 MHz ARM1136 + 220 MHz C64x DSP + PowerVR MBX Lite GPU, 90 nm technology
OMAP2420 – 330 MHz ARM1136 + 220 MHz C55x DSP + PowerVR MBX GPU, 90 nm technology

OMAP 3
The third-generation OMAP, the OMAP 3, is broken into three distinct groups: the OMAP34x, the OMAP35x, and the OMAP36x. OMAP34x and OMAP36x are distributed directly to large handset (such as cell phone) manufacturers; OMAP35x is a variant of OMAP34x intended for catalog distribution channels. The OMAP36x is a 45 nm version of the 65 nm OMAP34x with a higher clock speed. The OMAP3611, found in devices such as Bookeen's Cybook Odyssey, is a deliberately limited version of the OMAP3621: both are the same silicon (the markings are identical), but the 3611 was officially sold as able only to drive an e-ink screen and without access to the IVA and DSP. The video technology in the higher-end OMAP 3 parts is derived in part from the DaVinci product line, which first packaged higher-end C64x+ DSPs and image processing controllers with ARM9 processors (last seen in the older OMAP 1 generation) or the ARM Cortex-A8. Not highlighted in the list below is that each OMAP 3 SoC has an "Image, Video, Audio" (IVA2) accelerator; these units do not all have the same capabilities. Most devices support 12-megapixel camera images, though some support 5 or 3 megapixels, and some support HD imaging.

OMAP 4
The fourth-generation OMAPs, the OMAP4430 (used in Google Glass), 4460 (formerly named 4440), and 4470, all use a dual-core ARM Cortex-A9 CPU, with two ARM Cortex-M3 cores as part of the "Ducati" sub-system for off-loading low-level tasks. The 4430 and 4460 use a PowerVR SGX540 integrated 3D graphics accelerator, running at a clock frequency of 304 and 384 MHz respectively. The 4470 has a PowerVR SGX544 GPU that supports DirectX 9, which enables its use with Windows 8, as well as a dedicated 2D graphics core for increased power efficiency of up to 50–90%. All OMAP 4 chips come with an IVA3 multimedia hardware accelerator with a programmable DSP that enables 1080p Full HD and multi-standard video encoding/decoding. OMAP 4 uses ARM Cortex-A9 cores with ARM's SIMD engine (Media Processing Engine, a.k.a. NEON), which may have a significant performance advantage in some cases over the Nvidia Tegra 2's ARM Cortex-A9 cores with non-vector floating-point units. It also uses a dual-channel LPDDR2 memory controller, compared to the Nvidia Tegra 2's single-channel memory controller.

OMAP 5
The fifth-generation OMAP, the OMAP 5 SoC, uses a dual-core ARM Cortex-A15 CPU with two additional Cortex-M4 cores to offload the A15s in less computationally intensive tasks and increase power efficiency, two PowerVR SGX544MP graphics cores and a dedicated TI 2D BitBlt graphics accelerator, a multi-pipe display sub-system, and a signal processor. The chips respectively support 24- and 20-megapixel cameras for front and rear 3D HD video recording. The chip also supports up to 8 GB of dual-channel LPDDR2/DDR3 memory, output to four HD 3D displays, and 3D HDMI 1.4 video output.
OMAP 5 also includes three USB 2.0 ports, one low-speed USB 3.0 OTG port, and a SATA 2.0 controller.

Basic multimedia applications processors
These are marketed only to handset manufacturers. They are intended to be highly integrated, low-cost chips for consumer products. The OMAP-DM series are intended to be used as digital media coprocessors for mobile devices with high-megapixel digital still and video cameras. These OMAP-DM chips incorporate both an ARM processor and an Image Signal Processor (ISP) to accelerate the processing of camera images.
OMAP310 – ARM925T
OMAP331 – ARM926
OMAP-DM270 – ARM7 + C54x DSP
OMAP-DM299 – ARM7 + Image Signal Processor (ISP) + stacked mDDR SDRAM
OMAP-DM500 – ARM7 + ISP + stacked mDDR SDRAM
OMAP-DM510 – ARM926 + ISP + 128 MB stacked mDDR SDRAM
OMAP-DM515 – ARM926 + ISP + 256 MB stacked mDDR SDRAM
OMAP-DM525 – ARM926 + ISP + 256 MB stacked mDDR SDRAM

Integrated modem and applications processors
These are marketed only to handset manufacturers. Many of the newer versions are highly integrated for use in very low-cost cell phones.
OMAPV1035 – single-chip EDGE (discontinued in 2009 as TI announced its withdrawal from the baseband chipset market)
OMAPV1030 – EDGE digital baseband
OMAP850 – 200 MHz ARM926EJ-S + GSM/GPRS digital baseband + stacked EDGE co-processor
OMAP750 – 200 MHz ARM926EJ-S + GSM/GPRS digital baseband + DDR memory support
OMAP733 – 200 MHz ARM926EJ-S + GSM/GPRS digital baseband + stacked SDRAM
OMAP730 – 200 MHz ARM926EJ-S + GSM/GPRS digital baseband + SDRAM memory support
OMAP710 – 133 MHz ARM925 + GSM/GPRS digital baseband

OMAP L-1x
The OMAP L-1x parts are marketed only through catalog channels, and have a different technological heritage than the other OMAP parts. Rather than deriving directly from cell phone product lines, they grew from the video-oriented DaVinci product line by removing the video-specific features while using upgraded DaVinci peripherals. A notable feature is the use of a floating-point DSP instead of the more customary fixed-point one. The Hawkboard uses the OMAP-L138.
OMAP-L137 – 300 MHz ARM926EJ-S + C674x floating-point DSP
OMAP-L138 – 300 MHz ARM926EJ-S + C674x floating-point DSP

Products using OMAP processors
Many mobile phones use OMAP SoCs, including the Nokia N9, N90, N91, N92, N95, N82, E61, E62, E63 and E90 mobile phones, as well as the Nokia 770, N800, N810 and N900 Internet tablets, the Motorola Droid, Droid X, and Droid 2, and several Samsung devices, such as the Samsung Galaxy Tab 2 7.0 and the Galaxy S II variant GT-I9100G. The Palm Pre, Pandora, and Touch Book also use an OMAP SoC (the OMAP3430). Others to use an OMAP SoC include Sony Ericsson's Satio (Idou) and Vivaz, most Samsung phones running Symbian (including the Omnia HD), the Nook Color, some Archos tablets (such as the Archos 80 gen 9 and Archos 101 gen 9), the Kindle Fire HD, the BlackBerry PlayBook, the Kobo Arc, and the B&N Nook HD. There are also all-in-one smart displays using OMAP 4 SoCs, such as the ViewSonic VSD220 (OMAP 4430). Second-generation Motorola MOTOTRBO radios use the OMAP-L132 or OMAP-L138 secure CPU.
OMAP SoCs are also used as the basis for a number of hobbyist, prototyping, and evaluation boards, such as the BeagleBoard, PandaBoard, OMAP3 boards, and Gumstix, and in PreSonus digital mixing boards.

Similar platforms
TI Sitara ARM processor SoC family
A31 by AllWinner
Atom by Intel
Apple silicon by Apple
Exynos by Samsung
i.MX by Freescale Semiconductor
ARMADA 5xx/6xx/15xx by Marvell Technology Group
Jaguar and Puma by AMD
K3Vx/Kirin by HiSilicon
MTxxxx by MediaTek
Nomadik by STMicroelectronics
NovaThor by ST-Ericsson
OCTEON by Cavium
R-Car by Renesas
RK3xxx by Rockchip
Snapdragon by Qualcomm, the only competing product which also features a DSP unit, the Qualcomm Hexagon
Tegra by Nvidia
VideoCore by Broadcom

See also
HiSilicon – by Huawei
Comparison of ARMv7-A cores
Comparison of ARMv8-A cores
OpenMAX IL (Open Media Acceleration Integration Layer) – a royalty-free cross-platform media abstraction API from the Khronos Group
Distributed Codec Engine (libdce) – a Texas Instruments API for the video codec engine in OMAP-based embedded systems
Tiva-C LaunchPad – an inexpensive self-contained, single-board microcontroller, about the size of a credit card but featuring an ARM Cortex-M4 32-bit microcontroller at 80 MHz, with signal processing capabilities

References

External links
OMAP Application Processors
OMAPWorld
OMAPpedia
Linux OMAP Mailing List Archive
OMAP3 Boards
OMAP4 Boards

ARM architecture Digital signal processors Embedded microprocessors System on a chip Texas Instruments microprocessors
DOSEMU
DOSEMU, stylized as dosemu, is a compatibility-layer software package that enables DOS operating systems (e.g., MS-DOS, DR-DOS, FreeDOS) and application software to run atop Linux on x86-based PCs (IBM PC compatible computers).

Features
It uses a combination of hardware-assisted virtualization features and high-level emulation. It can thus achieve nearly native speed for 8086-compatible DOS operating systems and applications on x86-compatible processors, and for DOS Protected Mode Interface (DPMI) applications on x86-compatible processors as well as on x86-64 processors. DOSEMU includes an 8086 processor emulator for use with real-mode applications in x86-64 long mode. DOSEMU is only available for x86 and x86-64 Linux systems. (Linux 3.15 x86-64 systems cannot enter DPMI by default; this was fixed in 3.16.) DOSEMU is an option for people who need or want to continue to use legacy DOS software; in some cases the virtualization is good enough to drive external hardware, such as device programmers connected to the parallel port. According to its manual, "dosemu" is a user-level program which uses certain special features of the Linux kernel and the 80386 processor to run DOS in a "DOS box". The DOS box, relying on a combination of hardware and software, has these abilities:
Virtualizes all input/output and processor control instructions
Supports the word size and addressing modes of the iAPX86 processor family's "real mode", while still running within the full protected-mode environment
Traps all DOS and BIOS system calls and emulates such calls as needed for proper operation and good performance
Simulates a hardware environment over which DOS programs are accustomed to having control
Provides DOS services through native Linux services; for example, dosemu can provide a virtual hard disk drive which is actually a Linux directory hierarchy
Provides API-level support for packet drivers, IPX, and Berkeley sockets (dosnet)

See also
Comparison of platform virtualization software
Virtual DOS machine
DOSBox
Wine
FreeDOS

References

External links
dosemu2

DOS emulators Compatibility layers Linux emulation software
RT-11
RT-11 ("RT" for real-time) is a discontinued small, low-end, single-user real-time operating system for the Digital Equipment Corporation PDP-11 family of 16-bit computers. First implemented in 1970, RT-11 was widely used for real-time systems, process control, and data acquisition across the full line of PDP-11 computers. It was also used for low-cost general-purpose computing.

Features

Multitasking
RT-11 systems did not support preemptive multitasking, but most versions could run multiple simultaneous applications. All variants of the monitors provided a background job. The FB, XM and ZM monitors also provided a foreground job, as well as six system jobs if selected via the SYSGEN system generation program. These tasks had fixed priorities, with the background job lowest and the foreground job highest. It was possible to switch between jobs from the system console user interface, and SYSGEN could generate a monitor that provided a single background job (the SB, XB and ZB variants).

Source code
RT-11 was written in assembly language. Heavy use of the conditional assembly and macro programming features of the MACRO-11 assembler allowed a significant degree of configurability and allowed programmers to specify high-level instructions otherwise unprovided for in machine code. RT-11 distributions included the source code of the operating system and its device drivers, with all the comments removed, and a program named SYSGEN which would build the operating system and drivers according to a user-specified configuration. Developer documentation included a kernel listing that retained the comments.

Device drivers
In RT-11, device drivers were loadable, except that prior to V4.0 the device driver for the system device (boot device) was built into the kernel at configuration time. Because RT-11 was commonly used for device control and data acquisition, it was common for developers to write or enhance device drivers. DEC encouraged such driver development by making its hardware subsystems (from bus structure to code) open, documenting the internals of the operating system, encouraging third-party hardware and software vendors, and fostering the development of the Digital Equipment Computer Users Society.

Human interface
Users generally operated RT-11 via a printing terminal or a video terminal, originally via a strap-selectable current-loop interface (for conventional teletypes) or via an RS-232 (later RS-422 as well) interface on one of the CPU cards; DEC also supported the VT11 and VS60 graphics display devices (vector graphics terminals with a graphic character generator for displaying text, and a light pen for graphical input). A third-party favorite was the Tektronix 4010 family. The Keyboard Monitor (KMON) interpreted commands issued by the user and would invoke various utilities with Command String Interpreter (CSI) forms of the commands. RT-11's command language had many features (such as commands and device names) that later appeared in the DOS line of operating systems, which borrowed heavily from RT-11. The CSI form expected input and output filenames and options ('switches' on RT-11) in a precise order and syntax; command-line switches were separated by a slash ("/") rather than the dash ("-") used in Unix-like operating systems. All commands had a full form and a short one to which they could be contracted. For example, the RENAME command could be contracted to REN.
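As a rough illustration of that CSI shape (output filespecs, an equals sign, input filespecs, then slash-delimited switches), here is a toy Python parser. It is a sketch only: the device prefix, the 6.3 filename form, and the DK: default are modeled, while real RT-11 details such as switch values, per-file switch placement, and octal arguments are deliberately omitted; none of this grammar is taken verbatim from RT-11 documentation.

```python
import re
from dataclasses import dataclass, field

# Toy model of the general CSI shape: outputs=inputs/switch1/switch2
FILESPEC = re.compile(
    r"^(?:(?P<dev>[A-Z]{2}\d?):)?"      # optional device, e.g. DL0:
    r"(?P<name>[A-Z0-9$]{0,6})"         # up to six filename characters
    r"(?:\.(?P<ext>[A-Z0-9$]{0,3}))?$"  # optional three-character extension
)

@dataclass
class Command:
    outputs: list = field(default_factory=list)
    inputs: list = field(default_factory=list)
    switches: list = field(default_factory=list)

def parse_csi(line: str) -> Command:
    cmd = Command()
    # Split off /switches; what remains is "outputs=inputs".
    body, *cmd.switches = line.strip().upper().split("/")
    out_part, _, in_part = body.partition("=")
    if not in_part:                     # no '=' means inputs only
        out_part, in_part = "", out_part
    for part, bucket in ((out_part, cmd.outputs), (in_part, cmd.inputs)):
        for spec in filter(None, (s.strip() for s in part.split(","))):
            m = FILESPEC.match(spec)
            if not m:
                raise ValueError(f"bad filespec: {spec}")
            bucket.append((m["dev"] or "DK", m["name"], m["ext"] or ""))
    return cmd

print(parse_csi("DY1:B.SAV=DX0:A.SAV/B"))
```

Run on the sample line, this yields one output filespec on DY1:, one input on DX0:, and a single switch B; a filespec without a device prefix falls back to the DK: default described below.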
Batch files and the batch processor could be used to issue a series of commands with some rudimentary control flow; batch files had the extension .BAT. In later releases of RT-11, it was possible to invoke a series of commands using a .COM command file, but they would be executed in sequence with no flow control. Later still, it was possible to execute a series of commands with great control through the Indirect Command File Processor (IND), which took .CMD control files as input. Files with the extension .SAV were a kind of executable. They were known as "save files" because the RT-11 SAVE command could be used to save the contents of memory to a disk file which could be loaded and executed at a later time, allowing a session to be saved. The SAVE command, along with GET, START, REENTER, EXAMINE and DEPOSIT, was among the basic commands implemented in KMON.

Some commands and utilities were later borrowed by the DOS line of operating systems, including DIR, COPY, RENAME, ASSIGN, CLS, DELETE, TYPE, HELP and others. The FORMAT command was used for physical disk formatting, although it could not create a file system, for which purpose the INIT command was used (the analogue of the DOS command FORMAT /Q). Most commands supported the use of wildcards in file names.

Physical device names were specified in the form 'dd{n}:', where 'dd' was a two-character alphabetic device name and the optional 'n' was the unit number (0–7). When the unit number was omitted, unit 0 was assumed. For example, TT: referred to the console terminal, LP: (or LP0:) referred to the parallel line printer, and DX0:, DY1: and DL4: referred to disk volumes (RX01 unit 0, RX02 unit 1, and RL01 or RL02 unit 4, respectively). Logical device names consisted of 1–3 alphanumeric characters and were used in place of a physical device name; this was accomplished using the ASSIGN command. For example, one might issue ASSIGN DL0 ABC, which would cause all future references to 'ABC:' to map to 'DL0:'. The reserved logical name DK: referred to the current default device; if a device was not included in a file specification, DK: was assumed. The reserved logical name SY: referred to the system device (the device from which the system had been booted). Later versions of RT-11 allowed specification of up to 64 units (0–77 octal) for certain devices, but the device name was still limited to three alphanumeric characters. This feature was enabled through a SYSGEN selection and only applied to the DU and LD device handlers; in these two cases, the device name form became 'dnn:', where 'd' was 'D' for the DU device and 'L' for the LD device, and 'nn' was 00–77 (octal).

Software
RT-11 was distributed with utilities for performing many actions. The utilities DIR, DUP, PIP and FORMAT were for managing disk volumes. TECO, EDIT, and the visual editors KED (for the DEC VT100) and K52 (for the DEC VT52) were used to create and edit source and data files. MACRO, LINK and LIBR were for building executables, while ODT, VDT and the SD device were used to debug programs. DEC's version of Runoff was for producing documents. Finally, VTCOM was used to connect with and use (or transfer files to and from) another computer system over the phone via a modem. The system was complete enough to handle many modern personal computing tasks. Productivity software such as LEX-11, a word-processing package, and a spreadsheet from Saturn Software, used under other PDP-11 operating systems, also ran on RT-11.
Large amounts of free, user-contributed software for RT-11 were available from the Digital Equipment Computer Users Society (DECUS), including an implementation of C. Although the tools to develop and debug assembly-language programs were provided, other languages including C, Fortran, Pascal, and several versions of BASIC were available from DEC as "layered products" at extra cost. Versions of these and other programming languages were also available from other, third-party sources. It was even possible to network RT-11 machines using DECnet, the Internet, and protocols developed by other, third-party sources.

Distributions and minimal system configuration
The RT-11 operating system could be booted from, and perform useful work on, a machine consisting of two 8-inch 250 KB floppy disks and 56 KB of memory, and could support 8 terminals. Other boot options included the RK05 2.5 MB removable hard disk platter and magnetic tape. Distributions were available pre-installed or on punched tape, magnetic tape, cartridge tape, or floppy disk. A minimal but complete system supporting a single real-time user could run on a single floppy disk and in 8K 16-bit words (16 KB) of RAM, including user programs. This was facilitated by support for swapping and overlaying: to operate in so small a memory system, the keyboard command user interface would be swapped out during the execution of a user's program and then swapped back into memory upon program termination. The system supported a real-time clock, printing terminal, VT11 vector graphic unit, 16-channel 100 kHz A/D converter with 2-channel D/A, 9600 baud serial port, 16-bit bidirectional boards, etc.

File system
RT-11 implemented a simple and fast file system employing six-character filenames with three-character extensions ("6.3") encoded in RADIX-50, which packed those nine characters into only three 16-bit words (six bytes); a worked sketch of this packing appears at the end of this section. All files were contiguous, meaning that each file occupied consecutive blocks (the minimally addressable unit of disk storage, 512 bytes) on the disk. This meant that an entire file could be read (or written) very quickly. A side effect of this file system structure was that, as files were created and deleted on a volume over time, the unused disk blocks would likely not remain contiguous, which could become the limiting factor in the creation of large files; the remedy was to periodically "squeeze" (or "squish") a disk to consolidate the unused portions. Each volume had only one directory, which was preallocated at the beginning of the volume. The directory consisted of an array of entries, one per file or unallocated space. Each directory entry was 8 (or more) 16-bit words, though a SYSGEN option allowed extra application-specific storage.

Compatibility with other DEC operating systems
Many RT-11 programs (those that did not need specialized peripherals or direct access to the hardware) could be directly executed using the RT-11 RTS (run-time system) of the RSTS/E timesharing system, or under RTEM (RT Emulator) on various releases of both RSX-11 and VMS. The implementation of DCL for RT-11 increased its compatibility with the other DEC operating systems. Although each operating system had commands and options unique to it, a number of commands and command options were common.
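To make the RADIX-50 packing described under "File system" above concrete: the RADIX-50 alphabet has 40 symbols, and 40^3 = 64,000 fits in a 16-bit word (2^16 = 65,536), so three characters pack into one word and a 6.3 filename into exactly three words. The Python sketch below uses the standard PDP-11 ordering (space, A–Z, $, ., then the digits); the character shown for code 29 ('%') is marked as unused in some references, so treat that one table entry as an assumption.

```python
# RADIX-50 alphabet as used on the PDP-11; code 0 is the space used for
# padding short names. Code 29 is shown as '%' here (assumption: some
# references mark it as unused).
RAD50 = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

def rad50_word(three_chars: str) -> int:
    """Pack exactly three RADIX-50 characters into one 16-bit word."""
    assert len(three_chars) == 3
    word = 0
    for ch in three_chars.upper():
        word = word * 40 + RAD50.index(ch)  # base-40 positional encoding
    return word

def encode_filename(name: str, ext: str) -> tuple:
    """Encode a 6.3 RT-11 filename as the three words a directory entry stores."""
    name = name.upper().ljust(6)  # pad with the RADIX-50 space (code 0)
    ext = ext.upper().ljust(3)
    return (rad50_word(name[:3]), rad50_word(name[3:]), rad50_word(ext))

words = encode_filename("SWAP", "SYS")
print([oct(w) for w in words])  # three 16-bit words for SWAP.SYS
```

Decoding is simply the inverse: three repeated divmod(word, 40) steps per word, reading the remainders back through the same alphabet.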
Other PDP-11 operating systems
DEC also sold RSX-11, a multiuser, multitasking operating system with real-time features, and RSTS/E (originally named RSTS-11), a multiuser time-sharing system, but RT-11 remained the operating system of choice for data acquisition systems where real-time response was required. The Unix operating system also became popular, but lacked the real-time features and extremely small size of RT-11.

Hardware
RT-11 ran on all members of the DEC PDP-11 family, both Q-Bus- and Unibus-based, from the PDP-11/05 (its first target, in 1970) to the final PDP-11 implementations (PDP-11/93 and /94). In addition, it ran on the Professional Series and the PDT-11 "Programmed Data Terminal" systems, also from DEC. Since the PDP-11 architecture was implemented in replacement products by other companies (e.g., the M100 and family from Mentec), or as reverse-engineered clones in other countries (e.g., the DVK from the Soviet Union), RT-11 runs on these machines as well.

Peripherals
Adding driver support for peripherals, such as a CalComp plotter, typically involved copying files and did not require a SYSGEN.

Compatible operating systems

Fuzzball
Fuzzball, routing software for Internet protocols, was capable of running RT-11 programs.

SHAREplus
HAMMONDsoftware distributed a number of RT-11-compatible operating systems, including STAReleven, an early multi-computer system, and SHAREplus, a multi-process/multi-user implementation of RT-11 which borrowed some architectural concepts from the VAX/VMS operating system. RT-11 device drivers were required for operation. Transparent device access to other PDP-11s and VAX/VMS systems was supported with a network option, and limited RSX-11 application compatibility was also available. SHAREplus had its strongest user base in Europe.

TSX-11
TSX-11, developed by S&H Computing, was a multi-user, multi-processing implementation of RT-11. The only thing it did not do was handle the boot process, so any TSX-Plus machine was required to boot RT-11 first before running TSX-Plus as a user program. Once TSX-Plus was running, it would take over complete control of the machine from RT-11. It provided true memory protection of users from other users, provided user accounts and maintained account separation on disk volumes, and implemented a superset of the RT-11 EMT programmed requests. S&H wrote the original TSX because, in the words of founder Harry Sanders, "Spending $25K on a computer that could only support one user bugged" him; the outcome was the initial four-user TSX in 1976. TSX-Plus, released in 1980, was the successor to the 1976 TSX, and the system was popular in the 1980s. RT-11 programs generally ran, unmodified, under TSX-Plus, and, in fact, most of the RT-11 utilities were used as-is under TSX-Plus. Device drivers generally required only slight modifications. Depending on the PDP-11 model and the amount of memory, the system could support a minimum of 12 users (14–18 users on a 2 MB 11/73, depending on workload). The last version of TSX-Plus had TCP/IP support.

Versions

Variants
Users could choose from four variants with differing levels of support for multitasking:
RT-11SJ (Single Job) allowed only one task. This was the initial distribution.
RT-11FB (Foreground/Background) supported two tasks: a high-priority, non-interactive "foreground" job, and a low-priority, interactive "background" job.
RT-11XM (eXtended Memory), a superset of FB, provided support for memory beyond 64 KB, but required a minicomputer with memory-management hardware; distributed from approximately
1975 onward.
RT-11ZM provided support for systems with separate instruction and data space (such as the Unibus-based 11/44, /45, /55, /70, /84, and /94 and the Q-Bus-based 11/53, /73, /83, and /93).

Specialized versions
Several specialized PDP-11 systems were sold based on RT-11:
LAB-11 provided an LPS-11 analog peripheral for the collection of laboratory data.
PEAK-11 provided further customization for use with gas chromatographs (analyzing the peaks produced by the GC); data collection ran in RT-11's foreground process while the user's data-analysis programs ran in the background.
GT4x systems added a VT11 vector graphics peripheral. Several very popular demo programs were provided with these systems, including Lunar Lander and a version of Spacewar!.
GT62 systems added a VS60 vector graphics peripheral (VT11-compatible) in a credenza cabinet.
GAMMA-11 was a packaged RT-11 and PDP-11/34 system that was one of the first fully integrated nuclear medicine systems. It included fast analog/digital converters, 16-bit color graphical displays, and an extensive software library for developing applications for data collection, analysis, and display from a nuclear medicine gamma camera.

Clones in the USSR
Several clones of RT-11 were made in the USSR:
RAFOS ("РАФОС") – SM EVM
FOBOS ("ФОБОС") – Elektronika 60
FODOS ("ФОДОС")
RUDOS ("РУДОС")
OS DVK ("ОС ДВК") – DVK
OS BK-11 ("ОС БК-11") – Elektronika BK
MASTER-11 ("МАСТЕР-11") – DVK
NEMIGA OS ("НЕМИГА") – Nemiga PK 588

See also
TSX-32

References

External links
PDP-11 How-to guide with RT-11 commands reference
RT-11 emulator for Windows console

DEC operating systems Real-time operating systems PDP-11 Assembly language software Elektronika BK operating systems
Windows Fundamentals for Legacy PCs Windows Fundamentals for Legacy PCs ("WinFLP") is a thin client release of the Windows NT operating system developed by Microsoft and optimized for older, less powerful hardware. It was released on July 8, 2006, over four years after its Windows XP Embedded counterpart was released in 2002, and is not marketed as a full-fledged general-purpose operating system, although it is functionally able to perform most of the tasks generally associated with one. It includes only certain functionality for local workloads such as security, management, document-viewing tasks and the .NET Framework. It is designed to work as a client–server solution with RDP clients or other third-party clients such as Citrix ICA. Windows Fundamentals for Legacy PCs reached end of support on April 8, 2014 along with most other Windows XP editions. History Windows Fundamentals for Legacy PCs was originally announced with the code name "Eiger" on 12 May 2005. ("Mönch" was announced as a potential follow-up project at about the same time.) The name "Windows Fundamentals for Legacy PCs" appeared in a press release in September 2005, when it was introduced as "formerly code-named “Eiger”" and described as "an exclusive benefit to SA [Microsoft Software Assurance] customers". A Gartner evaluation of the product was published in April 2006. The RTM version of Windows Fundamentals for Legacy PCs, which was released on July 8, 2006, was based on Windows XP with Service Pack 2. The release was announced to the press on July 12, 2006. In May 2011, Microsoft announced Windows Thin PC as the successor product. Technical specifications Microsoft positions Windows Fundamentals for Legacy PCs as an operating system that provides basic computing services on older hardware, while still providing core management features of more recent Windows releases, such as Windows Firewall, Group Policy, Automatic Updates, and other management services. However, it is not considered to be a general-purpose operating system by Microsoft. Windows Fundamentals for Legacy PCs is a Windows XP Embedded derivative and, as such, it requires significantly fewer system resources than the fully featured Windows XP. It also features basic networking, extended peripheral support, DirectX, and the ability to launch remote desktop clients from compact discs. In addition to local applications, it offers support for those hosted on a remote server using Remote Desktop. It can be installed on a local hard drive, or configured to run on a diskless workstation. Hardware requirements Despite being optimized for older PCs, the hardware requirements of Windows Fundamentals for Legacy PCs are similar to those of Windows XP, although it performs better than Windows XP on machines with slower clock speeds. Limitations Windows Fundamentals for Legacy PCs has a smaller feature set than Windows XP. For example, WinFLP does not include Paint, Outlook Express and Windows games such as Solitaire. Another limitation is the absence of the Compatibility tab in the Properties dialog box for executable files. Internet Explorer 8 (and 7) can be installed, but a hotfix is required for auto-complete to work in these newer versions of the browser. Availability Windows Fundamentals for Legacy PCs was exclusively available to Microsoft Software Assurance customers, as it was designed to be an inexpensive upgrade option for corporations that had a number of Windows 9x computers but lacked the hardware necessary to support the latest Windows.
It is not available through retail or OEM channels. On October 7, 2008, Service Pack 3 for Windows Embedded for Point of Service and Windows Fundamentals for Legacy PCs was made available. On April 18, 2013, Service Pack 3 for Windows Fundamentals for Legacy PCs was made available for download again, after previously having been removed from the Microsoft site. The Microsoft marketing pages for Windows Fundamentals now redirect to those of Windows Thin PC, suggesting that Windows Fundamentals is no longer available for any customers. Windows Fundamentals for Legacy PCs has the same lifecycle policy as Windows XP; as such, its support lifespan ended on April 8, 2014. References External links Windows Fundamentals for Legacy PCs home page on Microsoft's official site (Archived) Bill McMinn's review for WinFLP Choosing the right Virtual OS: Windows XP vs. Windows FLP Fixing null.sys on WinFLP 2006 software Fundamentals IA-32 operating systems
Windows Server 2012 Windows Server 2012 is the fifth version of the Windows Server operating system by Microsoft, as part of the Windows NT family of operating systems. It is the server version of Windows based on Windows 8 and succeeds Windows Server 2008 R2, which is derived from the Windows 7 codebase and was released nearly three years earlier. Two pre-release versions, a developer preview and a beta version, were released during development. The software was officially launched on September 4, 2012, two months before the release of Windows 8. A successor was released on October 18, 2013, entitled Windows Server 2012 R2. Microsoft ended mainstream support for Windows Server 2012 on October 9, 2018, and extended support will end on October 10, 2023. Unlike its predecessor, Windows Server 2012 has no support for Itanium-based computers, and has four editions. Various features were added or improved over Windows Server 2008 R2 (with many placing an emphasis on cloud computing), such as an updated version of Hyper-V, an IP address management role, a new version of Windows Task Manager, and ReFS, a new file system. Windows Server 2012 received generally good reviews in spite of having included the same controversial Metro-based user interface seen in Windows 8, which includes the Charms Bar for quick access to settings in the desktop environment. History Windows Server 2012, codenamed "Windows Server 8", is the fifth release of the Windows Server family of operating systems, developed concurrently with Windows 8. It was not until April 17, 2012 that the company announced that the final product name would be "Windows Server 2012". Microsoft introduced Windows Server 2012 and its developer preview at the BUILD 2011 conference on September 9, 2011. However, unlike Windows 8, the developer preview of Windows Server 2012 was only made available to MSDN subscribers. It included a graphical user interface (GUI) based on the Metro design language and a new Server Manager, a graphical application used for server management. On February 16, 2012, Microsoft released an update for the developer preview build that extended its expiry date from April 8, 2012 to January 15, 2013. Before Windows Server 2012 was finalized, two test builds were made public. A public beta version of Windows Server 2012 was released along with the Windows 8 Consumer Preview on February 29, 2012. The release candidate of Windows Server 2012 was released on May 31, 2012, along with the Windows 8 Release Preview. The product was released to manufacturing on August 1, 2012 (along with Windows 8) and became generally available on September 4 of that year. However, not all editions of Windows Server 2012 were released at the same time. Windows Server 2012 Essentials was released to manufacturing on October 9, 2012 and was made generally available on November 1, 2012. As of September 23, 2012, all students subscribed to the DreamSpark program could download Windows Server 2012 Standard or Datacenter free of charge. Windows Server 2012 is based on Windows Server 2008 R2 and Windows 8 and requires x86-64 CPUs (64-bit), while Windows Server 2008 worked on the older IA-32 (32-bit) architecture as well. Owing to fundamental changes in the structure of client backups and shared folders, there is no clear method for migrating from the previous version to Windows Server 2012. Features Installation options Unlike its predecessor, Windows Server 2012 can switch between "Server Core" and "Server with a GUI" installation options without a full reinstallation.
Server Core – an option with a command-line interface only – is now the recommended configuration. There is also a third installation option that allows some GUI elements such as MMC and Server Manager to run, but without the normal desktop, shell or default programs like File Explorer. User interface Server Manager has been redesigned with an emphasis on easing management of multiple servers. The operating system, like Windows 8, uses the Metro-based user interface unless installed in Server Core mode. Windows Store is available in this version of Windows but is not installed by default. Windows PowerShell in this version has over 2300 cmdlets, compared to around 200 in Windows Server 2008 R2. Task Manager Windows Server 2012 includes a new version of Windows Task Manager together with the old version. In the new version the tabs are hidden by default, showing applications only. In the new Processes tab, the processes are displayed in varying shades of yellow, with darker shades representing heavier resource use. Information found in the older versions has been moved to the new Details tab. The Performance tab shows "CPU", "Memory", "Disk", "Wi-Fi" and "Ethernet" graphs. Unlike the Windows 8 version of Task Manager (which looks similar), the "Disk" activity graph is not enabled by default. The CPU tab no longer displays individual graphs for every logical processor on the system by default, although that remains an option. Additionally, it can display data for each non-uniform memory access (NUMA) node. When displaying data for each logical processor for machines with more than 64 logical processors, the CPU tab now displays simple utilization percentages on heat-mapping tiles. The color used for these heat maps is blue, with darker shades again indicating heavier utilization. Hovering the cursor over any logical processor's data now shows the NUMA node of that processor and its ID, if applicable. The Windows 8 version of Task Manager also gained a new Startup tab that lists startup applications, but this tab does not exist in Windows Server 2012. The new task manager recognizes when a Windows Store app has the "Suspended" status. IP address management (IPAM) Windows Server 2012 has an IP address management role for discovering, monitoring, auditing, and managing the IP address space used on a corporate network. IPAM is used for the management and monitoring of Domain Name System (DNS) and Dynamic Host Configuration Protocol (DHCP) servers. Both IPv4 and IPv6 are fully supported.
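To make the bookkeeping behind an IPAM-style role concrete, the following is a minimal sketch of address-space utilization tracking built on Python's standard ipaddress module. The pools and leases are invented for the example; this illustrates the general idea only, not Microsoft's implementation or any Windows API.

    # IPAM-style utilization tracking; pools and leases are hypothetical.
    import ipaddress

    # Managed pools, IPv4 and IPv6 alike (IPAM supports both families).
    pools = [ipaddress.ip_network("10.0.0.0/24"),
             ipaddress.ip_network("2001:db8::/64")]

    # Addresses currently handed out, e.g. as reported by a DHCP server.
    leases = {ipaddress.ip_address("10.0.0.17"),
              ipaddress.ip_address("10.0.0.42"),
              ipaddress.ip_address("2001:db8::5")}

    for pool in pools:
        used = [a for a in leases if a in pool]
        # num_addresses counts every address in the prefix, which is
        # good enough for a rough utilization figure in a sketch.
        pct = 100.0 * len(used) / pool.num_addresses
        print(f"{pool}: {len(used)} leased ({pct:.6f}% utilized)")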
Active Directory Windows Server 2012 has a number of changes to Active Directory from the version shipped with Windows Server 2008 R2. The Active Directory Domain Services installation wizard has been replaced by a new section in Server Manager, and a GUI has been added to the Active Directory Recycle Bin. Multiple password policies can be set in the same domain. Active Directory in Windows Server 2012 is now aware of any changes resulting from virtualization, and virtualized domain controllers can be safely cloned. Upgrades of the domain functional level to Windows Server 2012 are simplified; they can be performed entirely in Server Manager. Active Directory Federation Services no longer needs to be downloaded separately when installed as a role, and claims which can be used by Active Directory Federation Services have been introduced into the Kerberos token. Windows PowerShell commands used by the Active Directory Administrative Center can be viewed in a "PowerShell History Viewer". Hyper-V Windows Server 2012, along with Windows 8, includes a new version of Hyper-V, as presented at the Microsoft BUILD event. Many new features have been added to Hyper-V, including network virtualization, multi-tenancy, storage resource pools, cross-premises connectivity, and cloud backup. Additionally, many of the former restrictions on resource consumption have been greatly lifted. Each virtual machine in this version of Hyper-V can access up to 64 virtual processors, up to 1 terabyte of memory, and up to 64 terabytes of virtual disk space per virtual hard disk (using the new VHDX format). Up to 1024 virtual machines can be active per host, and up to 8000 can be active per failover cluster. SLAT is a required processor feature for Hyper-V on Windows 8, while for Windows Server 2012 it is only required for the supplementary RemoteFX role. ReFS Resilient File System (ReFS), codenamed "Protogon", is a new file system in Windows Server 2012, initially intended for file servers, that improves on NTFS in some respects. Major new features of ReFS include: Improved reliability for on-disk structures ReFS uses B+ trees for all on-disk structures, including metadata and file data. Metadata and file data are organized into tables similar to a relational database. The file size, number of files in a folder, total volume size and number of folders in a volume are limited by 64-bit numbers; as a result, ReFS supports a maximum file size of 16 exabytes, a maximum of 18.4 × 10¹⁸ folders and a maximum volume size of 1 yottabyte (with 64 KB clusters), which allows large scalability with no practical limits on file and folder size (hardware restrictions still apply). Free space is counted by a hierarchical allocator which includes three separate tables for large, medium, and small chunks. File names and file paths are each limited to a 32 KB Unicode text string. Built-in resilience ReFS employs an allocation-on-write update strategy for metadata, which allocates new chunks for every update transaction and uses large IO batches. All ReFS metadata has built-in 64-bit checksums which are stored independently. The file data can have an optional checksum in a separate "integrity stream", in which case the file update strategy also implements allocation-on-write; this is controlled by a new "integrity" attribute applicable to both files and directories. If file data or metadata nevertheless becomes corrupt, the affected file can be deleted without taking the whole volume offline. As a result of the built-in resiliency, administrators do not need to periodically run error-checking tools such as CHKDSK when using ReFS. Compatibility with existing APIs and technologies ReFS does not require new system APIs, and most file system filters continue to work with ReFS volumes. ReFS supports many existing Windows and NTFS features such as BitLocker encryption, Access Control Lists, USN Journal, change notifications, symbolic links, junction points, mount points, reparse points, volume snapshots, file IDs, and oplock. ReFS seamlessly integrates with Storage Spaces, a storage virtualization layer that allows data mirroring and striping, as well as sharing storage pools between machines. ReFS resiliency features enhance the mirroring feature provided by Storage Spaces and can detect whether any mirrored copies of files have become corrupt using a background data scrubbing process, which periodically reads all mirror copies, verifies their checksums, and then replaces bad copies with good ones.
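The scrub-and-repair cycle just described can be illustrated with a short conceptual sketch. Everything below is a stand-in: the truncated hash, the in-memory data layout and the function names are invented for illustration and bear no relation to ReFS's actual on-disk format.

    # Conceptual model of checksum scrubbing over mirrored copies.
    import hashlib

    def checksum(block: bytes) -> bytes:
        # ReFS stores 64-bit checksums apart from the data; a truncated
        # SHA-256 stands in for that here.
        return hashlib.sha256(block).digest()[:8]

    def scrub(mirrors, checksums):
        """Verify every mirrored copy of every block and repair bad
        copies from a good one, as a background scrubber would."""
        for i, expected in enumerate(checksums):
            good = next((m[i] for m in mirrors
                         if checksum(m[i]) == expected), None)
            if good is None:
                print(f"block {i}: unrecoverable, no valid copy")
                continue
            for m in mirrors:
                if checksum(m[i]) != expected:
                    m[i] = good   # replace the bad copy with the good one
                    print(f"block {i}: repaired a corrupt mirror copy")

    blocks = [b"metadata", b"file data"]
    sums = [checksum(b) for b in blocks]
    mirror_a, mirror_b = list(blocks), list(blocks)
    mirror_b[1] = b"bit rot!"     # simulate silent corruption
    scrub([mirror_a, mirror_b], sums)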
Some NTFS features are not supported in ReFS, including object IDs, short names, file compression, file-level encryption (EFS), user data transactions, hard links, extended attributes, and disk quotas. Sparse files are supported. Support for named streams is not implemented in Windows 8 and Windows Server 2012, though it was later added in Windows 8.1 and Windows Server 2012 R2. ReFS does not itself offer data deduplication. Dynamic disks with mirrored or striped volumes are replaced with mirrored or striped storage pools provided by Storage Spaces. In Windows Server 2012, automated error correction with integrity streams is only supported on mirrored spaces; automatic recovery on parity spaces was added in Windows 8.1 and Windows Server 2012 R2. Booting from ReFS is not supported either. IIS 8.0 Windows Server 2012 includes version 8.0 of Internet Information Services (IIS). The new version adds features such as SNI (Server Name Indication), CPU usage caps for particular websites, centralized management of SSL certificates, WebSocket support and improved support for NUMA, but few other substantial changes were made. Remote Desktop Protocol 8.0 Remote Desktop Protocol 8.0 has new functions such as Adaptive Graphics (progressive rendering and related techniques), automatic selection of TCP or UDP as the transport protocol, multi-touch support, DirectX 11 support for vGPU, and USB redirection supported independently of vGPU support. A "connection quality" button is displayed in the RDP client connection bar for RDP 8.0 connections; clicking on it provides further information about the connection, including whether UDP is in use. Scalability Windows Server 2012 raises the maximum hardware specifications supported, improving on its predecessor, Windows Server 2008 R2. System requirements Windows Server 2012 does not support Itanium and runs only on x64 processors. Upgrades from Windows Server 2008 and Windows Server 2008 R2 are supported, although upgrades from prior releases are not. Editions Windows Server 2012 has four editions: Foundation, Essentials, Standard and Datacenter. Reception Reviews of Windows Server 2012 have been generally positive. Simon Bisson of ZDNet described it as "ready for the datacenter, today," while Tim Anderson of The Register said that "The move towards greater modularity, stronger automation and improved virtualisation makes perfect sense in a world of public and private clouds" but remarked that "That said, the capability of Windows to deliver obscure and time-consuming errors is unchanged" and concluded that "Nevertheless, this is a strong upgrade overall." InfoWorld noted that Server 2012's use of Windows 8's panned "Metro" user interface was countered by Microsoft's increasing emphasis on the Server Core mode, which had been "fleshed out with new depth and ease-of-use features" and increased use of the "practically mandatory" PowerShell. However, Michael Otey of Windows IT Pro expressed dislike of the new Metro interface and the lack of ability to use the older desktop interface alone, saying that most users of Windows Server manage their servers using the graphical user interface rather than PowerShell.
Paul Ferrill wrote that "Windows Server 2012 Essentials provides all the pieces necessary to provide centralized file storage, client backups, and remote access," but Tim Anderson contended that "Many businesses that are using SBS2011 and earlier will want to stick with what they have", citing the absence of Exchange, the inability to synchronize with Active Directory Federation Services and the 25-user limit, while Paul Thurrott wrote "you should choose Foundation only if you have at least some in-company IT staff and/or are comfortable outsourcing management to a Microsoft partner or solution provider" and "Essentials is, in my mind, ideal for any modern startup of just a few people." Windows Server 2012 R2 A second release, Windows Server 2012 R2, which is derived from the Windows 8.1 codebase, was released to manufacturing on August 27, 2013 and became generally available on October 18, 2013. A service pack, formally designated Windows Server 2012 R2 Update, was released in April 2014. See also Comparison of Microsoft Windows versions Comparison of operating systems History of Microsoft Windows List of operating systems Microsoft Servers Notes Extended Security Updates Microsoft announced in July 2021 that it would distribute Extended Security Updates for SQL Server 2012, Windows Server 2012, and Windows Server 2012 R2 for a maximum of three years after the end of the Extended Support date. End of Support Microsoft originally planned to end support for Windows Server 2012 and Windows Server 2012 R2 on January 10, 2023, but in order to provide customers the standard transition lifecycle timeline, Microsoft extended Windows Server 2012 and 2012 R2 support in March 2017 by nine months. With the end date now set, extended support for Windows Server 2012 will end on October 10, 2023. References Further reading External links Windows Server 2012 R2 and Windows Server 2012 on TechNet Windows Server 2012 R2 on MSDN Windows Server 2012 on MSDN Tutorials and Lab Manual Articles of Windows Server 2012 R2 Windows Server X86-64 operating systems 2012 software
IBM 7090/94 IBSYS IBSYS is the discontinued tape-based operating system that IBM supplied with its IBM 709, IBM 7090 and IBM 7094 computers. A similar operating system (but with several significant differences), also called IBSYS, was provided with IBM 7040 and IBM 7044 computers. IBSYS was based on the FORTRAN Monitor System (FMS) and, more likely, Bell Labs' "BESYS", rather than on the SHARE Operating System. IBSYS directly supported several older language processors on the $EXECUTE card: 9PAC, FORTRAN and IBSFAP. Newer language processors ran under IBJOB. IBSYS System Supervisor IBSYS itself is a resident monitor program that reads control card images placed between the decks of program and data cards of individual jobs. An IBSYS control card begins with a "$" in column 1, immediately followed by a Control Name that selects the various IBSYS utility programs needed to set up and run the job (a schematic sketch of this card-scanning convention follows this article). These card deck images are usually read from magnetic tapes prepared offline, not directly from the card reader. IBJOB Processor The IBJOB Processor is a subsystem that runs under the IBSYS System Supervisor. It reads control cards that request services such as compilation and execution. The languages supported include COBOL, Commercial Translator (COMTRAN), Fortran IV (IBFTC) and Macro Assembly Program (IBMAP). See also University of Michigan Executive System Timeline of operating systems Further reading Noble, A. S., Jr., "Design of an integrated programming and operating system", IBM Systems Journal, June 1963. "The present paper considers the underlying design concepts of IBSYS/IBJOB, an integrated programming and operating system. The historical background and over-all structure of the system are discussed. Flow of jobs through the IBJOB processor, as controlled by the monitor, is also described." "IBM 7090/7094 IBSYS Operating System Version 13 System Monitor (IBSYS)", Form C28-6248-7 "IBM 7090/7094 IBSYS Operating System Version 13 IBJOB Processor", Form C28-6389-1 "IBM 7090/7094 IBSYS Operating System Version 13 IBJOB Processor Debugging Package", Form C28-6393-2 Notes External links IBM 7090/94 IBSYS Operating System, Jack Harper Dave Pitts' IBM 7090 support IBSYS source archived with Bitsavers History of FORTRAN and FORTRAN II – FORTRAN II and other software running on IBSYS, Software Preservation Group, Computer History Museum Discontinued operating systems 1960 software
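As the schematic sketch promised above: the loop below scans a job stream of card images, treating any card with "$" in column 1 as a control card and everything else as part of the current job's deck. Apart from the $EXECUTE card named in the article, the deck contents are invented for illustration.

    # Toy scanner for an IBSYS-style job stream of card images.
    def read_job_stream(cards):
        for card in cards:
            if card.startswith("$"):
                # Control card: "$" in column 1, Control Name follows.
                fields = card[1:].split()
                name = fields[0] if fields else "(blank)"
                print(f"control card: {name}")
            else:
                # Anything else is a program or data card belonging to
                # the job currently being set up.
                print(f"  deck card: {card.rstrip()}")

    deck = [
        "$EXECUTE  IBJOB",        # selects a processor, per the article
        "      PROGRAM CARD 1",   # invented deck contents
        "      DATA CARD 1",
    ]
    read_job_stream(deck)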
Object Oriented Input System OIS (Object-Oriented Input System) is a code library for constructing a human-computer interface with input devices such as a keyboard, mouse or game controller. OIS is designed so that software developers can easily use input from these devices with a computer application. General information The Object-Oriented Input System is a mostly C++ library for handling input. Input types include mouse, keyboard, joystick and Wii Remote. OIS is meant to be cross-platform, supporting Windows and Linux systems. OS X and FreeBSD are only partially supported. Features OIS uses an object-oriented design. Various types of input, including mouse, keyboard, joystick and Wii Remote, are supported. OIS can handle force feedback devices. (A conceptual sketch of this style of input handling follows this article.) External links Project Homepage Project Repository (GitHub) SourceForge project page (legacy) Ohloh project page References Free software programmed in C++
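As mentioned under Features, here is a conceptual sketch of the listener-based, object-oriented style of input handling that libraries like OIS provide. OIS itself is a C++ library; this Python sketch only mirrors the design idea, and all class and method names are invented rather than taken from OIS's actual API.

    # Object-oriented input handling: devices buffer events and
    # dispatch them to registered listeners once per frame.
    class KeyListener:
        def key_pressed(self, key): ...
        def key_released(self, key): ...

    class Keyboard:
        def __init__(self):
            self.listeners, self.pending = [], []

        def inject(self, kind, key):
            # Stands in for events arriving from the OS layer.
            self.pending.append((kind, key))

        def capture(self):
            # Flush buffered events to every registered listener.
            for kind, key in self.pending:
                for listener in self.listeners:
                    getattr(listener, f"key_{kind}")(key)
            self.pending.clear()

    class Logger(KeyListener):
        def key_pressed(self, key):  print("down:", key)
        def key_released(self, key): print("up:  ", key)

    kb = Keyboard()
    kb.listeners.append(Logger())
    kb.inject("pressed", "ESC")
    kb.inject("released", "ESC")
    kb.capture()   # a game would call this once per frame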
Systems programming Systems programming, or system programming, is the activity of programming computer system software. The primary distinguishing characteristic of systems programming when compared to application programming is that application programming aims to produce software which provides services to the user directly (e.g., a word processor), whereas systems programming aims to produce software and software platforms which provide services to other software, are performance constrained, or both (e.g., operating systems, computational science applications, game engines, industrial automation, and software as a service applications). Systems programming requires a great degree of hardware awareness. Its goal is to achieve efficient use of available resources, either because the software itself is performance critical or because even small efficiency improvements directly transform into significant savings of time or money. Overview The following attributes characterize systems programming: The programmer can make assumptions about the hardware and other properties of the system that the program runs on, and will often exploit those properties, for example by using an algorithm that is known to be efficient when used with specific hardware. Usually a low-level programming language or programming language dialect is used so that: Programs can operate in resource-constrained environments Programs can be efficient with little runtime overhead, possibly having either a small runtime library or none at all Programs may use direct and "raw" control over memory access and control flow The programmer may write parts of the program directly in assembly language Often systems programs cannot be run in a debugger. Running the program in a simulated environment can sometimes reduce this problem. Systems programming is sufficiently different from application programming that programmers tend to specialize in one or the other. In systems programming, often limited programming facilities are available. The use of automatic garbage collection is not common and debugging is sometimes hard to do. The runtime library, if available at all, is usually far less powerful and does less error checking. Because of those limitations, monitoring and logging are often used; operating systems may have extremely elaborate logging subsystems. Implementing certain parts of operating systems and networking requires systems programming, for example implementing paging (virtual memory) or a device driver for an operating system.
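As a small concrete taste of the OS-facing facilities involved, the sketch below uses memory-mapped file I/O, which rides on the same paging machinery mentioned above. Systems code would normally do this in C via mmap(2); Python's standard mmap module wraps the same facility and is used here purely for brevity, with a throwaway temporary file.

    # Memory-mapped file I/O: reads and writes go through the page
    # cache via the mapping rather than through read()/write() calls.
    import mmap, os, tempfile

    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"hello, systems world")
        with mmap.mmap(fd, 0) as mm:   # map the whole file
            mm[0:5] = b"HELLO"         # write through the mapping
            print(mm[:20].decode())    # HELLO, systems world
    finally:
        os.close(fd)
        os.unlink(path)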
History Originally, systems programmers invariably wrote in assembly language. Experiments with hardware support in high-level languages in the late 1960s led to such languages as PL/S, BLISS, BCPL, and extended ALGOL for Burroughs large systems. Forth also has applications as a systems language. In the 1970s, C became widespread, aided by the growth of Unix. More recently, a subset of C++ called Embedded C++ has seen some use; for instance, it is used in the I/O Kit drivers of macOS. Alternative Meaning For historical reasons, some organizations use the term systems programmer to describe a job function which would be more accurately termed systems administrator. This is particularly true in organizations whose computer resources have historically been dominated by mainframes, although the term is even used to describe job functions which do not involve mainframes. This usage arose because administration of IBM mainframes often involved the writing of custom assembler code (IBM's Basic Assembly Language (BAL)), which integrated with operating systems such as OS/MVS, DOS/VSE or VM/CMS. Indeed, some IBM software products had substantial code contributions from customer programming staff. This type of programming is progressively less common, but the term systems programmer is still the de facto job title for staff directly administering IBM mainframes. See also Ousterhout's dichotomy System programming language Scripting language Interrupt handler References Further reading Systems Programming by John J. Donovan Computer programming System software
32-bit disk access 32-bit Disk Access (also known as FastDisk) refers to a special disk access and caching mode available in older, MS-DOS-based Microsoft Windows operating systems. It was a set of protected mode device drivers that worked together to take advantage of advanced disk I/O features in the system BIOS. It filtered interrupt 13h BIOS calls to the disk controller and directed them in the most efficient way for the system — either through the 32-bit interface with the hard disk controller or through the system BIOS (a schematic model of this routing appears after this article). 32-bit Disk Access gave Windows more pageable memory, allowing it to page MS-DOS–based applications to disk and free enough RAM for applications when they needed it. Enabling this mode would sometimes break older applications of the day. Windows 3.1 had an option in its 386 Enhanced Control Panel that would enable 32-bit read and write access in 386 enhanced mode. Usually, 32-bit read could be safely enabled, but 32-bit write had issues with a number of applications. 32-bit Disk Access was the feature that made it possible to page MS-DOS applications to disk: without it, if the real mode disk code (the Int 13h handler) was paged out, the virtual DOS machine would loop forever. 32-bit disk access should not be confused with 32-bit file access. Although the two technologies are similar, 32-bit disk access was introduced with Windows 3.1 and 32-bit file access with Windows for Workgroups 3.11. 32-bit file access provided a 32-bit code path for Windows to directly access the disk bus by intercepting the MS-DOS Int 21H services while remaining in 386 protected mode and running at full CPU speed, rather than having MS-DOS handle the Int 21H services in real mode. 32-bit disk access offers a smaller performance benefit than 32-bit file access and is less likely to work on a given computer; 32-bit file access does not require 32-bit disk access. Windows 95, Windows 98, and Windows Me use native, protected mode 32-bit disk drivers during normal operation. However, Safe Mode uses MS-DOS real mode disk drivers instead. Real mode MS-DOS drivers could also be used during normal operation for disk peripherals for which Windows did not have native drivers. 32-bit versions of the Windows NT family of operating systems, including the newer Windows 2000, Windows XP, Windows Server 2003, Windows Vista and later, always have 32-bit disk drivers active, cannot use MS-DOS drivers at all, and the expression is not used for them. References https://web.archive.org/web/20070324064925/http://pclt.cis.yale.edu/pclt/OPSYS/WFWG311.HTM Windows architecture
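As flagged above, here is a schematic model of that routing decision. Everything in this sketch is illustrative: real FastDisk hooking happens at the interrupt-vector level inside the 386 virtual machine manager, not in application code, and the function names are invented.

    # Toy model of Int 13h filtering: take the 32-bit protected-mode
    # path when the controller is supported, else fall back to BIOS.
    def bios_int13h(request):
        return f"BIOS handled {request} (real mode, slow path)"

    def fastdisk_handler(request, controller_supported):
        if controller_supported:
            return f"32-bit driver handled {request} (protected mode)"
        return bios_int13h(request)   # pass the call down to the BIOS

    print(fastdisk_handler("read sector 42", controller_supported=True))
    print(fastdisk_handler("read sector 42", controller_supported=False))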
Systems management Systems management refers to enterprise-wide administration of distributed systems including (and commonly in practice) computer systems. Systems management is strongly influenced by network management initiatives in telecommunications. Application performance management (APM) technologies are now a subset of systems management. Maximum productivity can be achieved more efficiently through event correlation, system automation and predictive analysis, all of which are now part of APM.
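As a rough illustration of the event correlation just mentioned, the sketch below collapses bursts of related events from a single host into incidents. The event format and the 60-second window are invented for the example; real products add far more sophisticated rules.

    # Group each host's events into incidents split on quiet gaps.
    from itertools import groupby

    events = [  # (seconds, host, message)
        (100, "web01", "disk 90% full"),
        (130, "web01", "disk 95% full"),
        (140, "db02",  "backup finished"),
        (950, "web01", "disk 99% full"),
    ]

    def correlate(events, window=60):
        incidents = []
        ordered = sorted(events, key=lambda e: (e[1], e[0]))
        for host, evs in groupby(ordered, key=lambda e: e[1]):
            current = []
            for ev in evs:
                # A gap longer than the window starts a new incident.
                if current and ev[0] - current[-1][0] > window:
                    incidents.append((host, current))
                    current = []
                current.append(ev)
            incidents.append((host, current))
        return incidents

    for host, evs in correlate(events):
        print(f"{host}: {len(evs)} event(s), first: {evs[0][2]}")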
Centralized management has a time and effort trade-off that is related to the size of the company, the expertise of the IT staff, and the amount of technology being used: For a small business startup with ten computers, automated centralized processes may take more time to learn how to use and implement than just doing the management work manually on each computer. A very large business with thousands of similar employee computers may clearly be able to save time and money by having IT staff learn to do systems management automation. A small branch office of a large corporation may have access to a central IT staff, with the experience to set up automated management of the systems in the branch office, without need for local staff in the branch office to do the work. Systems management may involve one or more of the following tasks: Hardware inventories. Server availability monitoring and metrics. Software inventory and installation. Anti-virus and anti-malware. User activity monitoring. Capacity monitoring. Security management. Storage management. Network capacity and utilization monitoring. Anti-manipulation management. Functions Functional groups are provided according to the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) Common Management Information Protocol (X.700) standard. This framework is also known as Fault, Configuration, Accounting, Performance, Security (FCAPS). Fault management Troubleshooting, error logging and data recovery Configuration management Hardware and software inventory As we begin the process of automating the management of our technology, what equipment and resources do we have already? How can this inventorying information be gathered and updated automatically, without direct hands-on examination of each device, and without hand-documenting with a pen and notepad? What do we need to upgrade or repair? What can we consolidate to reduce complexity or reduce energy use? What resources would be better reused somewhere else? What commercial software are we using that is improperly licensed, and either needs to be removed or more licenses purchased? Provisioning What software will we need to use in the future? What training will need to be provided to use the software effectively? Software deployment What steps are necessary to install it on perhaps hundreds or thousands of computers? Package management How do we maintain and update the software we are using, possibly through automated update mechanisms? Accounting management Billing and statistics gathering Performance management Software metering Who is using the software and how often? If the license says only so many copies may be in use at any one time but may be installed in many more places than licensed, then track usage of those licenses. If the licensed user limit is reached, either prevent more people from using it, or allow overflow and notify accounting that more licenses need to be purchased. Event and metric monitoring How reliable are the computers and software? What errors or software bugs are preventing staff from doing their job? What trends are we seeing for hardware failure and life expectancy? Security management Identity management Policy management However, this standard should not be treated as comprehensive; there are obvious omissions. Some are recently emerging sectors, some are implied and some are just not listed. The primary ones are: Business Impact functions (also known as Business Systems Management) Capacity management Real-time Application Relationship Discovery (which supports Configuration Management) Security Information and Event Management functions (SIEM) Workload scheduling Performance management functions can also be split into end-to-end performance measuring and infrastructure component measuring functions. Another recently emerging sector is operational intelligence (OI), which focuses on real-time monitoring of business events that relate to business processes, not unlike business activity monitoring (BAM). Standards Distributed Management Task Force (DMTF) Alert Standard Format (ASF) Common Information Model (CIM) Desktop and mobile Architecture for System Hardware (DASH) Systems Management Architecture for Server Hardware (SMASH) Java Management Extensions (JMX) Academic preparation Schools that offer or have offered degrees in the field of systems management include the University of Southern California, the University of Denver, Capitol Technology University, and Florida Institute of Technology. See also List of systems management systems Application service management Enterprise service management Business activity monitoring Business transaction management Computer Measurement Group Event correlation Network management Operational intelligence System administration References Bibliography External links Standards for Automated Resource Management IT Systems management Forum Nederland Computer systems System administration
Mac OS X 10.0 Mac OS X 10.0 (code named Cheetah) is the first major release and version of macOS, Apple's desktop and server operating system. Mac OS X 10.0 was released on March 24, 2001 for a price of US$129. It was the successor of the Mac OS X Public Beta and the predecessor of Mac OS X 10.1 (code named Puma). Mac OS X 10.0 was a radical departure from the classic Mac OS and was Apple's long-awaited answer for a next-generation Macintosh operating system. It introduced a brand new code base, completely separate from that of Mac OS 9 and all previous Apple operating systems, and had a new Unix-like core, Darwin, which featured a new memory management system. Unlike subsequent releases starting with Mac OS X 10.2, Mac OS X 10.0 was not externally marketed with its codename. System requirements Supported Computers: Power Macintosh G3 Beige, G3 B&W, G4, G4 Cube, iMac, PowerBook G3, PowerBook G4, iBook RAM: 128 MB (unofficially 64 MB minimum) Hard Drive Space: 1,500 MB (800 MB for the minimal install) Features Dock — the Dock was a new way of organizing one's Mac OS X applications on a user interface, and a change from the classic method of application launching in previous Mac OS systems. OSFMK 7.3 — the Open Software Foundation Mach kernel, which was part of the XNU kernel for Mac OS X and one of the largest changes from a technical standpoint in Mac OS X. Terminal — the Terminal was a feature that allowed access to Mac OS X's underpinnings, namely the Unix core. Mac OS had previously had the distinction of being one of the few operating systems with no command line interface at all. Mail — email client. Address Book TextEdit — new on-board word processor, replacement for SimpleText. Full preemptive multitasking support, a long-awaited feature on the Mac. PDF Support (create PDFs from any application) Aqua UI — new user interface Built on Darwin, a Unix-like operating system. OpenGL AppleScript Support for Carbon and Cocoa APIs Sherlock — desktop and web search engine. Protected memory — memory protection so that if an application corrupts its memory, the memory of other applications will not be corrupted. Limitations File-sharing client — The system can only use TCP/IP, not AppleTalk, to connect to servers sharing the Apple Filing Protocol. It cannot use SMB to connect to Windows or Samba servers. File-sharing server — As a server, the system can share files using only the Apple Filing Protocol (over TCP/IP), HTTP, SSH, and FTP. Optical media — DVD playback is not supported, and CDs cannot be burned. Multilingual snags Mac OS X 10.0 began a short era (that ended with Mac OS X 10.2 Jaguar's release) where Apple offered two types of installation CDs: 1Z and 2Z CDs. The difference between the two lay in the extent of multilingual support. Input method editors of Simplified Chinese, Traditional Chinese, and Korean were only included with the 2Z CDs. They also came with more languages (the full set of 15 languages), whereas the 1Z CDs came only with about eight languages and could not actually display simplified Chinese, traditional Chinese or Korean (except for the Chinese characters present in Japanese Kanji). A variant of the 2Z CDs was introduced when Mac OS X v10.0.3 was released to the Asian market (this variant could not be upgraded to version 10.0.4). The brief period of multilingual confusion ended with the release of v10.2. Currently, all Mac OS X installer CDs and preinstallations include the full set of 15 languages and full multilingual compatibility.
Release history References External links Mac OS X v10.0 review at Ars Technica from apple.com 2001 software PowerPC operating systems
Caldera International Caldera International, earlier Caldera Systems, was an American software company that existed from 1998 to 2002 and developed and sold Linux- and Unix-based operating system products. Caldera Systems was created in August 1998 as a spinoff of Caldera, Inc., with Ransom Love as its CEO. It focused on selling Caldera OpenLinux, a high-end Linux distribution aimed at business customers that included features it developed, such as an easy-to-use graphical installer and graphical and web-based system administration tools, as well as features from bundled proprietary software. Caldera Systems was also active in the community around the Java language and software platform on Linux. In March 2000, Caldera Systems staged a successful IPO of its stock, although the stock price did not reach the stratospheric heights of its chief competitor Red Hat and some other companies during the "Linux mania" of 1999. In August 2000, Caldera Systems announced the purchase of Unix technology and services from the Santa Cruz Operation (SCO). The much larger, merged company changed its name to Caldera International when the deal closed in May 2001. Caldera International sought to shape SCO's UnixWare product (renamed Open UNIX) to present a unified view of Unix and Linux that could satisfy high-end business needs and take advantage of SCO's large reseller channel. The Volution suite of higher-layer solutions for system management, mail and messaging, and authentication also had the same goal. Caldera International was part of the United Linux effort of Linux companies seeking to form a common distribution that could compete with Red Hat. In the end, none of these efforts succeeded in the marketplace, and Caldera Systems/International lost large amounts of money in all four years of its existence. Under severe financial pressure, in June 2002 Love was replaced as CEO by Darl McBride, who soon adopted the corporate name The SCO Group and took that entity in a completely different business direction. Caldera Systems Background and formation Caldera, Inc., based in Utah, was founded in 1994 by Bryan Wayne Sparks and Ransom H. Love, receiving start-up funding from Ray Noorda's Canopy Group. Its main product was Caldera Network Desktop (CND), a Linux distribution mainly targeted at business customers and containing some proprietary additions. Caldera, Inc. later purchased the German LST Software GmbH and its LST Power Linux distribution, which was made the basis of their subsequent product, Caldera OpenLinux (COL). Caldera, Inc. inherited a lawsuit against Microsoft when it purchased DR-DOS from Novell in 1996. This Caldera v. Microsoft action related to Caldera's claims of monopolization, illegal tying, exclusive dealing, and tortious interference by Microsoft. On September 2, 1998, Caldera, Inc. announced the creation of two Utah-based wholly owned subsidiaries, Caldera Systems, Inc. and Caldera Thin Clients, Inc., in order to split responsibilities and product directions between them. Caldera Systems, whose actual incorporation date had been August 21, 1998, took over the Linux business, including development, training, services, and support, while Caldera Thin Clients (which changed its name to Lineo the following year) took over the DOS and embedded business. The shell company Caldera, Inc., remained responsible for the lawsuit only. "Linux for Business" Caldera Systems was headquartered in Orem, Utah, and was headed by co-founder Ransom Love as President and CEO.
Caldera Deutschland GmbH, based in Erlangen, Germany, served as its Linux development center. Drew Spencer joined in 1999 and became the company's Chief Technology Officer. The company targeted the Linux-based software business with its Linux distribution named Caldera OpenLinux, and the Caldera Systems business plan stressed the importance of corporate training, support, and services. Towards this end they created a professional certification program for Linux as well as for the KDE desktop that the Caldera Systems distribution used. In doing so they worked with the Linux Professional Institute in developing class materials and created a series of Authorized Linux Education Centers around the globe that prepared students for the Linux Professional Institute Certification Programs. Beginning courses trained on several different Linux distributions as well as Caldera OpenLinux, while more advanced courses focused on OpenLinux only (the name OpenLinux tended to annoy other Linux distributions, suggesting as it did that the others were not open). The early leader in the Linux-as-a-business race was Red Hat Software, which attracted equity investments from several major technology companies in early 1999. Red Hat also tended to get the most media attention. Besides Red Hat and Caldera, other well-known companies selling Linux distributions included SuSE, Turbolinux, and Mandrake Soft. But no company at the time had been successful in building a profitable business around open source software. Caldera Systems focused on a high-end Linux product, and its Linux distribution became rich with features through bundled proprietary software. For instance, the company offered NetWare for Linux, which included a full-blown NetWare implementation from Novell. They licensed Sun Microsystems's Wabi to allow people to run Windows applications under Linux. Additionally, they shipped Linux versions of WordPerfect from Corel as well as productivity applications from Applixware. Since many of their customers used a dual-boot setup, Caldera shipped PowerQuest's PartitionMagic to allow their customers to non-destructively repartition their hard disks. This approach led to a debate about the purity of Linux-based products. Red Hat CEO Bob Young said in 1999, "One where you might see a problem is Caldera, because they see part of their value added in proprietary tools they have licensed from third parties." In response, a Caldera Systems executive expressed the company's philosophy: "We have produced a product that combines the best of open-source and commercial packages; we are doing Linux for business. We do add to it commercial packages that allow business users to easily integrate it." Caldera OpenLinux was also available on a retail basis, in the form of a CD-ROM for installing Linux on an IBM PC compatible machine that sold for . OpenLinux 2.2, released in April 1999, was seen as significantly improved from the previous year's 1.3 release, especially in its fully graphical and easy-to-use installation feature. Ease of installation was an important criterion in selecting a Linux distribution, and Caldera Deutschland had created this first fully graphical installer for Linux, called Lizard, starting in November 1998. Several years later it was still receiving praise from reviewers. The installer could even be started from a Microsoft Windows partition.
Industry writer Hal Plotkin praised Caldera as a product development company and noted that OpenLinux won several industry awards, including 1999 product of the year from Linux Journal. Other products and projects In addition to other people's applications, Caldera Systems created many Linux extensions to fill voids that no other commercial company was addressing. Caldera Systems created a full-featured GUI system administration tool called Caldera Open Administration System (COAS) that was deployed during 1999. The tool was a unified, easy-to-use administration tool with a modular design and goals of scalability and broad applicability, and was expressly designed to be usable on other Linux distributions in addition to Caldera Systems'. Following that, Caldera Systems sponsored the development of browser-based Unix system administration via the Webmin project between 1999 and 2001. It became the first Linux distribution to include Webmin as the standard tool for system administration. Caldera Systems was a leader in the adoption of the Java language and software platform on Linux. The Blackdown Java project, which first produced working Java ports for Linux systems, was featured on Caldera OpenLinux. In 2000, Caldera Systems was one of the companies elected to the inaugural JCP Executive Committee for Java SE/EE, which guided the evolution of the Java language and software platform through the Java Community Process. Caldera Systems' role on the Executive Committee included representing the Linux and open source communities. The company was re-elected to its seat on the Executive Committee after it became Caldera International, and represented Java usage on SCO Unix platforms as well. Work to improve just-in-time compilation under the Sun "Classic JVM" for SCO Unix platforms that had begun under SCO was completed under Caldera International. Caldera Systems was also involved in several Java Specification Requests, including being the specification lead for JSR 140, Service Location Protocol API for Java, and participating in the JSR 48 WBEM Services Specification. Investments and IPO Caldera Systems had not been profitable; for the company's 1998 fiscal year, ending on October 31, it had a loss of $7.9 million on revenue of $1.05 million, and for its 1999 fiscal year, it had a loss of $9.3 million on revenue of $3.05 million. However, the industry saw promise in Linux as a solution for businesses, and in the latter half of 1999 a "Linux hysteria" had erupted in the stock market, with first Red Hat in August 1999 and then Cobalt Networks and VA Linux in November and December 1999 each experiencing huge jumps in value on their first day of trading. On January 10, 2000, three things happened, all of them coincidental in their timing. A settlement to the Caldera v. Microsoft suit over DR-DOS was announced, with Microsoft paying former parent company Caldera, Inc. an amount estimated at $275 million (which turned out to be $280 million). Caldera Systems received a $30 million private equity investment from a group of companies that included Sun Microsystems, Novell, Citrix, Santa Cruz Operation, Chicago Venture Partners, and Egan-Managed Capital, with the goal to "fund operations and accelerate the growth and acceptance of Linux." Also, Caldera Systems announced that it would be filing for an initial public offering. Ransom Love said that the Microsoft settlement would not benefit Caldera Systems other than that Caldera, Inc.
would relinquish the name "Caldera", which would address existing industry confusion between the two. Reports at the time also indicated that the settlement would not directly benefit Caldera Systems, but that Caldera Systems could get an intangible benefit from a name association with a company that had bested an industry giant. Love also said that the timing between the funding round, work for which had begun six months earlier, and the IPO announcement was "unfortunate, and completely coincidental". Caldera Systems reincorporated in Delaware on March 6, 2000. By this point it was well positioned in some respects, such as having a strong relationship with Sun and receiving good product reviews within the industry. But it suffered from a lack of public awareness; as IDC analyst Dan Kusnetzky said, "They have a wonderful demo, and the product looks very good. But if you asked people on the street about Caldera they would probably think you are talking about a volcano in Hawaii." The company then staged an IPO of its common stock, under the symbol CALD. On the first day of trading, March 21, 2000, Caldera Systems' shares doubled in value, going from an initial price of $14 to close at $29 7/16, with heavy trading seen and an intra-day high of $33. The IPO raised $70 million for the company and gave it a market capitalization of $1.1 billion. While the launch was successful on its own terms, analysts saw signs that the Linux mania was finally cooling, abetted by Red Hat and VA Linux having seen their values steadily decrease since their spectacular starts. So, while some observers viewed the IPO as a success, others viewed it as a disappointment. Red Hat continued to dominate in North America, with an over 50 percent share of the Linux market. Caldera International Acquisition of SCO UNIX On August 2, 2000, following several months of negotiations, the Santa Cruz Operation announced that it would sell its Server Software and Services Divisions, including UnixWare – its most technically advanced proprietary Unix operating system for Intel commodity hardware – to Caldera Systems. (The agreement was phrased in terms of Caldera Holding, Inc., a typical Newco in such transactions.) The annual SCO Forum conference of developers and resellers at the University of California, Santa Cruz, held later that month, had its name shortened to just "Forum". The deal was complex, involving cash, stock, and loans, and difficult to evaluate monetarily, but based on the price of Caldera Systems stock at the time it was worth around $110–114 million. SCO was much the bigger company, with 900 employees to Caldera Systems' 120. But SCO had been in distress; in part due to the advent of Linux, a series of previously good financial results had gone sour for the company as 1999 turned into 2000. As Forbes magazine stated, "Questions remain about execution, but the deal is at least a temporary life preserver for SCO, whose flagship UnixWare server software was in danger of eventually becoming irrelevant in the face of Linux." As Caldera Systems saw it, Unix and Linux were complementary rather than competitive technologies, especially in the sense that SCO Unix represented a good back-office and database solution while Linux specialized in networking. The deal gave Caldera Systems access to partnerships with Compaq Computer and IBM, both of which resold UnixWare, and also meant Caldera Systems would become the world's largest vendor of Unix licenses.
SCO also had thousands of business applications running on its platforms, targeted at vertical markets. In addition, Caldera Systems saw SCO's role as one of the OS companies involved in Project Monterey as a means to develop a 64-bit computing strategy. But a primary reason for the acquisition was to get SCO's large reseller channel. Caldera Systems had been emphasizing trying to get into much the same VAR channel business that SCO was in, using the argument that resellers could find larger margins with free software than by selling Microsoft's Windows NT. But it had been a difficult sell against SCO; even when Linux outperformed SCO Unix, the idea of switching vendors and support organizations made resellers reluctant to make the move. So combining these channels was seen as a solution to this problem. As the president of iXorg, a reseller organization focused on SCO, stated, "The real value that Caldera will get from the deal is not the Unix name, not the [SCO] customer base, not even the technologies. It is the reseller channel." Skeptics noted, however, that many of those listed resellers were probably not that active anymore, especially in light of SCO's recent struggles (it had reported a $19 million quarterly loss a week before the acquisition announcement). Traditional SCO users were leery of the move, but Love tried to reassure them that the SCO Unix operating systems would continue on: "Why would we buy it to destroy what we buy? That wouldn't make any sense." There were hurdles to be overcome, including a fair amount of enmity for SCO within the Linux community. A major question became whether Caldera Systems would make the SCO-acquired Unix source code open source. Ransom Love initially said, "While we're having to look carefully at the licensing, we're going to open up the [UnixWare] source as much as possible, and at least some of it will be under" the GNU General Public License. But there was pushback on the idea from the UnixWare staff in New Jersey, and in addition the license issues involved proved formidable. Love later said, "at first we wanted to open-source all of Unix's code, but we quickly found that even though we owned it, it was, and still is, full of other companies' copyrights. The challenge was that there were a lot of business entities that didn't want this to happen. Intel was the biggest opposition." Instead, there was a focus on SCO's Linux Kernel Personality (LKP), a layer that conformed to the Linux Standard Base specification and would allow applications built for Linux to run on SCO's UnixWare. This was seen both as a way to capture more applications for Unix and as a way to increase the performance of high-end applications. The latter factor was because SCO UnixWare had an advantage over Linux at the time in terms of support for 16- and 32-way symmetric multiprocessing, UnixWare NonStop Clusters, and some other high-end operating system capabilities. Indeed, one SCO product manager said that some Linux applications could run several times faster under UnixWare with LKP than they could under native Linux. The SCO acquisition was originally scheduled to close in October 2000, but was delayed due to concerns from the Securities and Exchange Commission (SEC) regarding the details of the merger. However, the two companies' support organizations did get combined during this time. In addition, there was confusion among the SCO customer base about the fate of its other operating system, SCO OpenServer.
So in February 2001, the deal was renegotiated to include OpenServer in what was sold to Caldera Systems, although a percentage of OpenServer revenue would still go back to SCO. The monetary terms of the deal were adjusted as well, with Caldera Systems paying SCO more cash than in the original agreement. Analysts were skeptical that these multiple operating systems could be managed without considerable difficulties being encountered. Financial pressure on the company continued; for fiscal 2000, ending on October 31, Caldera Systems lost $39.2 million on revenue of $4.3 million. "Unifying Unix with Linux for Business" The merger was originally being done under the name of the holding company Caldera, Inc. Then on March 26, 2001, during the CeBIT conference in Germany, Caldera Systems announced that it would be changing its name to Caldera International once the SCO acquisition was complete. By this point, the length and difficulty of the acquisition process had alienated some longtime SCO customers and partners. The acquisition closed on May 7, 2001, and the new Caldera International name became effective. The merged company had major offices in not just Utah, but also Santa Cruz, California, Murray Hill, New Jersey, and Watford, England, as well as smaller facilities in 16 additional countries. This included Caldera International, with investments from Fujitsu and Hitachi, opening the Caldera K.K. subsidiary in Tokyo, Japan, in late May 2001; it was directed by Makoto Asoh, who had previously run Nihon SCO, which had been one of two SCO subsidiaries in that country. Overall, SCO had an infrastructure presence of some kind in 80 countries, whereas Caldera Systems had always been largely domestic; this was in part the rationale for the name change. "Unifying Unix with Linux for Business" became the company's new marketing slogan. In light of that, the company began the Caldera Developer Network, which was intended to give developers of all kinds "early access to UNIX and Linux technologies, allowing them to develop on UNIX, on Linux or on a combined UNIX and Linux platform." Caldera International's initial release of UnixWare was renamed Open UNIX 8. This release was what would have been UnixWare 7.1.2. While it may have been done to make the branding more consistent with OpenLinux and OpenServer, it confused people as well as build and installation scripts that tested for the system name. Later, the newly renamed SCO Group reverted to the previous UnixWare brand and version release numbering, releasing UnixWare 7.1.3. In terms of the question of making some of UnixWare open source, in August 2001 Caldera International did announce that it was placing the code for the regular expression parser and the grep and awk commands, as well as for the AIM Multiuser Benchmark, under the GNU General Public License. It also said it would begin an "Open Access to Open UNIX 8" program to allow developer partners read-only viewing of unencumbered parts of the source base. But overall, Caldera International found itself in a classic business problem where the interests of the existing business conflicted with its growth model. SCO Unix was mature and sold itself (mainly to repeat customers and replicated sites). The VAR relationship was even more problematic. Even though the reseller organizations had been combined, in reality the prior SCO resellers made much more from each SCO Unix sale than from sales of Caldera OpenLinux, so they were not anxious to move existing customers from Unix to Linux.
And even those that were supportive of Linux did not necessarily see a strong value add for Caldera International products and could often sell Red Hat Enterprise Linux instead. Volution The Volution program was created out of the desire to create a layer of functionality on top of Linux and the Open UNIX 8 Linux Kernel Personality that would add value to the operating system offerings. It would end up having four main components: Volution Manager, Volution Messaging Server, Volution Online, and Volution Authentication Server, with an effort to build a common console for a unified user experience. As Ransom Love said, "Volution is a complex and extensive platform". In January 2001, Caldera Systems first shipped Volution Manager, a browser-based systems administration solution. Intended for service providers and corporate accounts, it was based around OpenLDAP and Novell eDirectory. It featured some sophisticated functionality, but its initial user interface was limited in some ways and the product was costly. Caldera Systems made a deal in February 2001 with Acrylis, Inc., a company based in Chelmsford, Massachusetts, to offer Acrylis's subscription-based service that allowed system administrators to test and then update Linux systems over a network. The service also delivered alerts to customers regarding the necessity for upgrades. The effort was an attempt to compete with the Red Hat Network service and gain a source of recurring revenue. Then in May 2001, Caldera International bought the WhatifLinux technology and assets outright from Acrylis, and changed the name of the service to Volution Online. Caldera Systems had earlier begun work on a Linux equivalent to the Microsoft Exchange Server that was aimed at the small to medium business market. This would eventually become the Volution Messaging Server, which was released in late 2001 for use on Caldera OpenLinux and Open UNIX 8 with LKP. It offered shared calendaring and scheduling options, SSL support for e-mail, simple configuration, and integration with Microsoft Outlook. However, there were already a number of mail servers available for Linux and none of them had taken off in the business market. Caldera Systems, and then Caldera International, had substantial experience with Web-Based Enterprise Management (WBEM), and its OpenWBEM implementation won the Best Open Source Project Award at LinuxWorld Conference and Expo in February 2002. That, combined with experience in the Kerberos authentication protocol and the difficulties of Windows–Unix integration, led Caldera International into research and development of an overall authentication solution that would find its place among Microsoft Active Directory, LDAP, Kerberos, and WBEM. The product of this work was the Volution Authentication Server, which allowed the management of Unix and Linux authentication via Active Directory. United Linux and continued decline When Caldera OpenLinux 3.1 Workstation was released in June 2001, it was with the requirement for per-seat licensing. This was part of what continued to bring criticism of Caldera in some quarters of the open source and free software communities; Free Software Foundation founder Richard Stallman subsequently said of Ransom Love, "He's only a parasite", to which Love took umbrage, responding, "Did Richard Stallman ever invest £50m in Linux? We did. I have been involved in the Linux community since my time at Novell in 1994. … I am not a greedy capitalist. I am only a businessman.
… You can't call our business model parasitic. We add value to Linux, so it can become successful. … I know that the open source movement has no clue about marketing, they underestimate it." United Linux was an attempt by a consortium of Linux companies to create a common base distribution for enterprise use, minimize duplication of engineering effort, and form an effective competitor to Red Hat. The founding members of United Linux were SuSE, Turbolinux, Conectiva, and Caldera International. The consortium was announced on May 30, 2002. The UnitedLinux distribution would be based mostly on SuSE Enterprise Linux rather than Caldera OpenLinux. The Caldera product name was changed to "Caldera OpenLinux powered by United Linux", which, as one Network World writer observed, was "certainly never going to become a catchphrase." UnitedLinux did attract some major hardware vendors in support, such as Hewlett-Packard, Intel, and AMD, with the goal of creating a uniform Linux distribution by the end of 2002. However, as CNET technology reporter Stephen Shankland wrote at the time, "UnitedLinux is widely viewed as an effort by second-tier Linux companies to gain the critical mass held by Linux leader Red Hat, but industry watchers are skeptical it will triumph." Other users saw the venture as more of a marketing move by a group of companies that were in difficulty. Intimations that UnitedLinux would also feature per-seat licensing were unpopular in the broader Linux community, and SuSE for their part said they had no such plans. Overall, the fortunes of Caldera International had been steadily declining, the SCO–Caldera combined total revenue having decreased from $170 million in 1999 to $70 million in 2001. The company was consistently reporting losses; for the third quarter of its fiscal year in 2001, for instance, it reported a net loss of $18.8 million against revenue of only $18.9 million. In the following quarter the company took a large write-down of the assets acquired from SCO, as they could no longer be accounted for as having the value they were originally thought to possess. For the fiscal year ending on October 31, 2001, Caldera International reported a loss of $131.4 million based on revenues of $40.4 million (the loss included a total amount of write-down and other non-cash and restructuring charges of $98.6 million). The Linux side of Caldera International was bleeding funds; it was spending $4 for each $1 it received in revenue. The only Linux distributor company that was doing even passably well at the time was Red Hat. Caldera International's UnixWare and OpenServer business continued to be focused on small and medium-sized businesses and replicated sites, the latter largely being represented by retail or franchise-based companies such as CVS Pharmacy, Kmart, Pizza Hut, Pep Boys, Nasdaq, and others. A typical deployment scenario was that of McDonald's, which had a server running SCO OpenServer in each store that collected data from point-of-sale devices and relayed it to corporate headquarters while also providing access to corporate applications. An example of Linux Kernel Personality being used was Shoppers Drug Mart, which used it to run a SilverStream Software application server on UnixWare. In part Caldera International's problems were due to the economic environment surrounding the collapse of the dot-com bubble; investors were very reluctant to put additional monies into unprofitable start-up companies.
The additional effects of the early 2000s recession were especially difficult for high-tech companies, with information technology spending slowing to a near halt. Overall the SCO side of the business often saw customers making do with what they had rather than buying anything new. The Caldera stock price was well under a dollar and NASDAQ was threatening to delist it. Financial analysts stopped their coverage of the company. On March 14, 2002, Caldera engaged in a 1-for-4 reverse stock split in order to get the stock price back over a dollar and avoid delisting. Also in March 2002, Caldera International moved its headquarters from Orem to Lindon, Utah. Several rounds of layoffs took place during this time. There was one in April 2001 that resulted in 32 employees losing their jobs. In September 2001 there was a layoff of 8 percent of the company's workforce, reducing it from 618 to 567 employees. A localized layoff hit the Santa Cruz office in April 2002. An especially broad, 15 percent layoff in May 2002 affected all areas of the company, with 73 people being let go and around 400 employees remaining. Offices in Chelmsford, Massachusetts and Erlangen, Germany were closed, representing what had been the development sites for Volution Online and the original Caldera OpenLinux. At the same time, the company's CTO, Drew Spencer, also departed. Plans to continue the company's annual Forum conference for the international SCO Unix community in Santa Cruz were scrapped, with a GeoForum event announced instead that would be held in multiple locations around the world and, in the United States, in Las Vegas, Nevada. Despite having earlier done the reverse stock split, as well as a stock buyback, in late June 2002 Caldera International received another delisting notice from NASDAQ. The company had less than four months' cash for operations. As Wired magazine later wrote, the company "faced a nearly hopeless situation." Change of management, name, and direction On June 27, 2002, Caldera International had a change in management, with Darl McBride, formerly an executive with Novell, FranklinCovey, and several start-ups, taking over as CEO from Ransom Love. At the same time, Caldera International said it would buy back its stock owned by Tarantella, Inc. and MTI Technology, thereby relieving itself of the obligation to pay a percentage of OpenServer revenue past a certain point to Tarantella. Love was put in charge of Caldera International's role in the UnitedLinux effort. IDC analyst Dan Kusnetzky said that while the UnitedLinux role was important, the removal of Love from the CEO post could be seen as "moving him away from the controls at Caldera to let someone else take over." Changes under McBride happened quickly. On August 26, 2002, it was announced that Caldera International was changing its name back to SCO, in the form of the new name The SCO Group. (The final legal aspects of the name change did not become complete until May 2003.) This reflected recognition of the reality that almost all of the company's revenue was coming from Unix, not Linux, products. The product name Caldera OpenLinux became "SCO Linux powered by UnitedLinux" and all other Caldera-branded names were changed as well. The Volution Messaging Server product was retained and renamed SCOoffice Server, but the other Volution products were split off under the names Volution Technologies, Center 7, and finally Vintela.
From the start of his time as CEO, McBride had considered the possibility of claiming ownership of some of the code within Linux. Love had told him, "Don't do it. You don't want to take on the entire Linux community." But by October 2002, McBride had created an internal organization "to formalize the licensing of our intellectual property". Within a few months after that, SCO had begun issuing proclamations and lawsuits based upon its belief that its Unix intellectual property had been incorporated into Linux in an unlawful and uncompensated manner, and had stopped selling its own Linux product. The SCO–Linux disputes were fully underway, and SCO would soon become, as Businessweek headlined, "The Most Hated Company In Tech". Interviewed later in 2003, Ransom Love – by then no longer in the Linux business either – said that SCO might have a legitimate argument regarding some specific contractual issues, but that lawsuits were rarely helpful and that "Fundamentally, I would not have pursued SCO's path." Legacy Caldera played an important role in Linux history by establishing what would be necessary to create a mainstream, business-oriented system, with stability and support, out of the Linux kernel. Along with Red Hat and SuSE, it was the most important of the commercial Linux distributions. And as Glyn Moody wrote in Rebel Code: Linux and the Open Source Revolution, Caldera Systems' announcement in 2000 that it was buying SCO Unix – and with it code that dated back through Unix System Laboratories and AT&T before that – was the final marker for the ascendancy of Linux over the Unix old guard: "The hackers had triumphed over the establishment." But from a business perspective, the Caldera Systems acquisition of SCO Unix has been treated less kindly in retrospect. In 2016, ZDNet ranked it ninth on its list of the worst technology mergers and acquisitions of all time. In any case, the one true success story to come out of business-oriented Linux distributions was Red Hat, which at the time maintained it was competing against Microsoft, not Caldera Systems or the other distributions, and which set several marks for revenue for an open-source oriented business before being acquired by IBM in 2018 for $34 billion. Perhaps the most successful technology venture to come out of Caldera International was the Volution Authentication Server, which under the Vintela name achieved considerable success. Vintela itself was bought by Quest Software for $56.5 million in 2005, and the Vintela software became a core part of that company's One Identity product. As Dave Wilson, CEO of Vintela, later said, "Caldera Systems … played a major role in establishing Linux as a serious technology in our industry, and the people who worked for Caldera Systems are very proud of their achievements. Many of those people continue to drive innovation [at a variety of other companies]." Products Caldera OpenLinux, a Linux distribution with added non-free components UnixWare, a UNIX operating system. UnixWare 2.x and below were direct descendants of Unix System V Release 4.2 and were originally developed by AT&T, Univel, Novell, and later The Santa Cruz Operation. UnixWare 7 was sold as a UNIX OS combining UnixWare 2 and OpenServer 5 and was based on System V Release 5. UnixWare 7.1.2 was branded OpenUNIX 8, but later releases returned to the UnixWare 7.1.x name and version numbering. SCO OpenServer, another UNIX operating system, which was originally developed by The Santa Cruz Operation.
SCO OpenServer 5 was a descendant of SCO UNIX, which is in turn a descendant of XENIX. OpenServer 6 is, in fact, an OpenServer compatibility environment running on a modern SVR5-based UNIX kernel. Smallfoot, technology consisting of an operating system and a toolkit to create point-of-sale applications Volution Manager, a browser-based systems administration solution Volution Online, a subscription-based service for testing and then updating Linux systems over a network Volution Messaging Server, a bundled mail and messaging solution for Linux and Unix servers Volution Authentication Server, technology to allow the management of Linux and Unix authentication via Microsoft servers References External links Caldera Systems, Inc. (archived web site calderasystems.com from 1999-01-17 to 2001-04-05 and caldera.com from 2000-02-29 to 2000-12-17), Caldera Holdings (archived web site caldera.com from 2001-01-18 to 2001-03-02), Caldera International, Inc. (archived web site caldera.com from 2001-03-30 to 2002-08-25) and The SCO Group (archived web site caldera.com from 2002-09-14 to 2004-09-01 and sco.com from 2001-05-08) LST Software GmbH (archived web site lst.de from 1997-01-11 to 1997-12-11), Caldera Deutschland GmbH (archived web site lst.de from 1998-12-01 to 2000-01-02 and caldera.de from 2000-04-13 to 2001) and LST - Verein zur Förderung freier Software (archived web site lst.de from 2001-03-31) Caldera (company) Defunct software companies of the United States Defunct companies based in Utah Software companies established in 1998 Software companies disestablished in 2002 1998 establishments in Utah 2002 disestablishments in Utah 2000 initial public offerings 2001 mergers and acquisitions Linux companies Free software companies Unix history American companies established in 1998 American companies disestablished in 2002
MiNT MiNT is Now TOS (MiNT) is a free software alternative operating system kernel for the Atari ST system and its successors. It is a multi-tasking alternative to TOS and MagiC. Together with the free system components fVDI device drivers, XaAES graphical user interface widgets, and TeraDesk file manager, MiNT provides a free TOS-compatible replacement OS that can multitask. History Work on MiNT began in 1989, as the developer Eric Smith was trying to port the GNU library and related utilities to the Atari ST's TOS. It quickly turned out that it was much easier to add a Unix-like layer to TOS than to patch all of the GNU software, and MiNT began as a TOS extension to help in porting. MiNT was originally released by Eric Smith as "MiNT is Not TOS" (a recursive acronym in the style of "GNU's Not Unix") in May 1990. The new kernel gained traction, with people contributing a port of the MINIX filesystem and a port to the Atari TT. At the same time, Atari was looking to enhance TOS with multitasking abilities; finding that MiNT could fulfill that role, Atari hired Eric Smith. MiNT was adopted as an official alternative kernel with the release of the Atari Falcon, slightly altering the MiNT acronym into "MiNT is Now TOS". Atari bundled MiNT with a multitasking version of the Graphics Environment Manager (GEM) under the name MultiTOS as a floppy-disk-based installer. After Atari left the computer market, MiNT development continued under the name FreeMiNT, and is now maintained by a team of volunteers. FreeMiNT development follows a classic open-source approach, with the source code hosted in a publicly browsable FreeMiNT Git repository on GitHub and development discussed on a public mailing list, which is maintained on SourceForge after an earlier (2014) move from AtariForge, where it was maintained for almost 20 years. Hardware requirements A minimal install of MiNT will run on an Atari ST with its stock 8 MHz 68000 CPU, with 4 MB RAM and a hard drive. It is highly recommended that an Atari computer with a 16 MHz 68030 CPU and 8 MB of RAM be used. MiNT can also run inside the emulators Hatari and STEem, and with networking on the 68040 virtual machine Aranym. MiNT software ecosystem FreeMiNT provides only a kernel, so several distributions support MiNT, such as VanillaMint, EasyMint, STMint and BeeKey/BeePi. Although FreeMiNT can use the graphical user interface of the TOS (the Graphics Environment Manager GEM and the Application Environment Services or AES), it is better served with an enhanced AES which can use its multi-tasking abilities. The default one is currently XaAES, which is developed as a FreeMiNT kernel module. The older N.AES also works; however, a more modern alternative is MyAES. See also XaAES EmuTOS SpareMiNT Hatari (emulator) References External links FreeMiNT Project website MyAES Unofficial XaAES website MiNT is Now TOS—an interview with Mr Eric R. Smith, the creator of MiNT FreeMiNT mailing list FreeMiNT mailing list archives FreeMiNT wiki XaAES source FreeMiNT support forum Atari ST software Disk operating systems Free software operating systems Atari operating systems
Coreboot The software project coreboot, formerly known as LinuxBIOS, is aimed at replacing proprietary firmware (BIOS or UEFI) found in most computers with a lightweight firmware designed to perform only the minimum number of tasks necessary to load and run a modern 32-bit or 64-bit operating system. Since coreboot initializes the bare hardware, it must be ported to every chipset and motherboard that it supports. As a result, coreboot is available only for a limited number of hardware platforms and motherboard models. One of the coreboot variants is Libreboot, a software distribution fully free of proprietary blobs, aimed at end users. History The coreboot project began in the winter of 1999 in the Advanced Computing Laboratory at Los Alamos National Laboratory (LANL), with the goal of creating a BIOS that would start fast and handle errors intelligently. It is licensed under the terms of the GNU General Public License (GPL). Main contributors include LANL, SiS, AMD, Coresystems and Linux Networx, Inc, as well as motherboard vendors MSI, Gigabyte and Tyan, which offer coreboot alongside their standard BIOS or provide specifications of the hardware interfaces for some of their motherboards. Google partly sponsors the coreboot project. CME Group, a cluster of futures exchanges, began supporting the coreboot project in 2009. Coreboot was accepted into the Google Summer of Code every year from 2007 to 2014. Other than the first three models, all Chromebooks run coreboot. Code from Das U-Boot has been assimilated to enable support for processors based on the ARM instruction set. In June 2019, Coreboot began to use the NSA software Ghidra for its reverse engineering efforts on firmware-specific problems following the open source release of the software suite. Supported platforms CPU architectures supported by coreboot include IA-32, x86-64, ARM, ARM64, MIPS and RISC-V. Supported system-on-a-chip (SOC) platforms include AMD Geode, starting with the Geode GX processor developed for the OLPC. Artec Group added Geode LX support for its ThinCan model DBE61; that code was adopted by AMD and further improved for the OLPC after it was upgraded to the Geode LX platform, and is further developed by the coreboot community to support other Geode variants. Coreboot can be flashed onto a Geode platform using Flashrom. From that initial development on AMD Geode based platforms, coreboot support has been extended onto many AMD processors and chipsets. The processor list includes Family 0Fh and 10h (K8 core), and recently Family 14h (Bobcat core, Fusion APU). Coreboot support also extends to AMD chipsets: RS690, RS7xx, SB600, and SB8xx. AMD Generic Encapsulated Software Architecture (AGESA), a bootstrap protocol by which system devices on AMD64 mainboards are initialized, was open sourced in early 2011, aiming to provide required functionality for coreboot system initialization on AMD64 hardware. However, such releases never became the basis for future development by AMD, and were subsequently halted. Devices that can be preloaded with coreboot or one of its derivatives include some x86-based Chromebooks, the Libreboot X200 and T400 (rebranded ThinkPad X200 and T400, respectively, available from Minifree, previously known as Gluglug), OLPC XO from the One Laptop per Child initiative, ThinCan models DBE61, DBE62 and DBE63, and fanless server/router hardware manufactured by PC Engines. All Librem laptops come with coreboot.
Some System76 PCs use coreboot firmware with a TianoCore payload, including open-source embedded-controller firmware. Design Coreboot typically loads a Linux kernel, but it can load any other stand-alone ELF executable, such as iPXE, gPXE or Etherboot that can boot a Linux kernel over a network, or SeaBIOS that can load a Linux kernel, Microsoft Windows 2000 and later, and BSDs (previously, Windows 2000/XP and OpenBSD support was provided by ADLO). Coreboot can also load a kernel from any supported device, such as Myrinet, Quadrics, or SCI cluster interconnects. Booting other kernels directly is also possible, such as a Plan 9 kernel. Instead of loading a kernel directly, coreboot can pass control to a dedicated boot loader, such as a coreboot-capable version of GNU GRUB 2. Coreboot is written primarily in C, with a small amount of assembly code. Choosing C as the primary programming language enables easier code audits when compared to contemporary PC BIOS that was generally written in assembly, which results in improved security. There is build and runtime support to write parts of coreboot in Ada to further raise the security bar, but it is currently only sporadically used. The source code is released under the GNU GPL version 2 license. Coreboot performs the absolute minimal amount of hardware initialization and then passes control to the operating system. As a result, there is no coreboot code running once the operating system has taken control. A feature of coreboot is that the x86 version runs in 32-bit mode after executing only ten instructions (almost all other x86 BIOSes run exclusively in 16-bit mode). This is similar to the modern UEFI firmware, which is used on newer PC hardware. By itself, coreboot does not provide BIOS call services. The SeaBIOS payload can be used to provide BIOS calls and thus allow coreboot to load operating systems that require those services, such as Windows 2000/XP/Vista/7 and BSDs. However, most modern operating systems access hardware in another manner and use BIOS calls only during early initialization and as a fallback mechanism. Coreboot stages Bootblock stage: Prepare to obtain Flash access and look up the ROM stage to use ROM stage: Memory and early chipset init (a bit like PEI in UEFI) RAM stage: CPU, chipset and mainboard init, PCI resource assignment, ACPI and SMBIOS table creation, SMM handler (a bit like DXE stage in UEFI) Payload. Initializing DRAM The most difficult hardware that coreboot initializes is the DRAM controllers and DRAM. In some cases, technical documentation on this subject is NDA restricted or unavailable. RAM initialization is particularly difficult because before the RAM is initialized it cannot be used. Therefore, to initialize DRAM controllers and DRAM, the initialization code may have only the CPU's general purpose registers or Cache-as-RAM as temporary storage. romcc, a C compiler that uses registers instead of RAM, eases the task. Using romcc, it is relatively easy to make SMBus accesses to the SPD ROMs of the DRAM DIMMs, which allows the RAM to be used. With newer x86 processors, the processor cache can be used as RAM until DRAM is initialized. The processor cache has to be initialized into Cache-as-RAM mode as well, but this needs fewer instructions than initializing DRAM. Also, the Cache-as-RAM mode initialization is specific to CPU architectures, thus more generic than DRAM initialization, which is specific to each chipset and mainboard.
For most modern x86 platforms, closed source binary-only components provided by the vendor are used for DRAM setup. For Intel systems, FSP-M is required, while AMD has no current support. Binary AGESA is currently used for proprietary UEFI firmware on AMD systems, and this model is expected to carry over to any future AMD-related coreboot support. Developing and debugging coreboot Since coreboot must initialize the bare hardware, it must be ported to every chipset and motherboard that it supports. Before initializing RAM, coreboot initializes the serial port (addressing cache and registers only), so it can send out debug text to a connected terminal. It can also send byte codes to port 0x80 that are displayed on a two-hex-digit display of a connected POST card. Another porting aid was the commercial "RD1 BIOS Savior" product from www.ioss.com.tw (not to be confused with the US Interagency OPSEC Support Staff at www.iad.gov/ioss/), which was a combination of two boot memory devices that plugged into the boot memory socket and had a manual switch to select between the two devices. The computer could boot from one device, and then the switch could be toggled to allow the computer to reprogram or "flash" the second device. A more expensive alternative is an external EEPROM/NOR flash programmer. There are also CPU emulators that either replace the CPU or connect via a JTAG port, with the Sage SmartProbe being an example. Code can be built on, or downloaded to, BIOS emulators rather than flashing the BIOS device. Payloads Coreboot can load a payload, which may be written using the helper library libpayload. Existing payloads include the following: SeaBIOS, a tiny implementation of x86 BIOS, written mostly in 16-bit C using the GNU C compiler TianoCore, a free and open-source implementation of UEFI OpenBIOS, a free and open-source implementation of Open Firmware GNU GRUB, a bootloader FILO, a GRUB-like bootloader with USB boot support Etherboot, which can boot an operating system over the network gPXE/iPXE, the successor to Etherboot, works when run under SeaBIOS or TianoCore Depthcharge is used by Google for Chrome OS A branch of Das U-Boot was used by Google for Chromium OS in the past European Coreboot Conference One physical meeting is the European Coreboot Conference, which was organized in October 2017 and lasted for three days. Variants Coreboot has a number of variants from its original code base, each with slightly different objectives: librecore - A variant with more focus on freedom, non-x86 instruction set computers, and firmware development frameworks. Libreboot - A variant with a primary focus to remove all binary blobs. Libreboot has been established as a distribution of coreboot without proprietary binary blobs. Libreboot is not a straight fork of coreboot; instead, it is a parallel effort that works closely with and re-bases every so often on the latest coreboot as the upstream supplier, with patches merged upstream whenever possible. In addition to removing proprietary software, libreboot also attempts to make coreboot easy to use by automating the build and installation processes. The Libreboot project made possible the required modifications for completely libre variants of some ThinkPad, MacBook and ARM Chromebook laptops. See also Beowulf cluster LinuxBoot LOBOS Open-source hardware Rapid Boot References Further reading Inside the Linux boot process, by M.
Jones, IBM Open BIOSes for Linux, by Peter Seebach (archive only) LinuxBIOS ready to go mainstream, by Bruce Byfield First desktop motherboard supported by LinuxBIOS: GIGABYTE M57SLI-S4, by Brandon Howard Video recording of Ron Minnich's LinuxBIOS talk from FOSDEM 2007 Coreboot Your Service, Linux Journal, October 2009 media.ccc.de - Search for "Peter Stuge" External links Free BIOS implementations High-priority free software projects Firmware Custom firmware Software related to embedded Linux
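To make the payload model described in the Payloads section concrete, the following is a minimal sketch of a "hello world" payload written against coreboot's helper library libpayload. It is illustrative only: the umbrella header name and the halt() helper reflect libpayload as the author recalls it, and should be verified against the headers shipped with the coreboot revision in use.

#include <libpayload.h>  /* assumed umbrella header provided by libpayload */

int main(void)
{
    /* libpayload's startup code brings up a console (serial and/or video,
       depending on the payload's configuration) before main() runs, so
       printf() is already usable here. */
    printf("Hello from a coreboot payload!\n");

    /* A real payload (SeaBIOS, GRUB, a kernel loader) would now take over
       the machine; this sketch simply stops. */
    halt();
    return 0;
}

A payload like this is typically compiled with libpayload's compiler wrapper and then inserted into the coreboot ROM image with the cbfstool utility, in place of or alongside another payload.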
Mac OS X Jaguar Mac OS X Jaguar (version 10.2) is the third major release of macOS, Apple's desktop and server operating system. It superseded Mac OS X 10.1 and preceded Mac OS X Panther. The operating system was released on August 23, 2002, both for single-computer installations and in a "family pack" that allowed five installations on separate computers in one household. Jaguar was the first Mac OS X release to publicly use its code name in marketing and advertisements. System requirements Mac OS X Jaguar required a PowerPC G3 or G4 CPU and 128 MB of RAM. Special builds were released for the first PowerPC G5 systems released by Apple. New and changed features Jaguar introduced many new features to Mac OS X, which are still supported to this day, including MPEG-4 support in QuickTime, Address Book, and Inkwell for handwriting recognition. It also included the first release of Apple's Zeroconf implementation, Rendezvous (later renamed Bonjour), which allows devices on the same network to automatically discover each other and offer available services, such as file sharing, shared scanners, and printers, to the user. Mac OS X Jaguar Server 10.2.2 added journaling to HFS Plus, the native Macintosh file system, adding increased reliability and data recovery features. This was later added to the standard Mac OS X in version 10.3 Panther. Jaguar saw the debut of Quartz Extreme, a technology used to composite graphics directly on the video card, without the use of software to composite windows. The technology allotted the task of drawing the 3D surface of windows to the video card, rather than to the CPU, to increase interface responsiveness and performance. Universal Access was added to make the Macintosh usable by disabled computer users. The user interface of Jaguar was also amended to add search features to the Finder using the updated Sherlock 3. Internally, Jaguar also added the Common Unix Printing System (also known as CUPS), a modular printing system for Unix-like operating systems, and improved support for Microsoft Windows networks using the open-source Samba as a server for the SMB remote file access protocol and a FreeBSD-derived virtual file system module as a client for SMB. The famous Happy Mac that had greeted Mac users for almost 18 years during the Macintosh startup sequence was replaced with a large grey Apple logo with the introduction of Mac OS X Jaguar. Marketing Unlike Mac OS X 10.1, Jaguar was a paid upgrade, costing $129. In October 2002, Apple offered free copies of Jaguar to all U.S. K-12 teachers as part of the "X For Teachers" program. Teachers who wanted to get a copy simply had to fill out a form, and a packet containing Mac OS X installation discs and manuals was shipped to the school where they worked. Jaguar marked the first Mac OS X release which publicly used its code name as both a marketing ploy and as an official reference to the operating system. To that effect, Apple replaced the packaging for Mac OS X with a new jaguar-themed box. Starting with Jaguar, Mac OS X releases were given a feline-related marketing name upon announcement until the introduction of OS X Mavericks in June 2013, at which point releases began to be named after locations in California, where Apple is headquartered. Mac OS X (rebranded as OS X in 2012 and later macOS in 2016) releases are now also referred to by their marketing name, in addition to version numbers.
Release history Mac OS X 10.2.7 (codenames Blackrider, Smeagol) was only available for the new Power Mac G5s and aluminum PowerBook G4s released before Mac OS X Panther. Officially, it was never released to the general public. Mac OS X 10.2.8 is the last version of Mac OS X officially supported on the "Beige G3" desktop, minitower, and all-in-one systems as well as the PowerBook G3 Series (1998), also known as Wallstreet/PDQ, though later releases can be run on such Macs with the help of unofficial, unlicensed, and unsupported third-party tools such as XPostFacto. References External links Mac OS X v10.2 review at Ars Technica from apple.com PowerPC operating systems 2002 software Computer-related introductions in 2002
Fuchsia (operating system) Fuchsia is an open-source capability-based operating system developed by Google. In contrast to prior Google-developed operating systems such as Chrome OS and Android, which are based on the Linux kernel, Fuchsia is based on a new kernel named Zircon. It first became known to the public when the project appeared on a self-hosted git repository in August 2016 without any official announcement. After years of development, Fuchsia was officially released to the public on the first-generation Google Nest Hub, replacing its original Cast OS. Etymology The name "Fuchsia" is a reference to the color of the same name, which itself is a combination of the color pink (also the codename of Apple Pink) and purple (also the codename of the first-generation iPhone). History In August 2016, media outlets reported on a codebase post published on GitHub, revealing that Google was developing a new operating system named Fuchsia. No official announcement was made, but inspection of the code suggested its capability to run on various devices, including "dash infotainment" systems for cars, embedded devices like traffic lights, digital watches, smartphones, tablets and PCs. The code differs from Android and Chrome OS due to its being based on the Zircon kernel (formerly named Magenta) rather than on the Linux kernel. In May 2017, Ars Technica wrote about Fuchsia's new user interface, an upgrade from its command-line interface at its first reveal in August, along with a developer writing that Fuchsia "isn't a toy thing, it's not a 20% Project, it's not a dumping ground of a dead thing that we don't care about anymore". Multiple media outlets wrote about the project's seemingly close ties to Android, with some speculating that Fuchsia might be an effort to "re-do" or replace Android in a way that fixes problems on that platform. In January 2018, Google published a guide on how to run Fuchsia on Pixelbooks. This was followed successfully by Ars Technica. A Fuchsia "device" was added to the Android ecosystem in January 2019 via the Android Open Source Project (AOSP). Google talked about Fuchsia at Google I/O 2019. Hiroshi Lockheimer, Senior Vice President of Chrome and Android, described Fuchsia as one of Google’s experiments around new concepts for operating systems. On July 1, 2019, Google announced the official website of the development project providing source code and documentation for the operating system. Roughly a year and a half later, on December 8, 2020, Google announced that they were "expanding Fuchsia's open-source model" including making mailing lists public, introducing a governance model, publishing a roadmap and would be using a public issue tracker. In May 2021, Google employees confirmed that it had deployed Fuchsia in the consumer market for the first time, within a software update to the first-generation Google Home Hub that replaces its existing Chromecast-based software. The update contains no user-facing changes to the device's software or user interface. After the initial wave of updates to preview devices, the update was rolled out to all Nest Hub devices in August 2021. Overview The GitHub project suggested Fuchsia can run on many platforms, from embedded systems to smartphones, tablets, and personal computers. 
In May 2017, Fuchsia was updated with a graphical user interface, along with a developer writing that the project was not a "dumping ground of a dead thing", prompting media speculation about Google's intentions with the operating system, including the possibility of it replacing Android. On July 1, 2019, Google announced the homepage of the project, fuchsia.dev, which provides source code and documentation for the newly announced operating system. Fuchsia's user interface and apps are written with Flutter, a software development kit allowing cross-platform development abilities for Fuchsia, Android and iOS. Flutter produces apps based on Dart, offering apps with high-performance graphics that run at 120 frames per second. Fuchsia also offers a Vulkan-based graphics rendering engine called Escher, with specific support for "Volumetric soft shadows", an element that Ars Technica wrote "seems custom-built to run Google's shadow-heavy 'Material Design' interface guidelines". Due to the Flutter software development kit offering cross-platform opportunities, users are able to install parts of Fuchsia on Android devices. In 2017, Ars Technica noted that, though users could test Fuchsia, nothing "works", because "it's all a bunch of placeholder interfaces that don't do anything". They found multiple similarities between Fuchsia's interface and Android, including a Recent Apps screen, a Settings menu, and a split-screen view for viewing multiple apps at once. In a 2018 review, Ars Technica experts were impressed with the progress, noting that things were then working, and were especially pleased by the hardware support. One of the positive surprises was support for multiple mouse pointers. A special version of Android Runtime for Fuchsia is planned to run from a FAR file, the equivalent of the Android APK. Kernel Fuchsia is based on a new message-passing kernel named Zircon, after the mineral zircon. Zircon's codebase was derived from that of Little Kernel (LK), a kernel for embedded devices, aimed at low resource use, to be used on a wide variety of devices. LK was developed by Travis Geiselbrecht, who had also co-authored the NewOS kernel used by Haiku. Zircon is written mostly in C++, with some parts in assembly language. It is composed of a kernel with a small set of user services, drivers, and libraries which are all necessary for the system to boot, communicate with the hardware, and load the user processes. Its present features include handling threads, virtual memory, interprocess communication, and waiting for changes in the state of objects. It is heavily inspired by Unix kernels but differs greatly. For example, it does not support Unix-like signals but incorporates event-driven programming and the observer pattern. Most system calls do not block the main thread. Resources are represented as objects rather than files, unlike traditional Unix systems. References External links 2016 software C++ software Capability systems Embedded operating systems Free software operating systems Free software programmed in C Free software programmed in Go Free software programmed in Rust Fuchsia Software using the Apache license Software using the BSD license Software using the MIT license Upcoming software x86-64 operating systems
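The handle-and-signal model described in the Kernel section can be made concrete with a short C sketch against the Zircon system interface. The calls below (zx_channel_create, zx_channel_write, zx_object_wait_one) follow <zircon/syscalls.h> as publicly documented, but their exact signatures can change between Fuchsia revisions, so treat this as a hedged sketch rather than authoritative API usage.

#include <stdio.h>
#include <zircon/syscalls.h>
#include <zircon/types.h>

int main(void)
{
    zx_handle_t a, b;

    /* A channel is a kernel object for message passing; creating one
       yields two handles, one for each endpoint. */
    if (zx_channel_create(0, &a, &b) != ZX_OK)
        return 1;

    /* Writing does not block: the message is queued on the peer. */
    const char msg[] = "hello";
    zx_channel_write(a, 0, msg, sizeof(msg), NULL, 0);

    /* Rather than a blocking read loop, programs wait for signals that
       objects raise -- here, "endpoint b has a message to read". */
    zx_signals_t observed = 0;
    zx_object_wait_one(b, ZX_CHANNEL_READABLE, ZX_TIME_INFINITE, &observed);
    printf("endpoint b observed signals: 0x%x\n", (unsigned)observed);

    zx_handle_close(a);
    zx_handle_close(b);
    return 0;
}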
ISPF In computing, Interactive System Productivity Facility (ISPF) is a software product for many historic IBM mainframe operating systems and currently the z/OS and z/VM operating systems that run on IBM mainframes. It includes a screen editor, the user interface of which was emulated by some microcomputer editors sold commercially starting in the late 1980s, including SPF/PC. ISPF primarily provides an IBM 3270 terminal interface with a set of panels. Each panel may include menus and dialogs to run tools on the underlying environment, e.g., Time Sharing Option (TSO). Generally, these panels just provide a convenient interface to do tasks—most of them execute modules of IBM mainframe utility programs to do the actual work. ISPF is frequently used to manipulate z/OS data sets via its Program Development Facility (ISPF/PDF). ISPF is user-extensible and it is often used as an application programming interface. Many vendors have created products for z/OS that use the ISPF interface. An early version was called Structured Programming Facility (SPF) and was introduced in SVS and MVS systems in 1974. IBM chose the name because SPF was introduced about the same time as structured programming concepts. In 1979 IBM introduced a new version and a compatible product for CMS under Virtual Machine Facility/370 Release 5. In 1980 IBM changed its name to System Productivity Facility and offered a version for CMS under VM/SP. In 1982 IBM changed the name to Interactive System Productivity Facility, split off some facilities into Interactive System Productivity Facility/Program Development Facility (ISPF/PDF) and offered a version for VSE/AF. In 1984 IBM released ISPF Version 2 and ISPF/PDF Version 2; the VM versions allowed the user to select either the PDF editor or XEDIT. IBM eventually merged PDF back into the base product. ISPF can also be run from a z/OS batch job. ISPF/PDF interactive tools When a foreground (interactive) TSO user invokes ISPF, it provides a menuing system, normally with an initial display of a Primary Option Menu; this provides access to many useful tools for application development and for administering the z/OS operating system. Such tools include Browse - for viewing data sets, partitioned data set (PDS) members, and Unix System Services files. Edit - for editing data sets, PDS members, and Unix System Services files. Utilities - for performing data manipulation operations, such as: Data Set List - which allows the user to list and manipulate (copy, move, rename, print, catalog, delete, etc.) files (termed "data sets" in the z/OS environment). Member List - for similar manipulations of members of PDSs. Search facilities for finding modules or text within members or data sets. Compare facilities for comparing members or data sets. Library Management, including promoting and demoting program modules. ISPF as a user interface development environment Underlying ISPF/PDF is an extensive set of tools that allow application developers to create panel-driven applications, and a set of guidelines to promote consistent use of ISPF functions. A "panel" is a character-based "window" which can encompass all or part of a 3270 session's screen real estate. See Text-based user interfaces. Most mainframe software vendors used ISPF functions to create their applications, so their tools are similar in appearance and operation to ISPF. Similarly, many installations write their own informal tools that use ISPF services.
ISPF services are generally available to any programmer in the shop, and can be used to write panels for either personal or shop-wide use, written in either compiled languages such as C, COBOL, and PL/I, or interpreted languages such as CLIST and REXX. ISPF applications can be used to perform so-called "file tailoring" functions, customisation of specially crafted JCL members called "skeletons", which can then be submitted as batch jobs to the mainframe. Editor The editor screen is formatted with 2 lines (info & command line) at the top (or bottom, per user choice), a six character line number column in the left margin, and the remainder of the screen width being filled with the records of the dataset being edited. Primary commands (which apply to the whole dataset) such as Find, Print, Sort, etc. are typed in the command line. Line commands (which apply only to specific line(s)) such as copy, move, repeat, insert, exclude, delete, text flow, text split are entered by over-typing the line number fields with a one or two character code representing the command to be applied at that line, followed by an optional number which further modifies the supplied command. The editor has several key features: context-sensitive color highlighting for several languages and file types; code folding via the X or XX...XX (hide) line commands and indentation-selective reveals; editor macro commands in REXX or compiled languages; comparison with another dataset; models of ISPF service calls; context-sensitive help; and recovery from lost sessions. The editor can also be invoked in a 'view' mode. It behaves like the editor, but does not allow saving the data. Edited files can also be saved under a different name, creating or replacing another file. ISPF provides the 'editor interface' which lets an application program display arbitrary data in the familiar editor panel; many vendor packages thus use this interface. Customization ISPF is designed to be customized for each user (a fairly new concept in 1974, when it was introduced). Some of the customization is global and some is specific to an ISPF application. It supports a set of 24 function keys which, when pressed, execute commands. These are customizable: each user can replace the default commands assigned to any key with their own preferred command (or series of commands). User settings are stored centrally, so that the user can log on from any terminal and that session will remember their previously-chosen commands for each key. Most personal computers copied this and have a set of 12 function keys. Even some defaults have endured: the F1 key triggers a "help" function on a large number of mainframe & PC programs. ISPF remembers each user's choices for such things as screen colors & layout, the location of the command line and scrolling preferences. It also remembers the last-used data set names on each panel, so the next time the panel is used the names are already filled in. This is very convenient for mainframe programmers because they frequently work with the same files repeatedly. PC use Many of the early users of PCs were mainframe programmers or users, who were accustomed to and liked the ISPF panel system. This led several companies to create partial clones of ISPF that run on DOS, OS/2, Windows or Unix PC systems. In 1984 IBM introduced the EZ-VU dialog manager for DOS PCs, and later OS/2.
In 1991 Tritus, Inc. introduced Tritus SPF (TSPF), a program to allow use of mainframe ISPF applications and edit macros written in REXX on DOS, OS/2 and Windows; the last release was 1.2.8 in 1994. The SPF/SE 365 and Uni-SPF editors are still sold, and the free SPFlite is also available. In 1994 IBM introduced a built-in downloadable client program called the ISPF Workstation Agent (WSA) that can install and run on OS/2, Windows and selected UNIX workstations; the z/OS version of ISPF only includes WSA for Windows and selected UNIX workstations. WSA communicates directly with ISPF on z/OS and provides a point-and-click graphical user interface automatically. The ISPF Workstation Agent can be used to edit PC-based files from the ISPF editor to take advantage of the editor's strengths. See also SMIT, the built-in menu/panels program for AIX References Notes External links IBM: "ISPF for z/OS" IBM: ISPF documentation IBM mainframe operating systems Command shells IBM software Text editors IBM mainframe software
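As an illustration of the line-command convention described in the Editor section above (over-typing the line-number field with a one- or two-character command code followed by an optional repeat count, such as d, r5, or cc), here is a small C sketch of how such an over-typed field can be parsed. This is a toy model for exposition only, not ISPF code, and the function names are invented for the example.

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Split an over-typed line-number field such as "r5" into a command
   ("r") and a repeat count (5; defaults to 1 when no digits follow). */
static void parse_line_command(const char *field, char cmd[3], int *count)
{
    size_t i = 0;
    while (i < 2 && isalpha((unsigned char)field[i])) {
        cmd[i] = (char)tolower((unsigned char)field[i]);
        i++;
    }
    cmd[i] = '\0';
    *count = isdigit((unsigned char)field[i]) ? atoi(&field[i]) : 1;
}

int main(void)
{
    const char *samples[] = { "d", "r5", "cc", "m2" };
    for (int i = 0; i < 4; i++) {
        char cmd[3];
        int count;
        parse_line_command(samples[i], cmd, &count);
        printf("%-3s -> command '%s', count %d\n", samples[i], cmd, count);
    }
    return 0;
}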
CP/M CP/M, originally standing for Control Program/Monitor and later Control Program for Microcomputers, is a mass-market operating system created in 1974 for Intel 8080/85-based microcomputers by Gary Kildall of Digital Research, Inc. Initially confined to single-tasking on 8-bit processors and no more than 64 kilobytes of memory, later versions of CP/M added multi-user variations and were migrated to 16-bit processors. The combination of CP/M and S-100 bus computers became an early standard in the microcomputer industry. This computer platform was widely used in business through the late 1970s and into the mid-1980s. CP/M increased the market size for both hardware and software by greatly reducing the amount of programming required to install an application on a new manufacturer's computer. An important driver of software innovation was the advent of (comparatively) low-cost microcomputers running CP/M, as independent programmers and hackers bought them and shared their creations in user groups. CP/M was eventually displaced by DOS following the 1981 introduction of the IBM PC. Hardware model A minimal 8-bit CP/M system would contain the following components: A computer terminal using the ASCII character set An Intel 8080 (and later the 8085) or Zilog Z80 microprocessor The NEC V20 and V30 processors support an 8080-emulation mode that can run 8-bit CP/M on a PC DOS/MS-DOS computer so equipped, though any PC can also run the 16-bit CP/M-86. At least 16 kilobytes of RAM, beginning at address 0 A means to bootstrap the first sector of the diskette At least one floppy disk drive The only hardware system that CP/M, as sold by Digital Research, would support was the Intel 8080 Development System. Manufacturers of CP/M-compatible systems customized portions of the operating system for their own combination of installed memory, disk drives, and console devices. CP/M would also run on systems based on the Zilog Z80 processor since the Z80 was compatible with 8080 code. While the Digital Research distributed core of CP/M (BDOS, CCP, core transient commands) did not use any of the Z80-specific instructions, many Z80-based systems used Z80 code in the system-specific BIOS, and many applications were dedicated to Z80-based CP/M machines. On most machines the bootstrap was a minimal bootloader in ROM combined with some means of minimal bank switching or a means of injecting code on the bus (since the 8080 needs to see boot code at address 0 for start-up, while CP/M needs RAM there); for others, this bootstrap had to be entered into memory using front-panel controls each time the system was started. CP/M used the 7-bit ASCII set. The other 128 characters made possible by the 8-bit byte were not standardized. For example, one Kaypro used them for Greek characters, and Osborne machines used the 8th bit set to indicate an underlined character. WordStar used the 8th bit as an end-of-word marker. International CP/M systems most commonly used the ISO 646 norm for localized character sets, replacing certain ASCII characters with localized characters rather than adding them beyond the 7-bit boundary. Components of the operating system In the 8-bit versions, while running, the CP/M operating system loaded into memory had three components: the Basic Input/Output System (BIOS), the Basic Disk Operating System (BDOS), and the Console Command Processor (CCP).
The BIOS and BDOS were memory-resident, while the CCP was memory-resident unless overwritten by an application, in which case it was automatically reloaded after the application finished running. A number of transient commands for standard utilities were also provided. The transient commands resided in files with the extension .COM on disk. The BIOS directly controlled hardware components other than the CPU and main memory. It contained functions such as character input and output and the reading and writing of disk sectors. The BDOS implemented the CP/M file system and some input/output abstractions (such as redirection) on top of the BIOS. The CCP took user commands and either executed them directly (internal commands such as DIR to show a directory or ERA to delete a file) or loaded and started an executable file of the given name (transient commands such as PIP.COM to copy files or STAT.COM to show various file and system information). Third-party applications for CP/M were also essentially transient commands. The BDOS, CCP and standard transient commands were the same in all installations of a particular revision of CP/M, but the BIOS portion was always adapted to the particular hardware. Adding memory to a computer, for example, meant that the CP/M system had to be reinstalled to allow transient programs to use the additional memory space. A utility program (MOVCPM) was provided with system distribution that allowed relocating the object code to different memory areas. The utility program adjusted the addresses in absolute jump and subroutine call instructions to new addresses required by the new location of the operating system in processor memory. This newly patched version could then be saved on a new disk, allowing application programs to access the additional memory made available by moving the system components. Once installed, the operating system (BIOS, BDOS and CCP) was stored in reserved areas at the beginning of any disk which would be used to boot the system. On start-up, the bootloader (usually contained in a ROM firmware chip) would load the operating system from the disk in drive A:. By modern standards CP/M was primitive, owing to the extreme constraints on program size. With version 1.0 there was no provision for detecting a changed disk. If a user changed disks without manually rereading the disk directory the system would write on the new disk using the old disk's directory information, ruining the data stored on the disk. From version 1.1 or 1.2 onwards, changing a disk then trying to write to it before its directory was read would cause a fatal error to be signalled. This avoided overwriting the disk but required a reboot and loss of the data that was to be stored on disk. The majority of the complexity in CP/M was isolated in the BDOS, and to a lesser extent, the CCP and transient commands. This meant that by porting the limited number of simple routines in the BIOS to a particular hardware platform, the entire OS would work. This significantly reduced the development time needed to support new machines, and was one of the main reasons for CP/M's widespread use. Today this sort of abstraction is common to most OSs (a hardware abstraction layer), but at the time of CP/M's birth, OSs were typically intended to run on only one machine platform, and multilayer designs were considered unnecessary. Console Command Processor The Console Command Processor, or CCP, accepted input from the keyboard and conveyed results to the terminal. 
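The division of labor among the three components can be sketched as a toy emulation in modern C (CP/M itself was written in PL/M and assembly, so this is purely illustrative). On a real system an application placed a BDOS function number in the 8080's C register and an argument in the DE register pair, then called the fixed BDOS entry point at address 0005h; the BDOS in turn called the machine-specific BIOS. Functions 2 (console output) and 9 (print a '$'-terminated string) are the classic examples.

#include <stdint.h>
#include <stdio.h>

/* BIOS layer: the only code that knows about the actual hardware.
   Here the "hardware" is simply the host's standard output. */
static void bios_conout(uint8_t c) { putchar(c); }

/* BDOS layer: hardware-independent services built on top of the BIOS.
   'mem' stands in for the 8080 address space, 'de' for the DE register pair. */
static void bdos_call(uint8_t func, const uint8_t *mem, uint16_t de)
{
    switch (func) {
    case 2:                          /* console output: character in E */
        bios_conout((uint8_t)(de & 0xFF));
        break;
    case 9:                          /* print string at DE until '$' */
        for (uint16_t p = de; mem[p] != '$'; p++)
            bios_conout(mem[p]);
        break;
    }
}

int main(void)
{
    uint8_t memory[64] = "Hello from a CP/M-style BDOS call$";
    bdos_call(9, memory, 0);   /* like MVI C,9 / LXI D,msg / CALL 0005h */
    bdos_call(2, memory, '\n');
    return 0;
}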
CP/M itself would work with either a printing terminal or a video terminal. All CP/M commands had to be typed in on the command line. The console would most often display the A> prompt, to indicate the current default disk drive. When used with a video terminal, this would usually be followed by a blinking cursor supplied by the terminal. The CCP would await input from the user. A CCP internal command, of the form drive letter followed by a colon, could be used to select the default drive. For example, typing B: and pressing enter at the command prompt would change the default drive to B, and the command prompt would then become B> to indicate this change. CP/M's command-line interface was patterned after the operating systems from Digital Equipment, such as RT-11 for the PDP-11 and OS/8 for the PDP-8. Commands took the form of a keyword followed by a list of parameters separated by spaces or special characters. Similar to a Unix shell builtin, if an internal command was recognized, it was carried out by the CCP itself. Otherwise it would attempt to find an executable file on the currently logged disk drive and (in later versions) user area, load it, and pass it any additional parameters from the command line. These were referred to as "transient" programs. On completion, CP/M would reload the part of the CCP that had been overwritten by application programs; this allowed transient programs a larger memory space. The commands themselves could sometimes be obscure. For instance, the command to duplicate files was named PIP (Peripheral Interchange Program), the name of the old DEC utility used for that purpose. The format of parameters given to a program was not standardized, so there was no single option character that differentiated options from file names. Different programs could and did use different characters. Commands The following built-in commands are supported by the CP/M Console Command Processor: DIR ERA REN SAVE TYPE USER Transient commands in CP/M include: ASM DDT DUMP ED LOAD PIP STAT SUBMIT SYSGEN CP/M Plus (CP/M Version 3) includes the following built-in commands: DIR – display list of files from a directory except those marked with the SYS attribute DIRSYS / DIRS – list files marked with the SYS attribute in the directory ERASE / ERA – delete a file RENAME / REN – rename a file TYPE / TYP – display contents of an ASCII character file USER / USE – change user number CP/M 3 allows the user to abbreviate the built-in commands. Transient commands in CP/M 3 include: COPYSYS DATE DEVICE DUMP ED GET HELP HEXCOM INITDIR LINK MAC PIP PUT RMAC SET SETDEF SHOW SID SUBMIT XREF Basic Disk Operating System The Basic Disk Operating System, or BDOS, provided access to such operations as opening a file, output to the console, or printing. Application programs would load processor registers with a function code for the operation, and addresses for parameters or memory buffers, and call a fixed address in memory. Since the address was the same independent of the amount of memory in the system, application programs would run the same way for any type or configuration of hardware. Basic Input Output System The Basic Input Output System, or BIOS, provided the lowest-level functions required by the operating system. These included reading or writing single characters to the system console and reading or writing a sector of data from the disk.
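The fixed-entry-point convention just described is easy to illustrate. The following C fragment is a minimal sketch, not period code: the function numbers 2 (console output) and 9 (print a '$'-terminated string) are the documented CP/M 2.2 values, while the emulator-style scaffolding around them is hypothetical. It shows how a BDOS dispatcher might route calls that an 8080 program makes via CALL 0005h, delegating character output to a BIOS-style console routine:

```c
#include <stdint.h>
#include <stdio.h>

static uint8_t mem[65536];            /* the 8080's 64 KB address space */

/* BIOS-level console output: on real hardware this was the
   machine-specific CONOUT routine supplied by the manufacturer. */
static void bios_conout(uint8_t ch)
{
    putchar(ch);
}

/* BDOS dispatch: an 8080 program reached this via CALL 0005h with the
   function number in register C and a parameter in register pair DE. */
static void bdos_call(uint8_t c, uint16_t de)
{
    switch (c) {
    case 2:                            /* console output: character in E */
        bios_conout(de & 0xFF);
        break;
    case 9:                            /* print '$'-terminated string at DE */
        for (uint16_t p = de; mem[p] != '$'; p++)
            bios_conout(mem[p]);
        break;
    default:
        /* the real BDOS offered dozens of functions: open, close,
           sequential and random file I/O, rename, search, and so on */
        break;
    }
}
```

Because every installation placed this entry point at the same address, a transient program could make identical calls on any CP/M machine, which is the portability described later in this article.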
The BDOS handled some of the buffering of data from the diskette, but before CP/M 3.0 it assumed a disk sector size fixed at 128 bytes, as used on single-density 8-inch floppy disks. Since most 5.25-inch disk formats used larger sectors, the blocking and deblocking and the management of a disk buffer area were handled by model-specific code in the BIOS. Customization was required because hardware choices were not constrained by compatibility with any one popular standard. For example, some manufacturers used a separate computer terminal, while others designed an integrated video display system. Serial ports for printers and modems could use different types of UART chips, and port addresses were not fixed. Some machines used memory-mapped I/O instead of the 8080 I/O address space. All of these variations in the hardware were concealed from other modules of the system by use of the BIOS, which used standard entry points for the services required to run CP/M such as character I/O or accessing a disk block. Since support for serial communication to a modem was very rudimentary in the BIOS or may have been absent altogether, it was common practice for CP/M programs that used modems to have a user-installed overlay containing all the code required to access a particular machine's serial port. File system File names were specified as a string of up to eight characters, followed by a period, followed by a file name extension of up to three characters ("8.3" filename format). The extension usually identified the type of the file. For example, .COM indicated an executable program file, and .TXT indicated a file containing ASCII text. Each disk drive was identified by a drive letter, for example drive A and drive B. To refer to a file on a specific drive, the drive letter was prefixed to the file name, separated by a colon, e.g. A:FILE.TXT. With no drive letter prefixed, access was to files on the current default drive. File size was specified as the number of 128-byte records (directly corresponding to disk sectors on 8-inch drives) occupied by a file on the disk. There was no generally supported way of specifying byte-exact file sizes. The current size of a file was maintained in the file's File Control Block (FCB) by the operating system. Since many application programs (such as text editors) prefer to deal with files as sequences of characters rather than as sequences of records, by convention text files were terminated with a control-Z character (ASCII SUB, hexadecimal 1A). Determining the end of a text file therefore involved examining the last record of the file to locate the terminating control-Z. This also meant that inserting a control-Z character into the middle of a file usually had the effect of truncating the text contents of the file. With the advent of larger removable and fixed disk drives, disk de-blocking formulas were employed which resulted in more disk blocks per logical file allocation block. While this allowed for larger file sizes, it also meant that the smallest file which could be allocated increased in size from 1 KB (on single-density drives) to 2 KB (on double-density drives) and so on, up to 32 KB for a file containing only a single byte. This made for inefficient use of disk space if the disk contained a large number of small files. File modification time stamps were not supported in releases up to CP/M 2.2, but were an optional feature in MP/M and CP/M 3.0.
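Because the directory recorded only a count of 128-byte records, a program needing a byte-exact length had to apply the control-Z convention itself. The following C helper is a minimal sketch of that calculation (the function name and interface are hypothetical, not taken from any CP/M source):

```c
#include <stddef.h>
#include <stdint.h>

#define RECORD_SIZE 128
#define CTRL_Z      0x1A   /* ASCII SUB: the CP/M text terminator */

/* Given a text file of n_records 128-byte records whose final record
   is in last_rec, return the byte-exact length of the text. */
size_t cpm_text_length(size_t n_records, const uint8_t last_rec[RECORD_SIZE])
{
    if (n_records == 0)
        return 0;

    size_t len = (n_records - 1) * RECORD_SIZE;

    /* Scan the final record for the first control-Z; anything after it
       (often more control-Z bytes used as padding) is not text. */
    for (size_t i = 0; i < RECORD_SIZE; i++)
        if (last_rec[i] == CTRL_Z)
            return len + i;

    return len + RECORD_SIZE;   /* the text exactly fills the last record */
}
```

The same convention explains the truncation effect noted above: a stray 0x1A byte anywhere in a file is indistinguishable from its end.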
CP/M 2.2 had no subdirectories in the file structure, but provided 16 numbered user areas to organize files on a disk. To change user areas, one simply typed "USER X" at the command prompt, X being the number of the desired user area; security was non-existent and not believed to be necessary. The user area concept was to make the single-user version of CP/M somewhat compatible with multi-user MP/M systems. A common patch for the CP/M and derivative operating systems was to make one user area accessible to the user independent of the currently set user area. A USER command allowed the user area to be changed to any area from 0 to 15. User 0 was the default. If one changed to another user, such as USER 1, the material saved on the disk for this user would only be available to USER 1; USER 2 would not be able to see it or access it. However, files stored in the USER 0 area were accessible to all other users when their location was specified with a prefatory path, since the files of USER 0 were directly visible only to someone logged in as USER 0. The user area feature arguably had little utility on small floppy disks, but it was useful for organizing files on machines with hard drives. The intent of the feature was to ease use of the same computer for different tasks. For example, a secretary could do data entry, then, after switching USER areas, another employee could use the machine to do billing without their files intermixing. Transient Program Area The read/write memory between address 0100 hexadecimal and the lowest address of the BDOS was the Transient Program Area (TPA) available for CP/M application programs. Although all Z80 and 8080 processors could address 64 kilobytes of memory, the amount available for application programs could vary, depending on the design of the particular computer. Some computers used large parts of the address space for such things as BIOS ROMs, or video display memory. As a result, some systems had more TPA memory available than others. Bank switching was a common technique that allowed systems to have a large TPA while switching out ROM or video memory space as needed. CP/M 3.0 allowed parts of the BDOS to be in bank-switched memory as well. Debugging application CP/M came with a Dynamic Debugging Tool, nicknamed DDT (after the insecticide, i.e. a bug-killer), which allowed memory and program modules to be examined and manipulated, and allowed a program to be executed one step at a time. Resident programs CP/M originally did not support the equivalent of terminate and stay resident (TSR) programs as under DOS. Programmers could write software that could intercept certain operating system calls and extend or alter their functionality. Using this capability, programmers developed and sold auxiliary desk accessory programs, such as SmartKey, a keyboard utility to assign any string of bytes to any key. CP/M 3, however, added support for dynamically loadable Resident System Extensions (RSX). A so-called null command file could be used to allow the CCP to load an RSX without a transient program. Similar solutions like RSMs (for Resident System Modules) were also retrofitted to CP/M 2.2 systems by third parties. Installation Although CP/M provided some hardware abstraction to standardize the interface to disk I/O or console I/O, typically application programs still required installation to make use of all the features of such equipment as printers and terminals. Often these were controlled by escape sequences which had to be altered for different devices.
For example, the escape sequence to select bold face on a printer would have differed among manufacturers, and sometimes among models within a manufacturer's range. This procedure was not defined by the operating system; a user would typically run an installation program that would either allow selection from a range of devices, or else allow feature-by-feature editing of the escape sequences required to access a function. This had to be repeated for each application program, since there was no central operating system service provided for these devices. The initializing codes for each model of printer had to be written into the application. To use a program such as WordStar with more than one printer (say, a fast dot-matrix printer or a slower but presentation-quality daisy-wheel printer), a separate version of WordStar had to be prepared, and one had to load the WordStar version that corresponded to the printer selected, exiting and reloading to change printers. History The beginning and CP/M's heyday Gary Kildall originally developed CP/M during 1974, as an operating system to run on an Intel Intellec-8 development system, equipped with a Shugart Associates 8-inch floppy disk drive interfaced via a custom floppy disk controller. It was written in Kildall's own PL/M (Programming Language for Microcomputers). Various aspects of CP/M were influenced by the TOPS-10 operating system of the DECsystem-10 mainframe computer, which Kildall had used as a development environment. Under Kildall's direction, the development of CP/M 2.0 was mostly carried out by John Pierce in 1978. Kathryn Strutynski, a friend of Kildall from Naval Postgraduate School (NPS) times, became the fourth employee of Digital Research Inc. in early 1979. She started by debugging CP/M 2.0, and later became influential as a key developer for CP/M 2.2 and CP/M Plus. Other early developers of the CP/M base included Robert "Bob" Silberstein and David "Dave" K. Brown. The name CP/M originally stood for "Control Program/Monitor", a name which implies a resident monitor, a primitive precursor to the operating system. However, during the conversion of CP/M to a commercial product, trademark registration documents filed in November 1977 gave the product's name as "Control Program for Microcomputers". The CP/M name follows a prevailing naming scheme of the time, as in Kildall's PL/M language, and Prime Computer's PL/P (Programming Language for Prime), both suggesting IBM's PL/I; and IBM's CP/CMS operating system, which Kildall had used when working at the NPS. This renaming of CP/M was part of a larger effort by Kildall and his wife and business partner, Dorothy McEwen, to convert Kildall's personal project of CP/M and the Intel-contracted PL/M compiler into a commercial enterprise. The Kildalls intended to establish the Digital Research brand and its product lines as synonymous with "microcomputer" in the consumer's mind, similar to what IBM and Microsoft together later successfully accomplished in making "personal computer" synonymous with their product offerings. Intergalactic Digital Research, Inc. was later renamed via a corporation change-of-name filing to Digital Research, Inc. Portability By September 1981, Digital Research had sold more than CP/M licenses; InfoWorld stated that the actual market was likely larger because of sublicenses.
Many different companies produced CP/M-based computers for many different markets; the magazine stated that "CP/M is well on its way to establishing itself as the small-computer operating system". The companies chose to support CP/M because of its large library of software. The Xerox 820 ran the operating system because "where there are literally thousands of programs written for it, it would be unwise not to take advantage of it", Xerox said. (Xerox included a Howard W. Sams CP/M manual as compensation for Digital Research's documentation, which InfoWorld in 1982 described as atrocious.) By 1984 Columbia University used the same source code to build Kermit binaries for more than a dozen different CP/M systems, plus a generic version. The operating system was described as a "software bus", allowing multiple programs to interact with different hardware in a standardized way. Programs written for CP/M were typically portable among different machines, usually requiring only the specification of the escape sequences for control of the screen and printer. This portability made CP/M popular, and much more software was written for CP/M than for operating systems that ran on only one brand of hardware. One restriction on portability was that certain programs used the extended instruction set of the Z80 processor and would not operate on an 8080 or 8085 processor. Another was graphics routines, especially in games and graphics programs, which were generally machine-specific as they used direct hardware access for speed, bypassing the OS and BIOS (this was also a common problem in early DOS machines). Bill Gates claimed that the Apple II family with a Z-80 SoftCard was the single most-popular CP/M hardware platform. Many different brands of machines ran the operating system, some notable examples being the Altair 8800, the IMSAI 8080, the Osborne 1 and Kaypro luggables, and MSX computers. The best-selling CP/M-capable system of all time was probably the Amstrad PCW. In the UK, CP/M was also available on Research Machines educational computers (with the CP/M source code published as an educational resource), and for the BBC Micro when equipped with a Z80 co-processor. Furthermore, it was available for the Amstrad CPC series, the Commodore 128, TRS-80, and later models of the ZX Spectrum. CP/M 3 was also used on the NIAT, a custom handheld computer designed for A.C. Nielsen's internal use with 1 MB of SSD memory. Applications WordStar, one of the first widely used word processors, and dBase, an early and popular database program for microcomputers, were originally written for CP/M. Two early outliners, KAMAS (Knowledge and Mind Amplification System) and its cut-down successor Out-Think (without programming facilities and retooled for 8080/V20 compatibility) were also written for CP/M, though later rewritten for MS-DOS. Turbo Pascal, the ancestor of Borland Delphi, and Multiplan, the ancestor of Microsoft Excel, also debuted on CP/M before MS-DOS versions became available. VisiCalc, the first-ever spreadsheet program, was made available for CP/M. Another company, Sorcim, created its SuperCalc spreadsheet for CP/M, which went on to become the market leader and de facto standard on CP/M; SuperCalc later competed in the spreadsheet market in the MS-DOS world. AutoCAD, a CAD application from Autodesk, debuted on CP/M.
A host of compilers and interpreters for popular programming languages of the time (such as BASIC, Borland's Turbo Pascal, FORTRAN and even PL/I) were available, among them several of the earliest Microsoft products. CP/M software often came with installers that adapted it to a wide variety of computers. The source code for BASIC programs was easily accessible, and most forms of copy protection were ineffective on the operating system. A Kaypro II owner, for example, would obtain software in Xerox 820 format, then copy it to and run it from Kaypro-format disks. The lack of standardized graphics support limited video games, but various character and text-based games were ported, such as Telengard, Gorillas, Hamurabi, Lunar Lander, along with early interactive fiction including the Zork series and Colossal Cave Adventure. Text adventure specialist Infocom was one of the few publishers to consistently release their games in CP/M format. Lifeboat Associates started collecting and distributing user-written "free" software. One of the first was XMODEM, which allowed reliable file transfers via modem and phone line. Another program native to CP/M was the outline processor KAMAS. Disk formats The single-density, single-sided format of the IBM System/34 and IBM 3740 is CP/M's standard 8-inch floppy disk format. No standard 5.25-inch CP/M disk format exists, with Kaypro, Morrow Designs, Osborne, and others using their own. InfoWorld estimated in September 1981 that "about two dozen formats were popular enough that software creators had to consider them to reach the broadest possible market". JRT Pascal, for example, provided versions on 5.25-inch disk for North Star, Osborne, Apple, Heath hard sector and soft sector, and Superbrain, and one 8-inch version. Ellis Computing also offered its software for both Heath formats, and 16 other 5.25-inch formats including two different TRS-80 CP/M modifications. Certain disk formats were more popular than others. Most software was available in the Xerox 820 format, and other computers such as the Kaypro II were compatible with it. No single manufacturer, however, prevailed in the 5.25-inch era of CP/M use, and disk formats were often not portable between hardware manufacturers. A software manufacturer had to prepare a separate version of the program for each brand of hardware on which it was to run. With some manufacturers (Kaypro is an example), there was not even standardization across the company's different models. Because of this situation, disk format translation programs, which allowed a machine to read many different formats, became popular and reduced the confusion, as did programs like Kermit, which allowed transfer of data and programs from one machine to another using the serial ports that most CP/M machines had. Various formats were used depending on the characteristics of particular systems and to some degree the choices of the designers. CP/M supported options to control the size of reserved and directory areas on the disk, and the mapping between logical disk sectors (as seen by CP/M programs) and physical sectors as allocated on the disk. There were many ways to customize these parameters for every system, but once they had been set, no standardized way existed for a system to load parameters from a disk formatted on another system. The degree of portability between different CP/M machines depended on the type of disk drive and controller used, since many different floppy types existed in the CP/M era, in both 8-inch and 5.25-inch format.
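In the BIOS, the logical-to-physical sector mapping mentioned above usually took the form of a skew (interleave) table, spacing consecutive logical sectors around the track so the processor had time to handle one sector before the next arrived under the head. The sketch below is illustrative C rather than BIOS code; the 26-sector table reproduces the 6:1 software interleave widely documented for the standard single-density 8-inch format, and SECTRAN is the name of the BIOS entry point that performed this translation:

```c
#include <stdint.h>

/* Skew table for the standard single-density 8-inch format: 26 sectors
   per track with a 6:1 software interleave. Indexed by logical sector
   0..25, yielding the physical sector number 1..26 recorded on disk. */
static const uint8_t skew_8in[26] = {
     1,  7, 13, 19, 25,  5, 11, 17, 23,  3,
     9, 15, 21,  2,  8, 14, 20, 26,  6, 12,
    18, 24,  4, 10, 16, 22
};

/* The BIOS SECTRAN function in miniature: translate the sector number
   the BDOS asks for into the sector actually recorded on the disk. */
static uint8_t sectran(uint8_t logical)
{
    return skew_8in[logical % 26];
}
```

Since the table lived in the machine-specific BIOS, each manufacturer was free to choose a different interleave, which is one more reason disks from different CP/M machines were so often mutually unreadable.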
Disks could be hard or soft sectored, single or double density, single or double sided, 35 track, 40 track, 77 track, or 80 track, and the sector layout, size and interleave could vary widely as well. Although translation programs could allow the user to read disk types from different machines, it also depended on the drive type and controller. By 1982, soft sector, single sided, 40 track 5.25-inch disks had become the most popular format to distribute CP/M software on as they were used by the most common consumer-level machines of that time such as the Apple II, TRS-80, Osborne 1, Kaypro II, and IBM PC. A translation program allowed the user to read any disks on his machine that had a similar format—for example, the Kaypro II could read TRS-80, Osborne, IBM PC, and Epson disks. Other disk types such as 80 track or hard sectored were completely impossible to read. The first half of double sided disks (like the Epson QX-10's) could be read because CP/M accessed disk tracks sequentially with track 0 being the first (outermost) track of side 1 and track 79 (on a 40 track disk) being the last (innermost) track of side 2. Apple II users could not use anything but Apple's GCR format and so had to obtain CP/M software on Apple format disks or else transfer it via serial link. The fragmented CP/M market, requiring distributors either to stock multiple formats of disks or to invest in multiformat duplication equipment, compared with the more standardized IBM PC disk formats, was a contributing factor to the rapid obsolescence of CP/M after 1981. One of the last notable CP/M capable machines to appear was the Commodore 128 in 1985, which had a Z80 for CP/M support in addition to its native mode using a 6502-derivative CPU. Using CP/M required either a 1571 or 1581 disk drive which could read soft sector 40 track MFM format disks. The first computer to use a 3.5-inch floppy drive, the Sony SMC-70, ran CP/M 2.2. The Commodore 128, Bondwell-2 laptop, Micromint/Ciarcia SB-180, MSX and TRS-80 Model 4 (running Montezuma CP/M 2.2) also supported the use of CP/M with 3.5-inch floppy disks. The Amstrad PCW ran CP/M using 3 inch floppy drives at first, and later switched to the 3.5 inch drives. Graphics Although graphics-capable S-100 systems existed from the commercialization of the S-100 bus, CP/M did not provide any standardized graphics support until 1982 with GSX (Graphics System Extension). Owing to the small memory available, graphics was never a common feature associated with 8-bit CP/M operating systems. Most systems could only display rudimentary ASCII art charts and diagrams in text mode or by using a custom character set. Some computers in the Kaypro line and the TRS-80 Model 4 had video hardware supporting block graphics characters, and these were accessible to assembler programmers and BASIC programmers using the CHR$ command. The Model 4 could display 640 by 240 pixel graphics with an optional high resolution board. Multi-user In 1979, a multi-user compatible derivative of CP/M was released. MP/M allowed multiple users to connect to a single computer, using multiple terminals to provide each user with a screen and keyboard. Later versions ran on 16-bit processors. CP/M Plus The last 8-bit version of CP/M was version 3, often called CP/M Plus, released in 1983. Its BDOS was designed by Brown. It incorporated the bank switching memory management of MP/M in a single-user single-task operating system compatible with CP/M 2.2 applications. 
CP/M 3 could therefore use more than 64 KB of memory on an 8080 or Z80 processor. The system could be configured to support date stamping of files. The operating system distribution software also included a relocating assembler and linker. CP/M 3 was available for the last generation of 8-bit computers, notably the Amstrad PCW, the Amstrad CPC, the ZX Spectrum +3, the Commodore 128, MSX machines and the Radio Shack TRS-80 Model 4. The 16-bit world There were versions of CP/M for some 16-bit CPUs as well. The first version in the 16-bit family was CP/M-86 for the Intel 8086 in November 1981. Kathryn Strutynski was the project manager for the evolving CP/M-86 line of operating systems. At this point, the original 8-bit CP/M became known by the retronym CP/M-80 to avoid confusion. CP/M-86 was expected to be the standard operating system of the new IBM PCs, but DRI and IBM were unable to negotiate development and licensing terms. IBM turned to Microsoft instead, and Microsoft delivered PC DOS based on 86-DOS. Although CP/M-86 became an option for the IBM PC after DRI threatened legal action, it never overtook Microsoft's system. Most customers were repelled by the significantly greater price IBM charged for CP/M-86 over PC DOS. When Digital Equipment Corporation (DEC) put out the Rainbow 100 to compete with IBM, it came with CP/M-80 using a Z80 chip, CP/M-86 or MS-DOS using an 8088 microprocessor, or CP/M-86/80 using both. The Z80 and 8088 CPUs ran concurrently. A benefit of the Rainbow was that it could continue to run 8-bit CP/M software, preserving a user's possibly sizable investment as they moved into the 16-bit world of MS-DOS. A similar dual-processor adaptation was named CP/M 8-16. The CP/M-86 adaptation for the 8085/8088-based Zenith Z-100 also supported running programs for both of its CPUs. CP/M-86 was soon followed by another 16-bit version, CP/M-68K for the Motorola 68000. The original version of CP/M-68K in 1982 was written in Pascal/MT+68k, but it was later ported to C. CP/M-68K, already running on the Motorola EXORmacs systems, was initially to be used in the Atari ST computer, but Atari decided to go with a newer disk operating system called GEMDOS. CP/M-68K was also used on the SORD M68 and M68MX computers. In 1982 there was also a port from CP/M-68K to the 16-bit Zilog Z8000 for the Olivetti M20, written in C, named CP/M-8000. These 16-bit versions of CP/M required application programs to be re-compiled for the new CPUs. Some programs written in assembly language could be automatically translated for a new processor. One tool for this was Digital Research's XLT86, which translated .ASM source code for the Intel 8080 processor into .A86 source code for the Intel 8086. The translator would also optimize the output for code size and take care of calling conventions, so that CP/M-80 and MP/M-80 programs could be ported to the CP/M-86 and MP/M-86 platforms automatically. XLT86 itself was written in PL/I-80 and was available for CP/M-80 platforms as well as for VAX/VMS. MS-DOS takes over Many expected that CP/M would be the standard operating system for 16-bit computers. In 1980 IBM approached Digital Research, at Bill Gates' suggestion, to license a forthcoming version of CP/M for its new product, the IBM Personal Computer. Upon the failure to obtain a signed non-disclosure agreement, the talks failed, and IBM instead contracted with Microsoft to provide an operating system. The resulting product, MS-DOS, soon began outselling CP/M.
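Such mechanical translation was feasible because the 8086 had been designed with a deliberate register correspondence to the 8080. The table below renders that published correspondence as a small C lookup; it is illustrative only, since XLT86 itself was written in PL/I-80 and performed far more analysis (flags, calling conventions, code-size optimization):

```c
#include <stdio.h>
#include <string.h>

/* Intel's published 8080-to-8086 register correspondence, the basis
   for mechanical translators such as XLT86. */
static const struct { const char *r8080, *r8086; } regmap[] = {
    { "A", "AL" },  { "B", "CH" },  { "C", "CL" },
    { "D", "DH" },  { "E", "DL" },  { "H", "BH" },
    { "L", "BL" },  { "M", "[BX]" },   /* M = memory addressed by HL */
    { "SP", "SP" },
};

static const char *xlat_reg(const char *r)
{
    for (size_t i = 0; i < sizeof regmap / sizeof regmap[0]; i++)
        if (strcmp(regmap[i].r8080, r) == 0)
            return regmap[i].r8086;
    return NULL;                       /* not a bare register operand */
}

int main(void)
{
    /* The 8080 instruction "MOV A,M" (load A from memory at HL)
       becomes "MOV AL,[BX]" under this mapping. */
    printf("MOV %s,%s\n", xlat_reg("A"), xlat_reg("M"));
    return 0;
}
```

With registers mapped one-for-one, most of the remaining work concerned instructions, flags, and addressing modes rather than data layout, which is what made largely automatic ports of CP/M-80 programs practical.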
Many of the basic concepts and mechanisms of early versions of MS-DOS resembled those of CP/M. Internals like file-handling data structures were identical, and both referred to disk drives with a letter (A:, B:, etc.). MS-DOS's main innovation was its FAT file system. This similarity made it easier to port popular CP/M software like WordStar and dBase. However, CP/M's concept of separate user areas for files on the same disk was never ported to MS-DOS. Since MS-DOS had access to more memory (as few IBM PCs were sold with less than 64 KB of memory, while CP/M could run in 16 KB if necessary), more commands were built into the command-line shell, making MS-DOS somewhat faster and easier to use on floppy-based computers. Although one of the first peripherals for the IBM PC was a SoftCard-like expansion card that let it run 8-bit CP/M software, InfoWorld stated in 1984 that efforts to introduce CP/M to the home market had been largely unsuccessful and most CP/M software was too expensive for home users. In 1986 the magazine stated that Kaypro had stopped production of 8-bit CP/M-based models to concentrate on sales of MS-DOS compatible systems, long after most other vendors had ceased production of new equipment and software for CP/M. CP/M rapidly lost market share as the microcomputing market moved to the IBM-compatible platform, and it never regained its former popularity. Byte magazine, at the time one of the leading industry magazines for microcomputers, essentially ceased covering CP/M products within a few years of the introduction of the IBM PC. For example, in 1983 there were still a few advertisements for S-100 boards and articles on CP/M software, but by 1987 these were no longer found in the magazine. Later versions of CP/M-86 made significant strides in performance and usability and were made compatible with MS-DOS. To reflect this compatibility the name was changed, and CP/M-86 became DOS Plus, which in turn became DR DOS. ZCPR ZCPR (the Z80 Command Processor Replacement) was introduced on 2 February 1982 as a drop-in replacement for the standard Digital Research console command processor (CCP) and was initially written by a group of computer hobbyists who called themselves "The CCP Group". They were Frank Wancho, Keith Petersen (the archivist behind Simtel at the time), Ron Fowler, Charlie Strom, Bob Mathias, and Richard Conn. Richard was, in fact, the driving force in this group (all of whom maintained contact through email). ZCPR1 was released on a disk put out by SIG/M (Special Interest Group/Microcomputers), a part of the Amateur Computer Club of New Jersey. ZCPR2 was released on 14 February 1983. It was released as a set of ten disks from SIG/M. ZCPR2 was upgraded to 2.3, and also was released in 8080 code, permitting the use of ZCPR2 on 8080 and 8085 systems. ZCPR3 was released on 14 July 1984, as a set of nine disks from SIG/M. The code for ZCPR3 could also be compiled (with reduced features) for the 8080 and would run on systems that did not have the requisite Z80 microprocessor. In January 1987, Richard Conn stopped developing ZCPR, and Echelon asked Jay Sage (who already had a privately enhanced ZCPR 3.1) to continue work on it. Thus, ZCPR 3.3 was developed and released. ZCPR 3.3 no longer supported the 8080 series of microprocessors, and added the most features of any upgrade in the ZCPR line. 
Features of ZCPR as of version 3 included: shells aliases I/O redirection flow control named directories search paths custom menus passwords online help ZCPR 3.3 also included a full complement of utilities with considerably extended capabilities. While enthusiastically supported by the CP/M user base of the time, ZCPR alone was insufficient to slow the demise of CP/M. East-bloc CP/M derivatives A number of CP/M-80 derivatives existed in the former East-bloc under various names including SCP, SCP/M, CP/A, CP/J, CP/KC, CP/KSOB, CP/L, CP/Z, MICRODOS, BCU880, ZOAZ, OS/M, TOS/M, ZSDOS, M/OS, COS-PSA, DOS-PSA, CSOC, CSOS, CZ-CPM and others. There were also CP/M-86 derivatives named SCP1700, CP/K and K8918-OS. They were produced by the East German VEB Robotron and others. Legacy A number of behaviors exhibited by Microsoft Windows are a result of backward compatibility with MS-DOS, which in turn attempted some backward compatibility with CP/M. The drive letter and 8.3 filename conventions in MS-DOS (and early Windows versions) were originally adopted from CP/M. The wildcard matching characters used by Windows (? and *) are based on those of CP/M, as are the reserved filenames used to redirect output to a printer ("PRN:") and the console ("CON:"). The drive names A and B were used to designate the two floppy disk drives that CP/M systems typically used; when hard drives appeared they were designated C, which survived into MS-DOS as the C:\> command prompt. The control character ^Z marking the end of some text files can also be attributed to CP/M. Various commands in DOS were modelled after CP/M commands, some of which even carried the same name, like DIR, REN/RENAME, or TYPE (and ERA/ERASE in DR-DOS). File extensions like .TXT or .COM are still used to identify file types on many operating systems. Source code releases In 1997 and 1998 Caldera released some CP/M 2.2 binaries and source code under an open source license, also allowing the redistribution and modification of further collected Digital Research files related to the CP/M and MP/M families through Tim Olmstead's "The Unofficial CP/M Web site" since 1997. After Olmstead's death on 12 September 2001, the distribution license was refreshed and expanded by Lineo, who had meanwhile become the owner of those Digital Research assets, on 19 October 2001. In October 2014, to mark the 40th anniversary of the first presentation of CP/M, the Computer History Museum released early source code versions of CP/M. Hobby and "retro" computing A number of active vintage, hobby and retro-computing enthusiasts and groups, and some small commercial businesses, still develop and support computer platforms that use CP/M (mostly 2.2) as the host operating system. See also Amstrad CP/M Plus character set CPMulator CP/NET and CP/NOS Cromemco DOS, an operating system independently derived from CP/M Eagle Computer IMDOS List of machines running CP/M MP/M MP/NET and MP/NOS Multiuser DOS Pascal/MT+ SpeedStart CP/M 86-DOS References Further reading (NB: this PBS series includes the details of IBM's choice of Microsoft DOS over Digital Research's CP/M for the IBM PC) External links The Unofficial CP/M Web site (founded by Tim Olmstead) - Includes source code Gaby Chaudry's Homepage for CP/M and Computer History - includes ZCPR materials CP/M Main Page - John C.
Elliott's technical information site CP/M Internals - CP/M internals MaxFrame's Digital Research CP/M page ftp://ftp.uni-bayreuth.de/pub/pc/caldera/cpm2.2/ How to transfer CP/M floppy disks CP/M variants Microcomputer software Disk operating systems Digital Research operating systems Discontinued operating systems Floppy disk-based operating systems Free software operating systems History of computing 1974 software Formerly proprietary software
Operating System (OS)
475
Ubuntu Netbook Edition Ubuntu Netbook Edition (UNE), known as Ubuntu Netbook Remix (UNR) prior to the release of Ubuntu 10.04, is a discontinued version of the Ubuntu operating system (OS) that had been optimized to enable it to work better on netbooks and other devices with small screens or with the Intel Atom CPU. UNE was available starting with Ubuntu release 8.04 ("Hardy Heron"). UNE was also an optional preinstalled operating system on some netbooks, such as the Dell Inspiron Mini 10v and the Toshiba NB100, and also ran on popular models such as the Acer Aspire One and the Asus Eee PC. Canonical Ltd., the developer of Ubuntu, collaborated with the Moblin project to ensure optimization for lower hardware requirements and longer battery life. Beginning with version 10.10, Ubuntu Netbook Edition used the Unity desktop as its desktop interface. The classic netbook interface was available in Ubuntu's software repositories as an option. Because Ubuntu's desktop edition moved to the same Unity interface as the netbook edition, starting with Ubuntu 11.04 the netbook edition was merged into the desktop edition. Installation UNE could be installed in several ways: by first installing the regular Ubuntu package, then adding the UNE repository and installing the relevant packages. Starting with Ubuntu 10.04, the packages were available in the main repositories. by downloading UNE directly from the Ubuntu server, as either a .iso or .img file, and writing the file to a USB stick (using Ubuntu Live USB Creator or UNetbootin) or CD. an option to install via the Wubi installer was available for the Ubuntu 10.04 "Lucid Lynx" release. Unity Starting with UNE 10.10, the interface was switched to Unity. Because the desktop version of Ubuntu was also changed to the Unity interface, the netbook edition was rolled into the general Ubuntu distribution starting with Ubuntu 11.04 Natty Narwhal, and the netbook edition was discontinued as a separate distribution. Variants Dell Ubuntu Netbook Edition was built specifically for the hardware profile of the Inspiron Mini 9, and was also available for the Inspiron Mini 12. It included a custom-built interface and launcher as well as non-free codecs such as MPEG-4 and MP3. It began shipping on September 22, 2008. EasyPeasy is considered to be among the first UNE-based distributions, with a focus on the use of proprietary software like Skype by default and also integrating a set of different standard applications and drivers. Support The minimum requirements were an Intel Atom CPU of at least 1.6 GHz, 512 MB of RAM and 4 GB of storage. Ubuntu Netbook Edition was officially shipped with the following netbooks: Sylvania G Netbook Meso Toshiba NB100 System76 Starling Netbook Dell Mini10v, Mini10, Latitude 2100 & Latitude 2110 Advent 4211C Samsung N110 ZaReason Terra HD netbook and other ZaReason laptop models See also Comparison of netbook-oriented Linux distributions EasyPeasy Eeebuntu Joli OS Leeenux Linux Ubuntu for Android Ubuntu Phone References External links Official product page (Canonical Ltd.) UNE packages (Launchpad) UNE Ubuntu Wiki UNE netbook support Canonical Announces Availability of Ubuntu 9.04 Netbook Remix Ubuntu Netbook Remix: a detailed explanation (Free Software Magazine) Ubuntu derivatives Mobile computers Linux distributions
Operating System (OS)
476
Symobi Symobi (System for mobile applications) is a proprietary, modern, mobile real-time operating system. It is developed by the German company Miray Software, since 2002 partly in cooperation with the research group of Prof. Dr. Uwe Baumgarten at the Technical University of Munich. The graphical operating system is designed for embedded and mobile systems; it is also used on PCs by end users and in industry. Design The basis of Symobi is the message-oriented operating system µnOS, which is in turn based on the real-time microkernel Sphere. µnOS offers communication through message passing between all processes (from basic operating system service processes to application processes) using the integrated process manager. On the lowest level, the Sphere microkernel is responsible for implementing and enforcing security mechanisms and resource management in real time. Symobi itself additionally offers a complete graphical operating system environment with system services, a consistent graphical user interface, as well as standard programs and drivers. Classification Symobi combines features from different fields of application in one operating system. As a modern operating system it offers separate, isolated processes, lightweight threads, and dynamic libraries, like Windows, Linux, and Unix, for example. In the area of mobile embedded operating systems, its low resource requirements and support for mobile devices make it resemble systems like Windows CE, SymbianOS or Palm OS. With conventional real-time operating systems like QNX or VxWorks it shares real-time capability and support for different processor architectures. History The development of Sphere, µnOS and Symobi is based on the ideas and work of Konrad Foikis and Michael Haunreiter (founders of the company Miray Software), initiated during their schooldays, even before they started studying computer science. The basic concept was to combine useful and necessary features (like real-time operation and portability) with modern characteristics (like a microkernel and inter-process communication) to form a stable and reliable operating system. Originally, it was only supposed to serve as a basis for the various application programs developed by Foikis and Haunreiter during their studies. In 2000, Konrad Foikis and Michael Haunreiter founded the company Miray Software when they realised that µnOS was suited for far more than their own use. The cooperation with TU Munich began two years later. In 2006, the first official version of Symobi was completed, and in autumn of the same year it was introduced to professional circles at the Systems exhibition. Support Single-Core: Intel: 80386, 80486, Pentium, Pentium Pro, Pentium II, Pentium III, Pentium 4, Core Solo, Core 2 Solo AMD: Élan SC410, Élan SC520, K6, K6-2, K6-III, Duron, Sempron, Athlon, Opteron VIA: Cyrix Mark II, Cyrix III, C3, C7, Eden Rise: mP6 Marvell / Intel: PXA-250, PXA-255, PXA-270, IXP-420 Motorola / Freescale: G2, G3, G4 Multi-Core: Intel: Pentium 4, Core Duo, Core 2 Duo AMD: Athlon X2, Opteron Application areas Symobi is suited for hand-held products (portable communicators, internet appliances), as well as for consumer appliances (set-top boxes, home gateways, games, consoles). Furthermore, it is used in the areas of automotive (control and infotainment systems), industrial control systems (motion control, process control), and point of sale (cashier systems, ticket machines, information terminals).
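Symobi's API is proprietary and has not been published, so the following C sketch is purely illustrative: it shows the general shape of the message-oriented IPC the article describes, where even calls to operating system services travel as messages routed by a process manager. Every name and constant in it is hypothetical:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* One inter-process message: routed by destination process ID. */
struct message {
    uint32_t from, to;     /* process IDs */
    uint32_t tag;          /* message type, e.g. a hypothetical FS_READ */
    uint8_t  payload[56];  /* small fixed-size body */
};

#define QUEUE_LEN 16
static struct message queue[QUEUE_LEN];  /* one queue per process in reality */
static int q_head, q_used;

/* Process-manager routing step: append to the destination's queue. */
static int pm_send(const struct message *m)
{
    if (q_used == QUEUE_LEN) return -1;        /* queue full */
    queue[(q_head + q_used++) % QUEUE_LEN] = *m;
    return 0;
}

/* Receive the next message; a real kernel would block when empty. */
static int pm_receive(struct message *out)
{
    if (q_used == 0) return -1;
    *out = queue[q_head];
    q_head = (q_head + 1) % QUEUE_LEN;
    q_used--;
    return 0;
}

int main(void)
{
    /* A client process asks a file-system service process for data by
       message rather than by direct function call. */
    struct message req = { .from = 42, .to = 7, .tag = 0x0001 };
    strcpy((char *)req.payload, "README.TXT");
    pm_send(&req);

    struct message got;
    if (pm_receive(&got) == 0)
        printf("service %u received tag %#x from %u\n",
               (unsigned)got.to, (unsigned)got.tag, (unsigned)got.from);
    return 0;
}
```

The design choice this illustrates is uniformity: when every interaction is a message, the same mechanism carries application traffic, driver requests, and system calls, which is what lets a microkernel such as Sphere stay small.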
Advantages and disadvantages The operating system is distinguished by its real-time microkernel and its multi-processor capability. Furthermore, it is portable and therefore not bound to specific hardware platforms. Symobi's inter-process communication provides security and flexibility. It has a modern architecture and runs with only modest resource requirements (processor, system memory). The system offers a Java VM. In the area of standard appliances, the operating system is not yet widespread. It has only rudimentary POSIX support and restricted hardware support through drivers. In addition, Symobi is not an open-source operating system and at present does not offer office applications, email functions, or a web browser. References Miray Software: Introducing Symobi, a modern embeddable RTOS, 2006 External links Symobi Chair for Operating Systems at the Technical University of Munich Miray Software Embedded operating systems Real-time operating systems Microkernel-based operating systems
Operating System (OS)
477
Resurrection Remix OS Resurrection Remix OS, abbreviated as RR, is a free and open-source operating system for smartphones and tablet computers, based on the Android mobile platform. UX designer and head developer Altan KRK and Varun Date started the project in 2012. History On February 9, 2018, Resurrection Remix 6.0.0, based on Android 8.1 Oreo, was released after months in development. In early 2019 Resurrection Remix 7.0.0, 7.0.1 and 7.0.2 were released, based on Android 9 Pie. The project seemed abandoned after a disagreement between two major developers caused one of them (Acar) to leave, but in mid-2020 Resurrection Remix came back with 8.5.7, based on Android 10. Version 8.7.3 is the latest, also based on Android 10. Reviews and studies A DroidViews review of Resurrection Remix OS called it "feature packed," and complimented the large online community, updates, and customization options, as compared with the simplicity of LineageOS. ZDNet stated that Resurrection Remix OS was a custom ROM that could evade SafetyNet exclusions and display the Netflix app in the Play Store. Resurrection Remix OS was one of a few operating systems mentioned as Android upgrade options in Upcycled Technology. Resurrection Remix OS was one of a handful of operating systems supported by the OpenKirin development team for bringing pure Android to Huawei devices, and was one of two suggested for the OnePlus 5. In a detailed 2017 review, Stefanie Enge of Curved.de said Resurrection Remix combined the best of LineageOS, OmniROM and SlimRoms. The camera performance was criticized; however, the extensive customization options, speed and lack of Google services were all acclaimed. In a study of phone sensors, Resurrection Remix OS was one of six Android operating systems used on two Xiaomi devices to compare gyroscope, accelerometer, orientation and light sensor data with values recorded by highly accurate reference sensors. Supported devices More than 150 devices have been supported. See also List of custom Android firmware References External links Custom Android firmware Linux distributions
Operating System (OS)
478
Aleph kernel Aleph is a discontinued operating system kernel developed at the University of Rochester as part of their RIG project in 1975. Aleph was an early step on the road to the creation of the first practical microkernel operating system, Mach. Aleph used inter-process communication to move data between programs and the kernel, so applications could transparently access resources on any machine on the local area network (which at the time was a 3-Mbit/s experimental Xerox Ethernet). The project eventually petered out after several years due to rapid changes in the computer hardware market, but the ideas led to the creation of Accent at Carnegie Mellon University, leading in turn to Mach. Applications written for the RIG system communicated via ports. Ports were essentially message queues that were maintained by the Aleph kernel, identified by a machine-unique (as opposed to globally unique) ID consisting of a (process ID, port ID) pair. Processes were automatically assigned a process number, or pid, on startup, and could then ask the kernel to open ports. Processes could open several ports and then "read" them, automatically blocking and allowing other programs to run until data arrived. Processes could also "shadow" another process, receiving a copy of every message sent to the one being shadowed. Similarly, programs could "interpose" on another, receiving its messages and essentially cutting the original recipient out of the conversation. RIG was implemented on a number of Data General Eclipse minicomputers. The ports were implemented using memory buffers, limited to 2 kB in size. This produced significant overhead when copying large amounts of data. Another problem, realized only in retrospect, was that the use of global IDs allowed malicious software to "guess" at ports and thereby gain access to resources it should not have had. And since those IDs were based on the program ID, the port IDs changed if the program was restarted, making it difficult to write servers whose clients could rely on a specific port number for service. References Monolithic kernels
Operating System (OS)
479
Mac OS X Panther Mac OS X Panther (version 10.3) is the fourth major release of macOS, Apple's desktop and server operating system. It followed Mac OS X 10.2 and preceded Mac OS X Tiger. It was released on October 24, 2003. System requirements Panther's system requirements are: PowerPC G3, G4, or G5 processor (at least 233 MHz) Built-in USB At least 128 MB of RAM (256 MB recommended, minimum of 96 MB supported unofficially) At least 1.5 GB of available hard disk space CD drive Internet access requires a compatible service provider; iDisk requires a .Mac account Video conferencing requires: 333 MHz or faster PowerPC G3, G4, or G5 processor Broadband internet access (100 kbit/s or faster) Compatible FireWire DV camera or web camera Since a New World ROM was required for Mac OS X Panther, certain older computers (such as beige Power Mac G3s and 'Wall Street' PowerBook G3s) were unable to run Panther by default. Third-party software (such as XPostFacto) can, however, override checks made during the install process; otherwise, installation or upgrades from Jaguar fail on these older machines. Panther still fully supported the Classic environment for running older Mac OS 9 applications, but made Classic application windows double-buffered, interfering with some applications written to draw directly to screen. New and changed features End-user features Apple advertised that Mac OS X Panther had over 150 new features, including: Finder: Updated with a brushed-metal interface, a new live search engine, customizable Sidebar, secure deletion, colored labels (resurrected from classic Mac OS) in the filesystem and Zip support built in. The Finder icon was also changed. Fast user switching: Allows a user to remain logged in while another user logs in, and quickly switch among several sessions. Exposé: Helps the user manage windows by showing them all as thumbnails. TextEdit: TextEdit is now also compatible with Microsoft Word (.doc) documents. Xcode developer tools: Faster compile times with gcc 3.3. Preview: Increased speed of PDF rendering. QuickTime: Now supports the Pixlet high-definition video codec. New applications in Panther Font Book: A font manager which simplifies viewing character maps, and adding new fonts that can be used systemwide. The app also allows the user to organize fonts into collections. FileVault: On-the-fly encryption and decryption of a user's home folder. iChat AV: The new version of iChat, now with built-in audio and video conferencing. X11: X11 is built into Panther. Safari: A new web browser that was developed to replace Internet Explorer for Mac when the contract between Apple and Microsoft ended, although Internet Explorer for Mac was still available. Safari 1.0 was included in an update to Jaguar but became the default browser in Panther. Other Microsoft Windows interoperability improvements, including out-of-the-box support for Active Directory and SecurID-based VPNs. Built-in fax support. Release history References 3 PowerPC operating systems 2003 software Computer-related introductions in 2003
Operating System (OS)
480
ODS ODS may refer to: Computing, Internet and information technology Files-11 (On-Disk Structure), a DEC filesystem OpenDocument Spreadsheet file format Online dating service Operational data store, an intermediate data warehouse for databases OpenDNSSEC, a security extension of the DNS protocol Optical data storage, a technology for storing information Science and technology Octadecylsilyl, also known as C18, a surface coating used in reversed-phase chromatography Oxide dispersion strengthened alloys Ozone-depleting substance, chemicals which contribute to ozone depletion Osmotic demyelination syndrome, a neurological condition involving severe damage to the myelin sheath of nerve cells Military operations Operation Defensive Shield Operation Desert Storm Other Civic Democratic Party (Czech Republic) (Czech: Občanská demokratická strana) Civic Democratic Party (Slovakia) (Slovak: Občianska demokratická strana) Odessa Airport, an airport in Odessa, Ukraine (IATA code ODS) L'Officiel du jeu Scrabble, the reference dictionary for Scrabble in French-speaking countries One Day School, a gifted education program in New Zealand Operating Deflection Shape, a method used for visualisation of the vibration pattern of a machine Ordbog over det danske Sprog, a dictionary of Danish Overdoses (especially drug overdoses) Old Dock Sill, in Liverpool in England Orbital Dysfunctional Syndrome, from the film Pandorum Occupy Dame Street, a protest in Dublin, Ireland in 2011–12
Operating System (OS)
481
Tails (operating system) Tails, or The Amnesic Incognito Live System, is a security-focused Debian-based Linux distribution aimed at preserving privacy and anonymity. It connects to the Internet exclusively through the anonymity network Tor. The system is designed to be booted as a live DVD or live USB, and leaves no digital footprint on the machine unless explicitly told to do so. It can also be run as a virtual machine, with some additional security risks. The Tor Project provided financial support for its development in the beginnings of the project, and continues to do so alongside numerous corporate and anonymous sponsors. History Tails was first released on 23 June 2009. It is the next iteration of development on Incognito, a discontinued Gentoo-based Linux distribution. Besides the Tor Project's early support, Tails also received funding from the Open Technology Fund, Mozilla, and the Freedom of the Press Foundation. Laura Poitras, Glenn Greenwald, and Barton Gellman have each said that Tails was an important tool they used in their work with National Security Agency whistleblower Edward Snowden. From release 3.0, Tails requires a 64-bit processor to run. Features Tails's pre-installed desktop environment is GNOME 3. The system includes essential software for functions such as reading and editing documents, image editing, video watching and printing. Other software from Debian can be installed at the user's behest. Tails includes a unique variety of software that handles the encryption of files and internet transmissions, cryptographic signing and hashing, and other functions important to security. It is pre-configured to use Tor, with multiple connection options. It tries to force all connections to use Tor and blocks connection attempts outside Tor. For networking, it features the Tor Browser, instant messaging, email, file transmission and monitoring of local network connections for security. By design, Tails is "amnesic". It runs in the computer's random access memory (RAM) and does not write to a hard drive or other storage medium. The user may choose to keep files or applications on their Tails drive in "persistent storage", which is not hidden and is detectable by forensic analysis. While shutting down by normal or emergency means, Tails overwrites most of the used RAM to avoid a cold boot attack. Security incidents In 2014 Das Erste reported that the NSA's XKeyscore surveillance system set threat definitions for people who searched for Tails using a search engine or visited the Tails website. A comment in XKeyscore's source code calls Tails "a comsec [communications security] mechanism advocated by extremists on extremist forums". In the same year, Der Spiegel published slides from an internal National Security Agency presentation dating to June 2012, in which the NSA deemed Tails on its own a "major threat" to its mission and, in conjunction with other privacy tools, "catastrophic". In 2017, the FBI used malicious code developed by Facebook to identify the sexual extortionist and Tails user Buster Hernandez through a zero-day vulnerability in the default video player. The exploit was never explained to or discovered by the Tails developers, but it is believed that the vulnerability was patched in a later release of Tails. Hernandez was not easy to find: for a long time, the FBI and Facebook had searched for him without success before resorting to the development of the custom hacking tool.
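Forcing traffic through Tor, as Tails does system-wide, ultimately means speaking the SOCKS5 protocol (RFC 1928) to a local Tor proxy, which listens on port 9050 by default. The C sketch below performs the two-step handshake for a hostname-based connect; it is a bare illustration with minimal error handling and no authentication, not code from Tails itself:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Connect to host:port through a Tor SOCKS5 proxy at 127.0.0.1:9050.
   Returns a connected socket, or -1 on any failure. */
int tor_connect(const char *host, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    struct sockaddr_in proxy = { 0 };
    proxy.sin_family = AF_INET;
    proxy.sin_port = htons(9050);              /* Tor's default SocksPort */
    inet_pton(AF_INET, "127.0.0.1", &proxy.sin_addr);
    if (connect(fd, (struct sockaddr *)&proxy, sizeof proxy) < 0)
        goto fail;

    /* Greeting: version 5, one method offered, "no authentication". */
    unsigned char hello[3] = { 0x05, 0x01, 0x00 }, reply[10];
    if (write(fd, hello, 3) != 3 || read(fd, reply, 2) != 2 ||
        reply[0] != 0x05 || reply[1] != 0x00)
        goto fail;

    /* CONNECT request with an unresolved domain name (address type 3),
       so name resolution also happens inside the Tor network. */
    size_t hlen = strlen(host);
    if (hlen > 255) goto fail;
    unsigned char req[262];
    req[0] = 0x05; req[1] = 0x01; req[2] = 0x00; req[3] = 0x03;
    req[4] = (unsigned char)hlen;
    memcpy(req + 5, host, hlen);
    req[5 + hlen] = port >> 8;
    req[6 + hlen] = port & 0xFF;
    if (write(fd, req, 7 + hlen) != (ssize_t)(7 + hlen))
        goto fail;

    /* Reply byte 1 is the status: 0x00 means the circuit is up. */
    if (read(fd, reply, sizeof reply) < 4 || reply[1] != 0x00)
        goto fail;
    return fd;

fail:
    close(fd);
    return -1;
}
```

Passing the hostname unresolved matters: resolving it locally first would leak DNS queries outside Tor, exactly the kind of side channel Tails is built to close.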
See also Crypto-anarchism Dark web Deep web Freedom of information GlobaLeaks GNU Privacy Guard I2P Internet censorship Internet privacy Off-the-Record Messaging Proxy server Security-focused operating systems Tor (anonymity network) Tor2web Whonix References External links Anonymity networks Debian-based distributions Free security software I2P Operating system distributions bootable from read-only media Privacy software Tor (anonymity network) 2009 software Linux distributions
Operating System (OS)
482
Executable In computing, executable code, an executable file, or an executable program, sometimes simply referred to as an executable or binary, causes a computer "to perform indicated tasks according to encoded instructions", as opposed to a data file that must be interpreted (parsed) by a program to be meaningful. The exact interpretation depends upon the use. "Instructions" is traditionally taken to mean machine code instructions for a physical CPU. In some contexts, a file containing scripting instructions (such as bytecode) may also be considered executable. Generation of executable files Executable files can be hand-coded in machine language, although it is far more convenient to develop software as source code in a high-level language that can be easily understood by humans. In some cases, source code might be specified in assembly language instead, which remains human-readable while being closely associated with machine code instructions. The high-level language is compiled into either an executable machine code file or a non-executable machine code file, an object file of some sort; the equivalent process on assembly language source code is called assembly. Several object files are linked to create the executable. Object files, executable or not, are typically stored in a container format, such as Executable and Linkable Format (ELF) or Portable Executable (PE), each of which is operating-system-specific. This gives structure to the generated machine code, for example dividing it into sections such as .text (executable code), .data (initialized global and static variables), and .rodata (read-only data, such as constants and strings). Executable files typically also include a runtime system, which implements runtime language features (such as task scheduling, exception handling, calling static constructors and destructors, etc.) and interactions with the operating system, notably passing arguments and the environment and returning an exit status, together with other startup and shutdown features such as releasing resources like file handles. For C, this is done by linking in the crt0 object, which contains the actual entry point and does setup and shutdown by calling the runtime library. Executable files thus normally contain significant additional machine code beyond that directly generated from the specific source code. In some cases, it is desirable to omit this, for example for embedded systems development, or simply to understand how compilation, linking, and loading work. In C, this can be done by omitting the usual runtime, and instead explicitly specifying a linker script, which generates the entry point and handles startup and shutdown, such as calling main to start and returning exit status to the kernel at the end. Execution In order to be executed by the system (such as an operating system, firmware, or boot loader), an executable file must conform to the system's application binary interface (ABI). In simple interfaces, a file is executed by loading it into memory and jumping to the start of the address space and executing from there. In more complicated interfaces, executable files have additional metadata specifying a separate entry point. For example, in ELF, the entry point is specified in the header's e_entry field, which specifies the (virtual) memory address at which to start execution. In the GCC (GNU Compiler Collection) this field is set by the linker based on the _start symbol.
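On systems using ELF, this entry-point metadata is easy to inspect programmatically. The following C sketch (using the Linux/glibc <elf.h> header, assuming a 64-bit ELF file, and trimming error handling) reads the header and prints where execution would begin:

```c
#include <elf.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
        return 1;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;                     /* the ELF header, from <elf.h> */
    if (fread(&eh, sizeof eh, 1, f) != 1) { fclose(f); return 1; }
    fclose(f);

    /* e_ident begins with the magic bytes 0x7f 'E' 'L' 'F'. */
    if (memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
        fprintf(stderr, "not an ELF file\n");
        return 1;
    }

    /* e_entry holds the virtual address the loader jumps to; for a
       program linked with the usual runtime, that is _start. */
    printf("entry point: %#llx\n", (unsigned long long)eh.e_entry);
    return 0;
}
```

The same field is what readelf -h reports as the "Entry point address".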
See also Comparison of executable file formats Executable compression Executable text References External links EXE File Format at What Is Computer file systems Programming language implementation
Operating System (OS)
483
Team OS/2 Team OS/2 was an advocacy group formed to promote IBM's OS/2 operating system. Originally internal to IBM with no formal IBM support, Team OS/2 successfully converted to a grassroots movement formally supported (but not directed) by IBM, consisting of well over ten thousand OS/2 enthusiasts both within and without IBM. It is one of the earliest examples of both an online viral phenomenon and a cause attracting supporters primarily through online communications. The decline of Team OS/2 largely coincided with IBM's abandonment of OS/2 and with concurrent attacks orchestrated by Microsoft on OS/2, Team OS/2, and IBM's early attempts at online evangelism. History Beginnings Team OS/2 was a significant factor in the spread and acceptance of OS/2. Formed in February 1992, Team OS/2 began when IBM employee Dave Whittle, recently appointed by IBM to evangelize OS/2 online, formed an internal IBM discussion group titled TEAMOS2 FORUM on IBM's worldwide network, which, at the time, served more individuals than did the more academic Internet. The forum header stated the forum's purpose, and the forum went viral as increasing numbers of IBMers worldwide began to contribute a wide variety of ideas as to how IBM could effectively compete with Microsoft to establish OS/2 as the industry-standard desktop operating system. Within a short time, thousands of IBM employees had added the word TEAMOS2 to their internal phone directory listing, which enabled anyone within IBM to find like-minded OS/2 enthusiasts within the company and work together to overcome the challenges posed by IBM's size, insularity, and top-down marketing style. TEAMOS2 FORUM quickly caught the attention of some IBM executives, including Lee Reiswig and Lucy Baney, who, after initial scepticism, offered moral and financial support for Whittle's grassroots and online marketing efforts. IBM's official program for generating word-of-mouth enthusiasm was called the "OS/2 Ambassador Program", where OS/2 enthusiasts company-wide could win Gold, Silver, and Bronze Ambassador pins and corporate recognition with various levels of structured achievement. Both the OS/2 Ambassador Program and Team OS/2 were effective in evangelizing OS/2 within IBM, but only Team OS/2 was effective in generating support for the promotion of OS/2 outside of IBM. Externalization Whittle began to extend the Team OS/2 effort outside of IBM with various posts on CompuServe, Prodigy, bulletin boards, newsgroups, and other venues. He also made a proposal to IBM executives, which they eventually implemented when IBM Personal Software Products moved to Austin, Texas, that they form a "Grass Roots Marketing Department". Team OS/2 went external that spring, when the first Team OS/2 Party was held in Chicago. The IBM Marketing Office in Chicago created a huge banner visible from the streets; Microsoft reacted, with Steve Ballmer roaming the floor with an application on diskette that had been specially programmed to crash OS/2; and OS/2 enthusiasts gathered for an evening of excitement at the first Team OS/2 party. Tickets were limited to those who had requested them on one of the online discussion groups. Attendees were asked to nominate their favorite "Teamer" for the "Team OS/2 Hall of Fame", and those whose names were drawn came forward to tell the story of their nominee - what sacrifice they had made to promote OS/2 and why they were deserving of recognition. Prizes included limousine rides that evening.
At the end, all attendees received the first Team OS/2 T-shirt, which included the first Team OS/2 logo on the front and the distinctive IBM blue-stripe logo on the back - except with lower-case letters: "ibm/2", to represent the new IBM. Even the lead singer of the band Chicago, which had provided music for the event, asked for a T-shirt for each member of the band. One IBM executive in attendance said it was the first IBM event that had given him goosebumps. After that, word about the Team OS/2 phenomenon spread even more quickly, both within IBM and without. OS/2 enthusiasts spread the word to computer user groups across the United States, then eventually worldwide, independently of IBM marketing efforts. Whittle established multiple localized forums within IBM, such as TEAMNY, TEAMDC, TEAMFL, TEAMTX, and TEAMCA, which attracted new supporters and enabled enthusiastic followers to share ideas and success stories, plan events, and creatively apply what they were learning from one another. The "Teamer Invasion" of COMDEX in the Fall of 1993 was perhaps the high-water mark for Team OS/2. COMDEX was, at that time, the most important computer and electronics trade show, held in Las Vegas. Wearing the salmon-colored shirts which were to become associated with Team OS/2, the group's members - led by Doug Azzarito, Keith Wood, Mike Kogan, IBM User Group Manager Gene Barlow, and others - wandered the convention floors, promoting OS/2, providing demo discs to vendors, and offering to install the distributed version of OS/2 on display computers. Many Team OS/2 volunteers had traveled to the convention on their own, including some from overseas, so their independence and grassroots enthusiasm attracted significant attention in the media and amongst exhibitors. What little funding IBM provided went toward the shirts, "trinkets and trash", and an onsite headquarters for Teamers to coordinate their efforts and collect items to give to vendors. IBM had established the Grass Roots Marketing department proposed earlier, and had even tapped Vicci Conway and Janet Gobeille to provide support and guidance for Team OS/2, with Whittle voluntarily stepping aside from his previous day-to-day focus on supporting and monitoring Team OS/2 activities. Gobeille was nicknamed "Team Godmother", but everyone in IBM, especially Whittle, was wary of trying to direct volunteers or make Team OS/2 too structured or formal, in order to avoid "breaking something that works". According to the Team OS/2 Frequently Asked Questions document, Team OS/2 at one point had a presence (sponsoring members willing to publish their e-mail addresses as points of contact) in Argentina, Australia, Austria, Belgium, Canada, Denmark, Germany, Ireland, Japan, Latvia, the Netherlands, Portugal, Singapore, South Africa, Spain, Sweden, Switzerland, and the United Kingdom, as well as online on America Online, CompuServe, Delphi, FidoNet, Genie, the Internet/Usenet/mail servers, Prodigy, and WWIVNet. Analysis In an article analyzing Team OS/2 and its meaning and context, Robert L. Scheier listed several of the factors that led to the success of the group.
These included the creation of a strong group identity with a powerful name, corporate support without corporate direction, the ability of volunteer members to do things that companies couldn't do, keeping it "loose" and relatively unstructured, providing lots of smaller material rewards without compensation, and listening to team members as if they were the "eyes and ears of the public." However, Team OS/2's very lack of structure left it vulnerable. Various journalists have documented a "dirty tricks" campaign by Microsoft. Online, numerous individuals (nicknamed "Microsoft Munchkins" by John C. Dvorak) used pseudonyms to attack OS/2 and manipulate online discussions. Whittle was the target of a widespread online character assassination campaign. Some journalists who were less than enthusiastic about OS/2 received death threats and other nasty emails from numerous sources, identified in taglines as "Team OS/2" without a name; whether these attacks were part of Microsoft's efforts or genuinely came from Team OS/2 was never proven. Ultimately, at least some of Microsoft's efforts were exposed on Will Zachmann's Canopus forum on CompuServe, where one particular account, ostensibly belonging to a "Steve Barkto" (who had been attacking OS/2, David Barnes, Whittle, and other OS/2 fans), was discovered to have been funded by the credit card of Rick Segal, a high-level Microsoft employee and evangelist who had also been active in the forums. James Fallows, a nationally renowned journalist, weighed in to state that the stylistic fingerprint found in the Barkto posts was almost certainly a match for the stylistic fingerprints in the Microsoft evangelist's postings. Will Zachmann sent an open letter to Steve Ballmer, futilely demanding a public investigation into the business practices of the publicly traded Microsoft. Decline At the height of the marketing effort, Team OS/2 consisted of more than ten thousand known members, and countless undocumented members. IBM acknowledged publicly that without Team OS/2, there might not have been a fourth generation ("Warp 4") of the operating system. However, the IBM Marketing Director over the Grass Roots Marketing Department made the decision to meet his headcount-reduction targets by eliminating the entire department - one week before the 1995 Fall COMDEX. Microsoft executives were said to be positively gleeful, and Team OS/2 members worldwide were said to be incredulous. Within months, Whittle and Barlow had left IBM, Conway and Gobeille were reassigned within IBM, and Teamers were crushed by IBM's announcement that the marketing of individual desktop versions would come to a close. Most Team members eventually migrated away from OS/2 to Linux, which offered the power and stability they had come to expect from OS/2; much of what was learned with Team OS/2 inspired at least some in the Linux and open-source movements. Legacy Microsoft attempted to fabricate "Team NT" for COMDEX Fall 1995, but this was widely ridiculed as a blatant attempt at impersonation. "Team NT" members were Microsoft employees, and were called "Team Nice Try" by industry pundits such as Spencer F. Katt (a pen name with various contributors, such as Paul Connolly) in PC Week magazine. When Microsoft was readying the first version of Windows NT (designated "Version 3.1") in 1993, a Texas computer user group (HAL-PC) invited IBM and Microsoft to a public "shootout" between the two operating systems.
Videotape of the two demonstrations was later distributed by IBM and Team OS/2 members. Compared to the dynamic presentation given by David Barnes as he put OS/2 through its paces, the Microsoft presenter and NT showed so poorly that Microsoft demanded that all portions of the NT presentation be cut out of the videotapes which IBM was distributing of the event. This resulted in the issuance of an edited version of the tape, but hundreds of original (complete) copies had already been released. The uncut version of the "OS/2 - NT Shootout" tape has been dubbed the "OS/2 - NT Shootdown" or "The Shootdown of Flight 31". The tape has been used to train professional software and hardware presenters who might face user groups. See also Operating system advocacy References OS/2 Operating system advocacy
Operating System (OS)
484
Osu! osu! is a free-to-play rhythm game created, developed, and published by Dean "peppy" Herbert. Inspired by iNiS' rhythm game Ouendan, it was written in C# on the .NET framework and was released for Microsoft Windows on 16 September 2007. The game has over the years been ported to macOS, Linux, Android and iOS. Aside from Osu! Tatakae! Ouendan, the game has been inspired by titles such as Taiko no Tatsujin, Beatmania IIDX, Elite Beat Agents, O2Jam, StepMania and DJMax. The game is heavily community-oriented, with all beatmaps and playable songs being community-made through the in-game map editor. Four different game modes exist, offering various ways to play a beatmap, which can also be combined with addable modifiers that increase or decrease the difficulty. The original osu!standard mode remains the most popular to date, and as of 2021 the game has over 15,200,000 registered users. Gameplay and features There are four official game modes: "osu!" (also called "osu!standard"), "osu!taiko", "osu!catch", and "osu!mania". Each mode offers a variety of beatmaps: playable songs ranging from "TV sized" anime openings to "marathons" surpassing 7 minutes. In osu!standard, beatmaps consist of three items - hit circles, sliders, and spinners. These items are collectively known as "hit objects" or "circles", and are arranged in different positions on the screen at different points of time during a song. Taiko beatmaps have drumbeats and spinners. Catch beatmaps have fruits and spinners, which are arranged in a horizontally falling manner. Mania beatmaps consist of keys (depicted as a small bar) and holds. The beatmap is then played with accompanying music, simulating a sense of rhythm as the player interacts with the objects to the beat of the music. Each beatmap is accompanied by music and a background. The game can be played using various peripherals: the most common setup is a graphics tablet or computer mouse to control cursor movement, paired with a keyboard or a mini keyboard with only two keys. The game offers a purchasable service called osu!supporter, which grants many extra features to the user. Through a service called osu!direct, players are able to download beatmaps directly from inside the game, without the lengthier process of using a browser. Features include a heart icon beside the username on the official osu! website, additional pending beatmap slots, faster download speeds, access to multiplayer on cutting-edge builds, friend and country-specific leaderboards, one free username change, more in-game customization, a yellow username in the in-game chat, and more customization on one's user page (the "me" tab). osu!supporter does not affect the ranking system or provide any in-game advantage, and it is not a recurring service. Community and competitive play Community events osu! also features different events, such as fan-art and beatmapping contests. Unofficial events and conventions are also held. The biggest unofficial event held in the community is "cavoe's osu! event" (usually referred to as "osu! event" or "COE"), held at the Brabanthallen in 's-Hertogenbosch, the Netherlands. The event was held yearly three times, starting in 2017; however, due to the COVID-19 pandemic, COE 2020 was cancelled. There have also been official stands at TwitchCon and Anime Expo. Tournaments osu! contains three main facets of competition between players. In multiplayer lobbies, up to 16 users play a map simultaneously.
On individual maps, players compete for high scores on global leaderboards or against high scores set by themselves and friends. Players also compete through their ranks, which are calculated by accumulating "performance points" (pp). pp is based on a map's difficulty and the player's performance on it. In July 2019, a player, Vaxei, exceeded 1,000pp for the first time, followed by another player, idke, less than twenty-four hours later. Since 2011, there have been nine annual "osu! World Cups" (usually abbreviated as "OWC"), one for each game mode (osu!mania having two, for four-key and seven-key). Teams for World Cups are country-based, with up to eight players per team. There are also many different community-hosted tournaments, differing in rank range, types of maps played, and how the teams are composed. Winners of tournaments typically receive prizes such as cash, merchandise, profile badges and/or osu!supporter subscriptions. Adaptations osu!stream In 2011, osu!stream was released as an adaptation of osu! for iOS devices running iOS 6 and later, also developed by Dean Herbert. The main difference between osu! and osu!stream is that osu!stream beatmaps are not user-created and are instead made by the developers of osu!stream. The version also includes some new gameplay elements. On 26 February 2020, Herbert announced that he had released the source code and planned to halt development of the game, releasing one final update that made all the levels free to download. osu!lazer osu!lazer is a free and open-source remake of the original game client under heavy development. The stable version was originally projected to come out in 2017; however, not all features were working by then. The development of osu!lazer started in 2015, and development versions of osu!lazer are currently available for testing on Microsoft Windows, macOS, Linux, Android, and iOS. osu!lazer is written entirely in .NET (formerly .NET Core). Related projects osu!framework osu!framework is an open-source game framework developed with osu!lazer in mind. The goal of osu!framework development is to create a versatile and accessible game framework that goes further than most, providing features out of the box such as graphics, advanced input processing, and text rendering. McOsu McOsu is an open-source game client designed to play osu!standard beatmaps, available on Windows, Linux, macOS, and the Nintendo Switch. The focus of McOsu is to provide an unofficial osu! client for practice, featuring tools that allow players to retry specific parts of beatmaps. McOsu also offers virtual reality support. This game client does not allow players to gain "performance points" or to increase their official ranking. Reception Jeuxvideo.com reviewed osu! favorably with 18/20 points in 2015. In 2010, MMOGames.com reviewer Daniel Ball said that while the game was very similar to Elite Beat Agents, it was differentiated by its community's large library of high-quality community-made content and customization. osu! has been used and recommended by esports players such as Ninja and EFFECT as a way to warm up and practice their aim. Notes References External links osu!lazer GitHub page Official osu! wiki 2007 video games IOS games MacOS games Music video games Rhythm games Open-source video games Creative Commons-licensed video games Video games developed in Australia Windows games Windows Phone games Software using the MIT license
Operating System (OS)
485
UNIX/32V UNIX/32V was an early version of the Unix operating system from Bell Laboratories, released in June 1979. 32V was a direct port of the Seventh Edition Unix to the DEC VAX architecture. Overview Before 32V, Unix had primarily run on DEC PDP-11 computers. The Bell Labs group that developed the operating system was dissatisfied with DEC, so its members refused DEC's offer to buy a VAX when the machine was announced in 1977; they had already begun a Unix port to the Interdata 8/32 instead. DEC then approached a different Bell Labs group in Holmdel, New Jersey, which accepted the offer and started work on what was to become 32V. The port, performed by Tom London and John F. Reiser, was made possible by work done between the Sixth and Seventh Editions of the operating system to decouple it from its "native" PDP-11 environment. The 32V team first ported the C compiler (Johnson's pcc), adapting an assembler and loader written for the Interdata 8/32 version of Unix to the VAX. They then ported the April 15, 1978 version of Unix, finding in the process that "[t]he (Bourne) shell [...] required by far the largest conversion effort of any supposedly portable program, for the simple reason that it is not portable." UNIX/32V was released without paging virtual memory, retaining only the swapping architecture of the Seventh Edition. A virtual memory system was added at Berkeley by Bill Joy and Özalp Babaoğlu in order to support Franz Lisp; this was released to other Unix licensees as the Third Berkeley Software Distribution (3BSD) in 1979. Thanks to the popularity of the two systems' successors, 4BSD and UNIX System V, UNIX/32V is an antecedent of nearly all modern Unix systems. See also Ancient UNIX References Further reading Marshall Kirk McKusick and George V. Neville-Neil, The Design and Implementation of the FreeBSD Operating System (Boston: Addison-Wesley, 2004), pp. 4–6. External links The Unix Heritage Society (TUHS), a website dedicated to the preservation and maintenance of historical UNIX systems Complete distribution of 32V with source code Source code of the 32V kernel Installation instructions and download for SimH An MS Windows program that installs the SIMH emulator and a UNIX/32V image Information about running UNIX/32V in SIMH Bell Labs Unices Discontinued operating systems 1979 software
Operating System (OS)
486
Ultra-large-scale systems Ultra-large-scale system (ULSS) is a term used in fields including computer science, software engineering and systems engineering to refer to software-intensive systems with unprecedented amounts of hardware, lines of source code, numbers of users, and volumes of data. The scale of these systems gives rise to many problems: they will be developed and used by many stakeholders across multiple organizations, often with conflicting purposes and needs; they will be constructed from heterogeneous parts with complex dependencies and emergent properties; they will be continuously evolving; and software, hardware and human failures will be the norm, not the exception. The term 'ultra-large-scale system' was introduced by Northrop and others to describe challenges facing the United States Department of Defense. The term has subsequently been used to discuss challenges in many areas, including the computerization of financial markets. The term 'ultra-large-scale system' (ULSS) is sometimes used interchangeably with the term 'large-scale complex IT system' (LSCITS). These two terms were introduced at similar times to describe similar problems, the former being coined in the USA and the latter in the UK. Background The term ultra-large-scale system was introduced in a 2006 report from the Software Engineering Institute at Carnegie Mellon University authored by Linda Northrop and colleagues. The report explained that software-intensive systems are reaching unprecedented scales (by measures including lines of code; numbers of users and stakeholders; purposes the system is put to; amounts of data stored, accessed, manipulated, and refined; numbers of connections and interdependencies among components; and numbers of hardware elements). When systems become ultra-large-scale, traditional approaches to engineering and management will no longer be adequate. The report argues that the problem is no longer one of engineering systems or systems of systems, but of engineering "socio-technical ecosystems". In 2013, Linda Northrop and her team gave a talk reviewing the outcome of the 2006 study against the reality of 2013. In summary, the talk concluded that (a) ULS systems are now in the midst of society, and the changes to the current social fabric and institutions are significant; (b) the original 2006 research team was probably too conservative in its report; (c) recent technologies have exacerbated the pace of scale growth; and (d) there are great opportunities. At a similar time to the publication of the report by Northrop and others, a research and training initiative was being initiated in the UK on Large-scale Complex IT Systems. Many of the challenges recognized in this initiative were the same as, or similar to, those recognized as the challenges of ultra-large-scale systems. Greg Goth quotes Dave Cliff, director of the UK initiative, as saying "The ULSS proposal and the LSCITS proposal were written entirely independently, yet we came to very similar conclusions about what needs to be done and about how to do it". A difference pointed out by Ian Sommerville is that the UK initiative began with a 5- to 10-year vision, while that of Northrop and her co-authors was much longer term. This seems to have led to two slightly different perspectives on ultra-large-scale systems. For example, Richard Gabriel's perspective is that ultra-large-scale systems are desirable but currently impossible to build due to limitations in the fields of software design and systems engineering.
On the other hand, Ian Sommerville's perspective is that ultra-large-scale systems are already emerging (for example in air traffic control), the key problem being not how to achieve them but how to ensure they are adequately engineered. Characteristics of an ultra-large-scale system Ultra-large-scale systems share the characteristics of systems of systems (systems that have operationally independent sub-systems; managerially independent components and sub-systems; evolutionary development; emergent behavior; and geographic distribution). But in addition to these, the Northrop report argues that a ULSS will:
- Have decentralized data, development, evolution and operational control
- Address inherently conflicting, unknowable, and diverse requirements
- Evolve continuously while it is operating, with different capabilities being deployed and removed
- Contain heterogeneous, inconsistent and changing elements
- Erode the people-system boundary; people will not just be users, but elements of the system, affecting its overall emergent behavior
- Encounter failure as the norm, rather than the exception, with it being extremely unlikely that all components are functioning at any one time
- Require new paradigms for acquisition and policy, and new methods for control
The Northrop report states that "the sheer scale of ULS systems will change everything. ULS systems will necessarily be decentralized in a variety of ways, developed and used by a wide variety of stakeholders with conflicting needs, evolving continuously, and constructed from heterogeneous parts. People will not just be users of a ULS system; they will be elements of the system. The realities of software and hardware failures will be fundamentally integrated into the design and operation of ULS systems. The acquisition of a ULS system will be simultaneous with its operation and will require new methods for control. In ULS systems, these characteristics will dominate. Consequently, ULS systems will place unprecedented demands on software acquisition, production, deployment, management, documentation, usage, and evolution practices." Domains in which ultra-large-scale systems are emerging The term ultra-large-scale system was introduced by Northrop and others to discuss challenges faced by the United States Department of Defense in engineering software-intensive systems. In 2008 Greg Goth wrote that although Northrop's report focused on the US military's future requirements, "its description of how the fundamental principles of software design will change in a global economy … is finding wide appeal". The term is now used to discuss problems in several domains. Defense The Northrop report argued that "the U.S. Department of Defense (DoD) has a goal of information dominance … this goal depends on increasingly complex systems characterized by thousands of platforms, sensors, decision nodes, weapons, and warfighters connected through heterogeneous wired and wireless networks. … These systems will push far beyond the size of today's systems by every measure … They will be ultra-large-scale systems." Financial trading Following the flash crash, Cliff and Northrop have argued "The very high degree of interconnectedness in the global markets means that entire trading systems, implemented and managed separately by independent organizations, can rightfully be considered as significant constituent entities in the larger global super-system.
… The sheer number of human agents and computer systems connected within the global financial-markets system-of-systems is so large that it is an instance of an ultra-large-scale system, and that largeness-of-scale has significant effects on the nature of the system". Healthcare Kevin Sullivan has stated that the US healthcare system is "clearly an ultra-large-scale system" and that building national-scale cyber-infrastructure for healthcare "demands not just a rigorous, modern software and systems engineering effort, but an approach at the cutting edge of our understanding of information processing systems and their development and deployment in complex socio-technical environments". Others Other domains said to be seeing the rise of ultra-large-scale systems include government, transport systems (for example, air traffic control systems), energy distribution systems (for example, smart grids) and large enterprises. Research Fundamental gaps in our current understanding of software and software development at the scale of ULS systems present profound impediments to the technically and economically effective achievement of significant gains in core system functionality. These gaps are strategic, not tactical. They are unlikely to be addressed adequately by incremental research within established categories. Rather, we require a broad new conception of both the nature of such systems and new ideas for how to develop them. We will need to look at them differently, not just as systems or systems of systems, but as socio-technical ecosystems. We will face fundamental challenges in the design and evolution, orchestration and control, and monitoring and assessment of ULS systems. These challenges require breakthrough research. ULSS research in the United States The Northrop report proposed a ULS systems research agenda for an interdisciplinary portfolio of research in at least the following areas:
- Human interaction – People are key participants in ULS systems. Many problems in complex systems today stem from failures at the individual and organizational level. Understanding ULS system behavior will depend on the view that humans are elements of a socially constituted computational process. This research involves anthropologists, sociologists, and social scientists conducting detailed socio-technical analyses of user interactions in the field, with the goal of understanding how to construct and evolve such socio-technical systems effectively.
- Computational emergence – ULS systems must satisfy the needs of participants at multiple levels of an organization. These participants will often behave opportunistically to meet their own objectives. Some aspects of ULS systems will be "programmed" by properly incentivizing and constraining behavior rather than by explicitly prescribing it. This research area explores the use of methods and tools based on economics and game theory (e.g., mechanism design) to ensure globally optimal ULS system behavior by exploiting the strategic self-interests of the system's constituencies. This research area also includes exploring metaheuristics and digital evolution to augment the cognitive limits of human designers, so they can manage ongoing ULS system adaptation more effectively.
- Design – Current design theory, methods, notations, tools, and practices, and the acquisition methods that support them, are inadequate to design ULS systems effectively. This research area broadens the traditional technology-centric definition of design to include people and organizations; social, cognitive, and economic considerations; and design structures such as design rules and government policies. It involves research in support of designing ULS systems from all of these points of view and at many levels of abstraction, from the hardware to the software to the people and organizations in which they work.
- Computational engineering – New approaches will be required to enable intellectual control at an entirely new level of scope and scale for system analysis, design, and operation. ULS systems will be defined in many languages, each with its own abstractions and semantic structures. This research area focuses on evolving the expressiveness of representations to accommodate this semantic diversity. Because the complexity of ULS systems will challenge human comprehension, this area also focuses on providing automated support for computing the behavior of components and their compositions in systems and for maintaining desired properties as ULS systems evolve.
- Adaptive system infrastructure – ULS systems require an infrastructure that permits organizations in distributed locations to work in parallel to develop, select, deploy, and evolve system components. This research area investigates integrated development environments and runtime platforms that support the decentralized nature of ULS systems. This research also focuses on technologies, methods, and theories that will enable ULS systems to be developed in their deployment environments.
- Adaptable and predictable system quality – ULS systems will be long-running and must operate robustly in environments fraught with failures, overloads, and attacks. These systems must maintain robustness in the presence of adaptations that are not centrally controlled or authorized. Managing traditional qualities such as security, performance, reliability, and usability is necessary but not sufficient to meet the challenges of ULS systems. This research area focuses on how to maintain quality in a ULS system in the face of continuous change, ongoing failures, and attacks. It also includes identifying, predicting, and controlling new indicators of system health (akin to the U.S. gross domestic product) that are needed because of the scale of ULS systems.
- Policy, acquisition, and management – Policy and management frameworks for ULS systems must address organizational, technical, and operational policies at all levels. Rules and policies must be developed and automated to enable fast and effective local action while preserving global capabilities. This research area focuses on transforming acquisition policies and processes to accommodate the rapid and continuous evolution of ULS systems by treating suppliers and supply chains as intrinsic and essential components of a ULS system.
The proposed research does not supplant current, important software research but rather significantly expands its horizons. Moreover, because it is focused on systems of the future, the SEI team purposely avoided couching descriptions in terms of today's technology. The envisioned outcome of the proposed research is a spectrum of technologies and methods for developing these systems of the future, with national-security, economic, and societal benefits that extend far beyond ULS systems themselves.
ULSS research in the UK The UK's research programme in Large-scale Complex IT Systems has been concerned with issues around ULSS development and considers that an LSCITS (large-scale complex IT system) shares many of the characteristics of a ULSS. ULSS research in China The National Natural Science Foundation of China has outlined a five-year project for researchers to study the assembly of ultra-large spacecraft. Although vaguely specified, the project would have applications for potential megaprojects, including colossal space-based solar power stations. Work on an Ultra-Large Aperture On-Orbit Assembly Project under the Chinese Academy of Sciences (CAS), with support from the Chinese Ministry of Science and Technology, is already underway. See also System of systems Complex adaptive system Systems theory Systems design Software architecture Emergence Self-organization Sociotechnical systems Large-scale Complex IT Systems References External links ULS Systems – Carnegie Mellon Software Engineering Institute's program for Ultra Large Scale Systems Ultra-Large-Scale Systems: The Software Challenge of the Future – the 2006 report for a 12-month study of ultra-large-scale systems software, sponsored by the United States Department of Defense ULS Systems Glossary Stepping Up to Long-Term Research – IEEE Distributed Systems Online article on Ultra-Large Systems research Why Multi-Core is Easy and Internet is Hard – a paper (and discussion) that touches on topics important in ULS research The Agoric Papers – archived copies of three papers on capability-based market-oriented computing (concepts that are the subject of some ULS Systems research), written by Mark S. Miller and K. Eric Drexler Delivering Ultra-large Scale Services – a Canadian research project investigating the challenges involved in delivering ultra-large-scale services Computer engineering Computer systems Electronic design automation Military acquisition Systems analysis Systems engineering Systems theory
Operating System (OS)
487
GNU variants GNU variants (also called GNU distributions or distros for short) are operating systems based upon the GNU operating system (the Hurd kernel, the GNU C library, system libraries, and application software like GNU coreutils, bash, GNOME, the Guix package manager, etc.). According to the GNU project and others, these also include most operating systems using the Linux kernel and a few others using BSD-based kernels. GNU users usually obtain their operating system by downloading GNU distributions, which are available for a wide variety of systems ranging from embedded devices (for example, LibreCMC) and personal computers (for example, Debian GNU/Hurd) to powerful supercomputers (for example, Rocks Cluster Distribution). Hurd kernel Hurd is the official kernel developed for the GNU system (before Linux-libre also became an official GNU package). Debian GNU/Hurd was discussed for release as a technology preview with Debian 7.0 Wheezy, but these plans were discarded due to the immature state of the system. The maintainers of Debian GNU/Hurd nevertheless decided to publish an unofficial release on the release date of Debian 7.0. Debian GNU/Hurd is not yet considered to provide the performance and stability expected from a production system. Among the open issues are incomplete implementations of Java and X.org graphical user interfaces and limited hardware driver support. About two thirds of the Debian packages have been ported to Hurd. Arch Hurd is a derivative work of Arch Linux, porting it to the GNU Hurd system with packages optimised for the Intel P6 architecture. Its goal is to provide an Arch-like user environment (BSD-style init scripts, the pacman package manager, rolling releases, and a simple set-up) on the GNU Hurd which is stable enough for at least occasional use. Currently it provides a LiveCD for evaluation purposes and installation guides for LiveCD and conventional installation. Linux kernel The term GNU/Linux or GNU+Linux is used by the FSF and its supporters to refer to an operating system where the Linux kernel is distributed with GNU system software. Such distributions are the primary installed base of GNU packages and programs, and also of Linux. The most notable official use of this term for a distribution is Debian GNU/Linux. As of 2018, the only GNU variants recommended by the GNU Project for regular use are Linux distributions committed to the Free System Distribution Guidelines; most of these refer to themselves as "GNU/Linux" (like Debian), and actually use a deblobbed version of the Linux kernel (like the Linux-libre kernel) rather than the mainline Linux kernel. BSD kernels Debian GNU/kFreeBSD is an operating system for IA-32 and x86-64 computer architectures. It is a distribution of GNU with Debian package management and the kernel of FreeBSD. The k in kFreeBSD is an abbreviation for "kernel of", and reflects the fact that only the kernel of the complete FreeBSD operating system is used. The operating system was officially released with Debian Squeeze (6.0) on February 6, 2011. One Debian GNU/kFreeBSD live CD is Ging, which is no longer maintained. Debian GNU/NetBSD was an experimental port of GNU userland applications to the NetBSD kernel. No official release of this operating system was made; although work was conducted on ports for the IA-32 and DEC Alpha architectures, it has not seen active maintenance since 2002 and is no longer available for download. As of September 2020, the GNU Project does not recommend or endorse any BSD operating systems.
OpenSolaris (Illumos) kernel Nexenta OS is the first distribution that combines the GNU userland (with the exception of libc; OpenSolaris' libc is used) and Debian's packaging and organisation with the OpenSolaris kernel. Nexenta OS is available for IA-32 and x86-64 based systems. Nexenta Systems, Inc initiated the project and sponsors its continued development. Nexenta OS is not considered a GNU variant, due to the use of OpenSolaris libc. Multiple Illumos distributions use GNU userland by default. Darwin kernel Windows NT kernel The Cygwin project is an actively-developed compatibility layer in the form of a C library providing a substantial part of the POSIX API functionality for Windows, as well as a distribution of GNU and other Unix-like programs for such an ecosystem. It was first released in 1995 by Cygnus Solutions (now Red Hat). In 2016 Microsoft and Canonical added an official compatibility layer to Windows 10 that translates Linux kernel calls into Windows NT ones, the reverse of what Wine does. This allows ELF executables to run unmodified on Windows, and is intended to provide web developers with the more familiar GNU userland on top of the Windows kernel. The combination has been dubbed "Linux for Windows", even though Linux (i.e. the operating system family defined by its common use of the Linux kernel) is absent. See also Comparison of Linux distributions GNU/Linux naming controversy References External links Arch Hurd Superunprivileged.org GNU/Hurd-based Live CD Debian GNU/kFreeBSD Debian GNU/NetBSD #debian-kbsd on OFTC Ging live CD Free software operating systems variants
Operating System (OS)
488
VSE (operating system) z/VSE (Virtual Storage Extended) is an operating system for IBM mainframe computers, the latest one in the DOS/360 lineage, which originated in 1965. Announced February 1, 2005 by IBM as the successor to VSE/ESA 2.7, the then-new z/VSE was named to reflect the new "System z" branding for IBM's mainframe product line. DOS/VSE was introduced in 1979 as a successor to DOS/VS; in turn, DOS/VSE was succeeded by VSE/SP version 1 in 1983, and VSE/SP version 2 in 1985. It is less common than the prominent z/OS and is mostly used on smaller machines. In the late 1980s, there was a widespread perception among VSE customers that IBM was planning to discontinue VSE and migrate its customers to MVS instead, although IBM relented and agreed to continue to produce new versions of VSE. Overview DOS/360 originally used 24-bit addressing. As the underlying hardware evolved, VSE/ESA acquired 31-bit addressing capability. IBM released z/VSE Version 4, which requires 64-bit z/Architecture hardware and can use 64-bit real mode addressing, in 2007. With z/VSE 5.1 (available since 2011), z/VSE introduced 64-bit virtual addressing and memory objects (chunks of virtual storage) that are allocated above 2 GB. The latest shipping release is z/VSE 6.2.0, available since December 2017, which includes the new CICS Transaction Server for z/VSE 2.2. User interfaces Job Control Language (JCL) z/VSE's primary user interface for batch processing is a Job Control Language (JCL) that continues the positional-parameter orientation of earlier DOS systems. There is also a separate, special interface for system console operators. Beyond batch, z/VSE, like z/OS systems, has traditionally provided 3270 terminal user interfaces. However, most z/VSE installations have at least begun to add Web browser access to z/VSE applications. z/VSE's TCP/IP is a separately priced option for historic reasons, and is available in two different versions from two vendors. Both vendors provide a full-function TCP/IP stack with applications, such as telnet and FTP. One TCP/IP stack provides IPv4 communication only, the other IPv4 and IPv6 communication. In addition to the commercially available TCP/IP stacks for z/VSE, IBM also provides the Linux Fastpath method, which uses IUCV socket or Hipersockets connections to communicate with a Linux guest also running on the mainframe. Using this method, the z/VSE system is able to fully exploit the native Linux TCP/IP stack. IBM recommends that z/VSE customers run Linux on IBM Z alongside, on the same physical system, to provide another 64-bit application environment that can access and extend z/VSE applications and data via Hipersockets using a wide variety of middleware. CICS, one of the most widely used enterprise transaction processing systems, is extremely popular among z/VSE users and now implements recent innovations such as Web services. DB2 is also available and popular. Device support z/VSE can use ECKD, FBA and SCSI devices. Fibre Channel access to SCSI storage devices was initially available on z/VSE 3.1 on a limited basis (including on IBM's Enterprise Storage Server (ESS) and the IBM System Storage DS8000 and DS6000 series), but the limitations disappeared with 4.2 (thus including the IBM Storwize V7000, V5000, V3700 and V9000). Older z/VSE versions The last VSE/ESA release, VSE/ESA 2.7, has not been supported since February 28, 2007. z/VSE 3.1 was the last release compatible with 31-bit mainframes, as opposed to z/VSE Versions 4, 5 and 6; z/VSE 3.1 was supported through 2009.
z/VSE Version 4 has not been supported since October 2014 (end of service for z/VSE 4.3). For VSE/ESA, DOS/VSE, and VSE/SP, see History of IBM mainframe operating systems#DOS/VS See also z/OS z/TPF z/VM History of IBM mainframe operating systems#DOS/VS History of IBM mainframe operating systems References External links IBM z/VSE website IBM mainframe operating systems
Operating System (OS)
489
Objective Interface Systems Objective Interface Systems, Inc. is a computer communications software and hardware company. The company's headquarters are in Herndon, Virginia, USA. OIS develops, manufactures, licenses, and supports software and hardware products that generally fit into one or more of the following markets: real-time communications middleware software and hardware; embedded communications middleware software and hardware; high-performance communications middleware software and hardware; and secure communications software and hardware. A popular OIS product is the ORBexpress CORBA middleware. ORBexpress is most popular in the real-time and embedded computer markets. OIS supports the software version of ORBexpress on more than 2,200 computing platforms (combinations of versions of CPU families, operating systems, and language compilers). OIS also has FPGA versions of ORBexpress to allow hardware blocks on an FPGA to interoperate with software. OIS engineers invented a form of communications security called the Partitioning Communication System, or PCS. The PCS is a technical architecture that protects multiple information flows from influencing each other when communicated on a single network wire. The PCS is best implemented on a software separation operating system such as SELinux or a separation kernel. OIS's communications products are most frequently found in the enterprise, telecom/datacom, mil/aero, medical, robotics, process control and transportation industries. Objective Interface is a privately held company and has developed software products since 1989 and hardware products since 2001. The company is actively involved with various standards groups, including: Common Criteria, IEEE, the Network Centric Operations Industry Consortium, the Object Management Group (OMG), The Open Group, and the Wireless Innovation Forum. Corporate headquarters OIS headquarters is located at 220 Spring Street, Suite 530, Herndon, VA, 20170-6201. References External links Objective Interface Systems - official website Object Management Group (OMG) The Open Group Wireless Innovation Forum Common Object Request Broker Architecture Companies based in Fairfax County, Virginia Software companies based in Virginia Computer hardware companies Software companies of the United States
Operating System (OS)
490
Input/output In computing, input/output (I/O, or informally io or IO) is the communication between an information processing system, such as a computer, and the outside world, possibly a human or another information processing system. Inputs are the signals or data received by the system and outputs are the signals or data sent from it. The term can also be used as part of an action; to "perform I/O" is to perform an input or output operation. I/O devices are the pieces of hardware used by a human (or other system) to communicate with a computer. For instance, a keyboard or computer mouse is an input device for a computer, while monitors and printers are output devices. Devices for communication between computers, such as modems and network cards, typically perform both input and output operations. The designation of a device as either input or output depends on perspective. Mice and keyboards take physical movements that the human user outputs and convert them into input signals that a computer can understand; the output from these devices is the computer's input. Similarly, printers and monitors take signals that computers output as input, and they convert these signals into a representation that human users can understand. From the human user's perspective, the process of reading or seeing these representations is receiving output; this type of interaction between computers and humans is studied in the field of human–computer interaction. A further complication is that a device traditionally considered an input device, e.g., a card reader or keyboard, may accept control commands (e.g., select stacker, light keyboard lights), while a device traditionally considered an output device may provide status data (e.g., low toner, out of paper, paper jam). In computer architecture, the combination of the CPU and main memory, to which the CPU can read or write directly using individual instructions, is considered the brain of a computer. Any transfer of information to or from the CPU/memory combination, for example by reading data from a disk drive, is considered I/O. The CPU and its supporting circuitry may provide memory-mapped I/O that is used in low-level computer programming, such as in the implementation of device drivers, or may provide access to I/O channels. An I/O algorithm is one designed to exploit locality and perform efficiently when exchanging data with a secondary storage device, such as a disk drive. Interface An I/O interface is required whenever the I/O device is driven by a processor. Typically a CPU communicates with devices via a bus. The interface must have the necessary logic to interpret the device address generated by the processor. Handshaking should be implemented by the interface using appropriate commands (like BUSY, READY, and WAIT), and the processor can communicate with an I/O device through the interface. If different data formats are being exchanged, the interface must be able to convert serial data to parallel form and vice versa. Because it would be a waste for a processor to be idle while it waits for data from an input device, there must be provision for generating interrupts and the corresponding type numbers for further processing by the processor if required. A computer that uses memory-mapped I/O accesses hardware by reading and writing to specific memory locations, using the same assembly language instructions that the computer would normally use to access memory.
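As an illustration of the idea, memory-mapped device access in C is commonly expressed through volatile pointers at an address fixed by the platform's memory map. The sketch below is hypothetical: the base address and register layout are invented for illustration and would in practice come from the hardware's documentation:

#include <stdint.h>

/* Hypothetical UART registers, memory-mapped at a made-up address.
   'volatile' forces every access to actually reach the device instead
   of being cached in a CPU register or optimized away. */
#define UART_BASE 0x10000000UL

struct uart_regs {
    volatile uint32_t data;    /* writing here transmits one byte     */
    volatile uint32_t status;  /* bit 0 set means transmitter is busy */
};

static struct uart_regs *const uart = (struct uart_regs *)UART_BASE;

void uart_putc(char c) {
    while (uart->status & 1u)
        ;                       /* busy-wait until the device is ready */
    uart->data = (uint8_t)c;    /* an ordinary store becomes device I/O */
}

On a system that instead uses port-mapped I/O (described below), the same write would be performed with a dedicated I/O instruction such as x86's out.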
An alternative method is instruction-based I/O, which requires that a CPU have specialized instructions for I/O. Both input and output devices have a data processing rate that can vary greatly. Because some devices are able to exchange data at very high speeds, direct memory access (DMA), which transfers data without the continuous aid of a CPU, is required. Higher-level implementation Higher-level operating system and programming facilities employ separate, more abstract I/O concepts and primitives. For example, most operating systems provide application programs with the concept of files. The C and C++ programming languages, and operating systems in the Unix family, traditionally abstract files and devices as streams, which can be read or written, or sometimes both. The C standard library provides functions for manipulating streams for input and output. In the context of the ALGOL 68 programming language, the input and output facilities are collectively referred to as transput. The ALGOL 68 transput library recognizes the following standard files/devices: stand in, stand out, stand error and stand back. An alternative to special primitive functions is the I/O monad, which permits programs to just describe I/O, with the actions carried out outside the program. This is notable because such functions would otherwise introduce side effects to the language; describing I/O in this way allows purely functional programming to be practical. Channel I/O Channel I/O requires the use of instructions that are specifically designed to perform I/O operations. The I/O instructions address the channel or the channel and device; the channel asynchronously accesses all other required addressing and control information. This is similar to DMA, but more flexible. Port-mapped I/O Port-mapped I/O also requires the use of special I/O instructions. Typically one or more ports are assigned to the device, each with a special purpose. The port numbers are in a separate address space from that used by normal instructions. Direct memory access Direct memory access (DMA) is a means for devices to transfer large chunks of data to and from memory independently of the CPU. See also Input device Output device Asynchronous I/O I/O bound References External links
Operating System (OS)
491
Multi-Environment Real-Time Multi-Environment Real-Time (MERT), later renamed UNIX Real-Time (UNIX-RT), is a hybrid time-sharing and real-time operating system developed in the 1970s at Bell Labs for use in embedded minicomputers (especially PDP-11s). A version named Duplex Multi Environment Real Time (DMERT) was the operating system for the AT&T 3B20D telephone switching minicomputer, designed for high availability; DMERT was later renamed Unix RTR (Real-Time Reliable). A generalization of Bell Labs' time-sharing operating system Unix, MERT featured a redesigned, modular kernel that was able to run Unix programs and privileged real-time computing processes. These processes' data structures were isolated from other processes, with message passing being the preferred form of interprocess communication (IPC), although shared memory was also implemented. MERT also had a custom file system with special support for large, contiguous, statically sized files, as used in real-time database applications. The design of MERT was influenced by Dijkstra's THE, Hansen's Monitor, and IBM's CP-67. The MERT operating system was a four-layer design, in decreasing order of protection:
- Kernel: resource allocation of memory, CPU time and interrupts
- Kernel-mode processes, including input/output (I/O) device drivers, the file manager, the swap manager, and the root process that connects the file manager to the disk (usually combined with the swap manager)
- Operating system supervisor
- User processes
The standard supervisor was MERT/UNIX, a Unix emulator with an extended system call interface and shell that enabled the use of MERT's custom IPC mechanisms, although an RSX-11 emulator also existed. Kernel and non-kernel processes One interesting feature that DMERT - UNIX-RTR introduced was the notion of kernel processes. This is connected with its microkernel-like architectural roots. In support of this, there is a separate command, /bin/kpkill (rather than /bin/kill), that is used to send signals to kernel processes. It is likely there are two different system calls as well (kill(2) and kpkill(2), the first to end a user process and the second to end a kernel process). It is unknown how much of the normal userland signaling mechanism is in place in /bin/kpkill; assuming there is a system call for it, it is not known whether one can send various signals or simply send one. Also unknown is whether the kernel process has a way of catching the signals that are delivered to it. It may be that the UNIX-RTR developers implemented an entire signal and messaging application programming interface (API) for kernel processes. File system bits If one has root on a UNIX-RTR system, they will surely soon find that their ls -l output is a bit different than expected. Namely, there are two completely new bits in the drwxr-xr-x field. They both appear in the first column, and are C (contiguous) and x (extents). Both of these have to do with contiguous data, though one may relate to inodes and the other to non-metadata. Example ls -l output (which does not include group names, as ls -l did not print them at the time):

drwxr-xr-x root 64 Sun Dec 4 2003 /cft
xrwxr-xr-x root 64 Mon Dec 11 2013 /no5text
Crwxr-xr-x root 256 Tue Dec 12 2014 /no5data

Lucent emulator and VCDX AT&T, then Lucent, and now Alcatel-Lucent, has been the vendor of the SPARC-based, Solaris-OEM package ATT3bem (which lives on Solaris SPARC in /opt/ATT3bem).
This is a full 3B21D emulator (known as the 3B21E, the system behind the Very Compact Digital eXchange, or VCDX) which is meant to provide a production environment for the Administrative Module (AM) portion of the 5ESS switch. There are parts of the 5ESS that are not part of the 3B21D microcomputer at all: SMs and CMs. Under the emulator the workstation is referred to as the 'AW' (Administrative Workstation). The emulator installs with Solaris 2.6/SPARC and also comes with Solstice X.25 9.1 (SUNWconn), formerly known as SunLink X.25. The reason for packaging the X.25 stack with the 3B21D emulator is that the Bell System, regional Bell operating companies, and ILECs still use X.25 networks for their most critical systems (telephone switches may live on X.25 or Datakit VCS II, a similar network developed at Bell Labs), as these do not have TCP/IP stacks. The AT&T/Alcatel-Lucent emulator is not an easy program to get working correctly, even if one manages to obtain a 'dd' image pulled from a working 5ESS hard disk. First, there are quite a few bugs the user must navigate around in the installation process. Once this is done, there is a configuration file which maps real peripherals to emulated peripherals, but the documentation on the CD describing it is scant. The name of this file is em_devmap for SS5s, and em_devmap.ultra for Ultra60s. In addition, one of the bugs mentioned in the install process is a broken script to fdisk and image hard disks correctly: certain things need to be written to certain offsets, because the /opt/ATT3bem/bin/3bem process expects, or seems to need, these hard-coded locations. The emulator runs on SPARCstation-5s and UltraSPARC-60s. Measured in MIPS, the 3B21D is likely emulated faster on a modern SPARC than an actual 3B21D microcomputer's processor runs. The most difficult thing about having the emulator is acquiring a DMERT/UNIX-RTR hard disk image to actually run. The operating system for the 5ESS is restricted to a few people, employees and customers of the vendor, who either work on it or write the code for it. Having an image of a running system – which can be obtained on eBay, pulled from a working 3B21D, and imaged to a file or put into an Ultra60 or SPARCstation-5 – provides the resources to attempt to run the UNIX-RTR system. The uname -a output under the Bourne shell on UNIX-RTR (Real-Time Reliable) is: # uname -a <3B21D> <3B21D> On 3B20D systems it will print 20 instead of 21, though 3B20Ds are rare; nowadays most non-VCDX 5ESSs are 3B21D hardware, not 3B20D (although 3B20Ds will run the software fine). The 3B20D uses the WE32000 processor while the 21 uses the WE32100. There may be some other differences as well. One thing unusual about the processor is the direction the stack grows: up. Manual page for falloc (which may be responsible for Contiguous or eXtent file space allocation): FALLOC(1) 5ESS UNIX FALLOC(1) NAME falloc - allocate a contiguous file SYNOPSIS falloc filename size DESCRIPTION A contiguous file of the specified filename is allocated to be of 'size' (512-byte) blocks. DIAGNOSTICS The command complains if a needed directory is not searchable, the final directory is not writable, the file already exists, or there is not enough space for the file.
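The falloc command above is specific to UNIX-RTR, but the general idea of pre-allocating file space can be sketched with the standard POSIX interface posix_fallocate. The following is a minimal, hedged analogy to falloc's "filename size" usage, not UNIX-RTR code; unlike falloc, posix_fallocate guarantees only that the blocks are reserved, not that they are contiguous on disk:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Rough analogue of falloc(1): reserve 'size' 512-byte blocks
   for 'filename' using posix_fallocate. */
int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s filename size\n", argv[0]);
        return 1;
    }
    long blocks = atol(argv[2]);
    int fd = open(argv[1], O_CREAT | O_EXCL | O_WRONLY, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* posix_fallocate returns an error number rather than setting errno. */
    int err = posix_fallocate(fd, 0, (off_t)blocks * 512);
    if (err != 0) {
        fprintf(stderr, "posix_fallocate failed: error %d\n", err);
        return 1;
    }
    return close(fd) == 0 ? 0 : 1;
}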
UNIX-RTR includes an atomic file swap command (atomsw, manual page below): ATOMSW(1) 5ESS UNIX ATOMSW(1) NAME atomsw - Atomic switch files SYNOPSIS atomsw file1 file2 DESCRIPTION Atomic switch of two files. The contents, permissions, and owners of two files are switched in a single operation. In case of a system fault during the operation of this command, file2 will either have its original contents, permissions and owner, or will have file1's contents, permissions and owner. Thus, file2 is considered precious. File1 may be truncated in case of a system fault. RESTRICTIONS Both files must exist. Both files must reside on the same file system. Neither file may be a "special device" (for example, a TTY port). To enter this command from the craft shell, switching file "/tmp/abc" with file "/tmp/xyz", enter for MML: EXC:ENVIR:UPROC,FN="/bin/atomsw",ARGS="/tmp/abc"-"/tmp/xyz"; For PDS enter: EXC:ENVIR:UPROC,FN"/bin/atomsw",ARGS("/tmp/abc","/tmp/xyz")! NOTE File 1 may be lost during a system fault. FILES /bin/atomsw References Real-time operating systems Bell Labs Unices Microkernel-based operating systems Microkernels
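Mainstream Unix never adopted atomsw, but Linux later gained a comparable primitive: renameat2(2) with the RENAME_EXCHANGE flag atomically exchanges two paths on the same file system. The sketch below is offered only as a modern analogy to the atomsw concept shown above, not as 5ESS code; it requires Linux 3.15+ and glibc 2.28+, and, unlike atomsw, it swaps directory entries rather than copying contents:

#define _GNU_SOURCE
#include <fcntl.h>   /* AT_FDCWD */
#include <stdio.h>   /* renameat2, RENAME_EXCHANGE (glibc >= 2.28) */

/* Atomically exchange two files on the same file system,
   similar in spirit to UNIX-RTR's atomsw(1). */
int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s file1 file2\n", argv[0]);
        return 1;
    }
    if (renameat2(AT_FDCWD, argv[1], AT_FDCWD, argv[2],
                  RENAME_EXCHANGE) != 0) {
        perror("renameat2");
        return 1;
    }
    return 0;
}

Because the exchange is atomic, a crash leaves each name referring to one complete file or the other, the same guarantee the atomsw manual page describes for file2.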
GM-NAA I/O The GM-NAA I/O input/output system of General Motors and North American Aviation was the first operating system for the IBM 704 computer. It was created in 1956 by Robert L. Patrick of General Motors Research and Owen Mock of North American Aviation. It was based on a system monitor created in 1955 by programmers of General Motors for its IBM 701. The main function of GM-NAA I/O was to automatically execute a new program once the one that was being executed had finished (batch processing). It consisted of routines shared among the programs that provided common access to the input/output devices. Some version of the system was used in about forty 704 installations. See also SHARE Operating System, an operating system based on GM-NAA I/O. Multiple Console Time Sharing System Timeline of operating systems Resident monitor References External links Operating Systems at Conception by Robert L. Patrick The World’s First Computer Operating System in millosh's blog talks about the General Motors OS and GM-NAA I/O IBM mainframe operating systems Discontinued operating systems 1956 software Computer-related introductions in 1956 History of computing
Cross-platform software In computing, cross-platform software (also called multi-platform software, platform-agnostic software, or platform-independent software) is computer software that is designed to work on several computing platforms. Some cross-platform software requires a separate build for each platform, but some can be directly run on any platform without special preparation, being written in an interpreted language or compiled to portable bytecode for which the interpreters or run-time packages are common or standard components of all supported platforms. For example, a cross-platform application may run on Microsoft Windows, Linux, and macOS. Cross-platform software may run on many platforms, or as few as two. Some frameworks for cross-platform development are Codename One, Kivy, Qt, Flutter, NativeScript, Xamarin, Phonegap, Ionic, and React Native. Platforms Platform can refer to the type of processor (CPU) or other hardware on which an operating system (OS) or application runs, the type of OS, or a combination of the two. An example of a common platform is the Microsoft Windows OS running on the x86 architecture. Other well-known desktop platforms are Linux/Unix and macOS, both of which are themselves cross-platform. There are, however, many devices such as smartphones that are also platforms. Applications can be written to depend on the features of a particular platform—either the hardware, OS, or virtual machine (VM) it runs on. For example, the Java platform is a common VM platform which runs on many OSs and hardware types. Hardware A hardware platform can refer to an instruction set architecture. For example: x86 architecture and its variants such as IA-32 and x86-64. These machines often run one version of Microsoft Windows, though they can run other OSs including Linux, OpenBSD, NetBSD, macOS and FreeBSD. The 32-bit ARM architecture (and its newer 64-bit version) is common on smartphones and tablet computers, which run Android, iOS and other mobile operating systems. Software A software platform can be either an OS or programming environment, though more commonly it is a combination of both. An exception is Java, which uses an OS-independent VM to execute Java bytecode. Examples of software platforms are: BlackBerry 10 Android for smartphones and tablet computers (x86, ARM) iOS (ARM) Microsoft Windows (x86, ARM) Microsoft's Common Language Infrastructure (CLI), also known as .NET Framework Cross-platform variant Mono (previously by Novell and now by Xamarin) Java Web browsers – more or less compatible with each other, running JavaScript web-apps Linux (x86, PowerPC, ARM, and other architectures) macOS (x86, PowerPC (on 10.5 and below), and ARM (on Apple silicon or 11.0 and above)) Mendix Solaris (SPARC, x86) SymbianOS PlayStation 4 (x86), PlayStation 3 (PowerPC) and PlayStation Vita (ARM) Unix Xbox Minor/historical AmigaOS (m68k), AmigaOS 4 (PowerPC), AROS (x86, PowerPC, m68k), MorphOS (PowerPC) Atari TOS, MiNT BSD (many platforms; see NetBSD, for example) DOS-type systems on the x86: MS-DOS, IBM PC DOS, DR-DOS, FreeDOS OS/2, eComStation Java The Java language is typically compiled to run on a VM that is part of the Java platform. The Java VM (JVM) is a CPU implemented in software, which runs all Java code. This enables the same code to run on all systems that implement a JVM. Java software can be executed by a hardware-based Java processor. This is used mostly in embedded systems.
Java code running in the JVM has access to OS-related services, like disk I/O and network access, if the appropriate privileges are granted. The JVM makes the system calls on behalf of the Java application. This lets users decide the appropriate protection level, depending on an ACL. For example, disk and network access is usually enabled for desktop applications, but not for browser-based applets. The Java Native Interface (JNI) can also be used to access OS-specific functions, with a loss of portability. Currently, Java Standard Edition software can run on Microsoft Windows, macOS, several Unix-like OSs, and several real-time operating systems for embedded devices. For mobile applications, browser plugins are used for Windows and Mac based devices, and Android has built-in support for Java. There are also subsets of Java, such as Java Card or Java Platform, Micro Edition, designed for resource-constrained devices. Implementation For software to be considered cross-platform, it must function on more than one computer architecture or OS. Developing such software can be a time-consuming task because different OSs have different application programming interfaces (APIs). For example, Linux uses a different API from Windows. Software written for one OS may not automatically work on all architectures that OS supports. One example is OpenOffice.org, which in 2006 did not natively run on AMD64 or Intel 64 processors implementing the x86-64 standards; by 2012 it was "mostly" ported to these systems. Just because software is written in a popular programming language such as C or C++, it does not mean it will run on all OSs that support that language—or even on different versions of the same OS. Web applications Web applications are typically described as cross-platform because, ideally, they are accessible from any web browser: the browser is the platform. Web applications generally employ a client–server model, but vary widely in complexity and functionality. It can be hard to reconcile the desire for features with the need for compatibility. Basic web applications perform all or most processing from a stateless server, and pass the result to the client web browser. All user interaction with the application consists of simple exchanges of data requests and server responses. This type of application was the norm in the early phases of World Wide Web application development. Such applications follow a simple transaction model, identical to that of serving static web pages. Today, they are still relatively common, especially where cross-platform compatibility and simplicity are deemed more critical than advanced functionality. Prominent examples of advanced web applications include the Web interface to Gmail, A9.com, the Google Maps website, and the Live Search service (now Bing) from Microsoft. Such applications routinely depend on additional features found only in the more recent versions of popular web browsers. These features include Ajax, JavaScript, Dynamic HTML, SVG, and other components of rich web applications. Older browser versions often lack these. Design Because of the competing interests of compatibility and functionality, numerous design strategies have emerged. Many software systems use a layered architecture where platform-dependent code is restricted to the upper- and lowermost layers.
Graceful degradation Graceful degradation attempts to provide the same or similar functionality to all users and platforms, while diminishing that functionality to a least common denominator for more limited client browsers. For example, a user attempting to use a limited-feature browser to access Gmail may notice that Gmail switches to basic mode, with reduced functionality but still of use. Multiple codebases Some software is maintained in distinct codebases for different (hardware and OS) platforms, with equivalent functionality. This requires more effort to maintain the code, but can be worthwhile where the amount of platform-specific code is high. Single codebase This strategy relies on having one codebase that may be compiled to multiple platform-specific formats. One technique is conditional compilation (see the C sketch below). With this technique, code that is common to all platforms is not repeated. Blocks of code that are only relevant to certain platforms are made conditional, so that they are only interpreted or compiled when needed. Another technique is separation of functionality, which disables functionality not supported by browsers or OSs, while still delivering a complete application to the user. (See also: Separation of concerns.) This technique is used in web development where interpreted code (as in scripting languages) can query the platform it is running on to execute different blocks conditionally. Third-party libraries Third-party libraries attempt to simplify cross-platform capability by hiding the complexities of client differentiation behind a single, unified API, at the expense of vendor lock-in. Responsive Web design Responsive web design (RWD) is a Web design approach aimed at crafting the visual layout of sites to provide an optimal viewing experience—easy reading and navigation with a minimum of resizing, panning, and scrolling—across a wide range of devices, from mobile phones to desktop computer monitors. Little or no platform-specific code is used with this technique. Testing Cross-platform applications need much more integration testing. Some web browsers prohibit installation of different versions on the same machine. There are several approaches used to target multiple platforms, but all of them result in software that requires substantial manual effort for testing and maintenance. Techniques such as full virtualization are sometimes used as a workaround for this problem. Tools such as the Page Object Model allow cross-platform tests to be scripted so that one test case covers multiple versions of an app. If different versions have similar user interfaces, all can be tested with one test case. Traditional applications Web applications are becoming increasingly popular but many computer users still use traditional application software which does not rely on a client/web-server architecture. The distinction between traditional and web applications is not always clear. Features, installation methods and architectures for web and traditional applications overlap and blur the distinction. Nevertheless, this simplifying distinction is a common and useful generalization. Binary software Traditional application software has been distributed as binary files, especially executable files. Executables only support the platform they were built for—which means that a single cross-platform executable could be very bloated with code that never executes on a particular platform. Instead, generally there is a selection of executables, each built for one platform.
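As a minimal sketch of the conditional-compilation technique mentioned above (the helper name sleep_ms is invented for the example), the following C program selects a platform-specific implementation at build time while the rest of the code is shared:

#include <stdio.h>

/* Platform-specific block, selected at compile time. */
#if defined(_WIN32)
#include <windows.h>
static void sleep_ms(unsigned ms) { Sleep(ms); }          /* Windows API */
#else
#include <unistd.h>
static void sleep_ms(unsigned ms) { usleep(ms * 1000); }  /* POSIX API */
#endif

/* Common code, written once for all platforms. */
int main(void)
{
    puts("waiting 100 ms...");
    sleep_ms(100);
    puts("done");
    return 0;
}

The same source file is then built once per platform with that platform's toolchain, producing the per-platform executables discussed above.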
For software that is distributed as a binary executable, such as that written in C or C++, there must be a software build for each platform, using a toolset that translates—transcompiles—a single codebase into multiple binary executables. For example, Firefox, an open-source web browser, is available on Windows, macOS (both PowerPC and x86 through what Apple Inc. calls a Universal binary), Linux, and BSD on multiple computer architectures. The four platforms (in this case, Windows, macOS, Linux, and BSD) are separate executable distributions, although they come largely from the same source code. The use of different toolsets may not be enough to build working executables for different platforms. In this case, programmers must port the source code to the new platform. For example, an application such as Firefox, which already runs on Windows on the x86 family, can be modified and re-built to run on Linux on the x86 (and potentially other architectures) as well. The multiple versions of the code may be stored as separate codebases, or merged into one codebase. An alternative to porting is cross-platform virtualization, where applications compiled for one platform can run on another without modification of the source code or binaries. As an example, Apple's Rosetta, which is built into Intel-based Macintosh computers, runs applications compiled for the previous generation of Macs that used PowerPC CPUs. Another example is IBM PowerVM Lx86, which allows Linux/x86 applications to run unmodified on the Linux/Power OS. Example of cross-platform binary software: The LibreOffice office suite is built for Microsoft Windows, macOS, many Linux distributions, FreeBSD, NetBSD, OpenBSD, Android, iOS, Chrome OS, web-based Collabora Online and many others. Many of these are supported on several hardware platforms with processor architectures including IA-32, x86-64 and ARM. Scripts and interpreted languages A script can be considered to be cross-platform if its interpreter is available on multiple platforms and the script only uses the facilities built into the language. For example, a script written in Python for a Unix-like system will likely run with little or no modification on Windows, because Python also runs on Windows; indeed there are many implementations (e.g. IronPython for .NET Framework). The same goes for many of the open-source scripting languages. Unlike binary executable files, the same script can be used on all computers that have software to interpret the script. This is because the script is generally stored in plain text in a text file. There may be some trivial issues, such as the representation of a newline character. Some popular cross-platform scripting languages are: bash – A Unix shell commonly run on Linux and other modern Unix-like systems, as well as on Windows via the Cygwin POSIX compatibility layer. Perl – First released in 1987. Used for CGI programming, small system administration tasks, and more. PHP – Mostly used for web applications. Python – A language which focuses on rapid application development and ease of writing, instead of run-time efficiency. Ruby – An object-oriented language which aims to be easy to read. Can also be used on the web through Ruby on Rails. Tcl – A dynamic programming language, suitable for a wide range of uses, including web and desktop applications, networking, administration, testing and many more. Video games Cross-platform or multi-platform is a term that can also apply to video games released on a range of video game consoles.
Examples of cross-platform games include: Miner 2049er, Tomb Raider: Legend, the FIFA series, the NHL series and Minecraft. Each has been released across a variety of gaming platforms, such as the Wii, PlayStation 3, Xbox 360, personal computers, and mobile devices. Some platforms are harder to write for than others. To offset this, a video game may be released on a few platforms first, then later on others. Typically, this happens when a new gaming system is released, because video game developers need to acquaint themselves with its hardware and software. Some games may not be cross-platform because of licensing agreements between developers and video game console manufacturers that limit development to one particular console. As an example, Disney could create a game with the intention of release on the latest Nintendo and Sony game consoles. Should Disney license the game with Sony first, it may be required to release the game solely on Sony's console for a short time or indefinitely. Cross-platform play Several developers have implemented ways to play games online while using different platforms. Psyonix, Epic Games, Microsoft, and Valve all possess technology that allows Xbox 360 and PlayStation 3 gamers to play with PC gamers, leaving the decision of which platform to use to consumers. The first game to allow this level of interactivity between PC and console games was Quake 3. Games that feature cross-platform online play include Rocket League, Final Fantasy XIV, Street Fighter V, Killer Instinct, Paragon and Fable Fortune, and Minecraft with its Better Together update on Windows 10, VR editions, Pocket Edition and Xbox One. Programming Cross-platform programming is the practice of deliberately writing software to work on more than one platform. Approaches There are different ways to write a cross-platform application. One approach is to create multiple versions of the same software in different source trees—in other words, the Microsoft Windows version of an application might have one set of source code files and the Macintosh version another, while a FOSS *nix system might have a third. While this approach is straightforward, it can cost much more compared to developing for only one platform, requiring a larger team to be paid or products to be released more slowly. It can also result in more bugs to be tracked and fixed. Another approach is to use software that hides the differences between the platforms. This abstraction layer insulates the application from the platform. Such applications are platform agnostic. Applications that run on the JVM are built this way. Some applications mix various methods of cross-platform programming to create the final application. An example is the Firefox web browser, which uses abstraction to build some of the lower-level components, with separate source subtrees for implementing platform-specific features (like the GUI), and the implementation of more than one scripting language to ease software portability. Firefox implements XUL, CSS and JavaScript for extending the browser, in addition to classic Netscape-style browser plugins. Much of the browser itself is written in XUL, CSS, and JavaScript. Toolkits and environments There are many tools available to help the process of cross-platform programming: 8th: a development language which utilizes Juce as its GUI layer. It currently supports Android, iOS, Windows, macOS, Linux and Raspberry Pi.
Anant Computing: A mobile application platform that works in all Indian languages, including their keyboards, and also supports AppWallet and native performance in all OSs. AppearIQ: a framework that supports the workflow of app development and deployment in an enterprise environment. Natively developed containers present hardware features of the mobile devices or tablets through an API to HTML5 code, thus facilitating the development of mobile apps that run on different platforms. Boden: a UI framework written in C++. Cairo: a free software library used to provide a vector graphics-based, device-independent API. It is designed to provide primitives for 2-dimensional drawing across a number of different backends. Cairo is written in C and has bindings for many programming languages. Cocos2d: an open-source toolkit and game engine for developing 2D and simple 3D cross-platform games and applications. Codename One: an open-source Write Once Run Anywhere (WORA) framework for Java and Kotlin developers. Delphi: an IDE which uses a Pascal-based language for development. It supports Android, iOS, Windows, macOS, Linux. Ecere SDK: a GUI and 2D/3D graphics toolkit and IDE, written in eC and with support for additional languages such as C and Python. It supports Linux, FreeBSD, Windows, Android, macOS and the Web through Emscripten or Binaryen (WebAssembly). Eclipse: an open-source development environment. Implemented in Java with a configurable architecture which supports many tools for software development. Add-ons are available for several languages, including Java and C++. FLTK: an open-source toolkit, but more lightweight because it restricts itself to the GUI. Flutter: A cross-platform UI framework for Android and iOS developed by Google. fpGUI: An open-source widget toolkit that is completely implemented in Object Pascal. It currently supports Linux, Windows and a bit of Windows CE. GeneXus: A Windows rapid software development solution for cross-platform application creation and deployment based on knowledge representation and supporting C#, COBOL, Java including Android and BlackBerry smart devices, Objective-C for Apple mobile devices, RPG, Ruby, Visual Basic, and Visual FoxPro. GLBasic: A BASIC dialect and compiler that generates C++ code. It includes cross compilers for many platforms and supports numerous platforms (Windows, Mac, Linux, Android, iOS and some exotic handhelds). Godot: an SDK which uses the Godot Engine. GTK+: An open-source widget toolkit for Unix-like systems with X11 and Microsoft Windows. Haxe: An open-source language. Juce: An application framework written in C++, used to write native software on numerous systems (Microsoft Windows, POSIX, macOS), with no change to the code. Kivy: an open-source cross-platform UI framework written in Python. It supports Android, iOS, Linux, OS X, Windows and Raspberry Pi. LEADTOOLS: Cross-platform SDK libraries to integrate recognition, document, medical, imaging, and multimedia technologies into Windows, iOS, macOS, Android, Linux and web applications. LiveCode: a commercial cross-platform rapid application development language inspired by HyperTalk. Lazarus: A programming environment for the FreePascal Compiler. It supports the creation of self-standing graphical and console applications and runs on Linux, macOS, iOS, Android, WinCE, Windows and the Web.
Max/MSP: A visual programming language that encapsulates platform-independent code with a platform-specific runtime environment into applications for macOS and Windows. A cross-platform Android runtime that allows unmodified Android apps to run natively on iOS and macOS. Mendix: a cloud-based low-code application development platform. MonoCross: an open-source model–view–controller design pattern where the model and controller are cross-platform but the view is platform-specific. Mono: An open-source cross-platform version of Microsoft .NET (a framework for applications and programming languages) MoSync: an open-source SDK for mobile platform app development in the C++ family. Mozilla application framework: an open-source platform for building macOS, Windows and Linux applications. NativeScript: a cross-platform JavaScript/TypeScript framework for Android and iOS development. OpenGL: a 3D graphics library. Pixel Game Maker MV: Proprietary 2D game development software for Windows for developing Windows and Nintendo Switch games. PureBasic: a proprietary language and IDE for building macOS, Windows and Linux applications. ReNative: The universal development SDK to build multi-platform projects with React Native. Includes the latest iOS, tvOS, Android, Android TV, Web, Tizen TV, Tizen Watch, LG webOS, macOS/OSX, Windows, KaiOS, Firefox OS and Firefox TV platforms. Qt: an application framework and widget toolkit for Unix-like systems with X11, Microsoft Windows, macOS, and other systems—available under both proprietary and open-source licenses. Simple and Fast Multimedia Library: A multimedia C++ API that provides low- and high-level access to graphics, input, audio, etc. Simple DirectMedia Layer: an open-source multimedia library written in C that creates an abstraction over various platforms’ graphics, sound, and input APIs. It runs on OSs including Linux, Windows and macOS and is aimed at games and multimedia applications. Smartface: a native app development tool to create mobile applications for Android and iOS, using a WYSIWYG design editor with a JavaScript code editor. Tcl/Tk Ultimate++: a C++ rapid application development framework focused on programmers' productivity. It includes a set of libraries (GUI, SQL, etc.), and an integrated development environment. It supports Windows and Unix-like OSs. Unity: Another cross-platform SDK which uses the Unity Engine. Uno Platform: Windows, macOS, iOS, Android, WebAssembly and Linux using C#. Unreal: A cross-platform SDK which uses the Unreal Engine. V-Play Engine: V-Play is a cross-platform development SDK based on the popular Qt framework. V-Play apps and games are created within Qt Creator. WaveMaker: A low-code development tool to create responsive web and hybrid mobile (Android & iOS) applications. WinDev: an Integrated Development Environment for Windows, Linux, .NET and Java, and web browsers. Optimized for business and industrial applications. wxWidgets: an open-source widget toolkit that is also an application framework. It runs on Unix-like systems with X11, Microsoft Windows and macOS. Xojo: a RAD IDE that uses an object-oriented programming language to create desktop, web and iOS apps. Xojo makes native, compiled desktop apps for macOS, Windows, Linux and Raspberry Pi. It creates compiled web apps that can be run as standalone servers or through CGI. And it recently added the ability to create native iOS apps. Challenges There are many challenges when developing cross-platform software.
Testing cross-platform applications may be considerably more complicated, since different platforms can exhibit slightly different behaviors or subtle bugs. This problem has led some developers to deride cross-platform development as "write once, debug everywhere", a take on Sun Microsystems' "write once, run anywhere" marketing slogan. Developers are often restricted to using the lowest common denominator subset of features which are available on all platforms. This may hinder the application's performance or prohibit developers from using the most advanced features of each platform. Different platforms often have different user interface conventions, which cross-platform applications do not always accommodate. For example, applications developed for macOS and GNOME are supposed to place the most important button on the right-hand side of a window or dialog, whereas Microsoft Windows and KDE have the opposite convention. Though many of these differences are subtle, a cross-platform application which does not conform to these conventions may feel clunky or alien to the user. When working quickly, such opposing conventions may even result in data loss, such as in a dialog box confirming whether to save or discard changes. Scripting languages and VM bytecode must be translated into native executable code each time they are used, imposing a performance penalty. This penalty can be alleviated using techniques like just-in-time compilation; but some computational overhead may be unavoidable. Different platforms require the use of native package formats such as RPM and MSI. Multi-platform installers such as InstallAnywhere address this need. Cross-platform execution environments may suffer cross-platform security flaws, creating a fertile environment for cross-platform malware. See also Fat binary Cross-platform play Hardware-agnostic Software portability List of video games that support cross-platform play List of widget toolkits Platform virtualization Java (software platform) Language binding Transcompiler Binary code compatibility Xamarin Comparison of user features of messaging platforms Mobile development frameworks, many of which are cross-platform. References Computing platforms Interoperability
Boot sector A boot sector is the sector of a persistent data storage device (e.g., hard disk, floppy disk, or optical disc) which contains machine code to be loaded into random-access memory (RAM) and then executed by a computer system's built-in firmware (e.g., the BIOS). Usually, the very first sector of the hard disk is the boot sector, regardless of sector size (512 or 4096 bytes) and partitioning flavor (MBR or GPT). The purpose of defining one particular sector as the boot sector is inter-operability between various firmwares and various operating systems. The purpose of chainloading first a firmware (e.g., the BIOS), then some code contained in the boot sector, and then, for example, an operating system, is maximal flexibility. The IBM PC and compatible computers On an IBM PC compatible machine, the BIOS selects a boot device, then copies the first sector from the device (which may be an MBR, VBR or any executable code) into physical memory at memory address 0x7C00. On other systems, the process may be quite different. Unified Extensible Firmware Interface (UEFI) UEFI (as opposed to legacy boot via the CSM) does not rely on boot sectors; a UEFI system loads the boot loader (an EFI application file on a USB disk or in the EFI system partition) directly. Additionally, the UEFI specification contains "secure boot", which requires UEFI boot code to be digitally signed. Damage to the boot sector If a boot sector receives physical damage, the hard disk will no longer be bootable, unless used with a custom BIOS which defines a non-damaged sector as the boot sector. However, since the very first sector additionally contains data regarding the partitioning of the hard disk, the hard disk will become entirely unusable, except when used in conjunction with custom software. Partition tables A disk can be partitioned into multiple partitions and, on conventional systems, it is expected to be. There are two conventions for storing the information regarding the partitioning: A master boot record (MBR) is the first sector of a data storage device that has been partitioned. The MBR sector may contain code to locate the active partition and invoke its Volume Boot Record. A volume boot record (VBR) is the first sector of a data storage device that has not been partitioned, or the first sector of an individual partition on a data storage device that has been partitioned. It may contain code to load an operating system (or other standalone program) installed on that device or within that partition. The presence of an IBM PC compatible boot loader for x86-CPUs in the boot sector is by convention indicated by a two-byte hexadecimal sequence 0x55 0xAA (called the boot sector signature) at the end of the boot sector (offsets 0x1FE and 0x1FF). This signature indicates the presence of at least a dummy boot loader which is safe to be executed, even if it may not be able actually to load an operating system. It does not indicate a particular (or even the presence of) file system or operating system, although some old versions of DOS 3 relied on it in their process to detect FAT-formatted media (newer versions do not). Boot code for other platforms or CPUs should not use this signature, since this may lead to a crash when the BIOS passes execution to the boot sector assuming that it contains valid executable code. Nevertheless, some media for other platforms erroneously contain the signature anyway, rendering this check not 100% reliable in practice.
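The signature check just described is easy to reproduce on a disk image. The following minimal C sketch (the default file name disk.img is a placeholder) reads the first 512-byte sector and tests offsets 0x1FE and 0x1FF for 0x55 0xAA:

#include <stdio.h>

/* Report whether the first sector of a disk image carries the
   boot sector signature 0x55 0xAA at offsets 0x1FE/0x1FF. */
int main(int argc, char *argv[])
{
    unsigned char sector[512];
    FILE *f = fopen(argc > 1 ? argv[1] : "disk.img", "rb");
    if (!f || fread(sector, 1, sizeof sector, f) != sizeof sector) {
        fprintf(stderr, "cannot read first sector\n");
        return 1;
    }
    fclose(f);
    if (sector[0x1FE] == 0x55 && sector[0x1FF] == 0xAA)
        puts("boot signature present");
    else
        puts("no boot signature");
    return 0;
}

As the article notes, a positive result proves only that the firmware would treat the sector as executable, not that it contains a working loader or any particular file system.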
The signature is checked for by most system BIOSes since (at least) the IBM PC/AT (but not by the original IBM PC and some other machines). Moreover, it is also checked by most MBR boot loaders before passing control to the boot sector. Some BIOSes (like the IBM PC/AT) perform the check only for fixed disk/removable drives, while for floppies and superfloppies, it is enough to start with a byte greater than or equal to 06h and the first nine words not to contain the same value, before the boot sector is accepted as valid, thereby avoiding the explicit test for 0x55, 0xAA on floppies. Since old boot sectors (e.g., very old CP/M-86 and DOS media) sometimes do not feature this signature despite the fact that they can be booted successfully, the check can be disabled in some environments. If the BIOS or MBR code does not detect a valid boot sector and therefore cannot pass execution to the boot sector code, it will try the next boot device in the row. If they all fail, it will typically display an error message and invoke INT 18h. This will either start up optional resident software in ROM (ROM BASIC), reboot the system via INT 19h after user confirmation, or cause the system to halt the bootstrapping process until the next power-up. Systems not following the above-described design are: CD-ROMs usually have their own structure of boot sectors; for IBM PC compatible systems this is subject to the El Torito specifications. C128 or C64 software on Commodore DOS disks, where data on Track 1, Sector 0 began with a magic number corresponding to the string "CBM". IBM mainframe computers place a small amount of boot code in the first and second track of the first cylinder of the disk, and the root directory, called the Volume Table of Contents, is also located at the fixed location of the third track of the first cylinder of the disk. Other (non IBM-compatible) PC systems may have different boot sector formats on their disk devices. Operation On IBM PC compatible machines, the BIOS is ignorant of the distinction between VBRs and MBRs, and of partitioning. The firmware simply loads and runs the first sector of the storage device. If the device is a floppy or USB flash drive, that will be a VBR. If the device is a hard disk, that will be an MBR. It is the code in the MBR which generally understands disk partitioning, and in turn, is responsible for loading and running the VBR of whichever primary partition is set to boot (the active partition). The VBR then loads a second-stage bootloader from another location on the disk. Furthermore, whatever is stored in the first sector of a floppy diskette, USB device, hard disk or any other bootable storage device, is not required to immediately load any bootstrap code for an OS, if ever. The BIOS merely passes control to whatever exists there, as long as the sector meets the very simple qualification of having the boot record signature of 0x55, 0xAA in its last two bytes. This is why it is easy to replace the usual bootstrap code found in an MBR with more complex loaders, even large multi-functional boot managers (programs stored elsewhere on the device which can run without an operating system), allowing users a number of choices in what occurs next. With this kind of freedom, abuse often occurs in the form of boot sector viruses.
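The MBR code's task of locating the active partition, described in the Operation section above, can be mimicked on a raw disk image by scanning the four 16-byte partition entries that start at offset 0x1BE of the first sector. This is a hedged sketch for illustration (again with disk.img as a placeholder name), not BIOS or MBR code:

#include <stdio.h>

/* Scan the classic MBR partition table (four 16-byte entries at
   offset 0x1BE) and report entries whose boot-indicator byte is
   0x80, i.e. the active partition an MBR loader would boot. */
int main(int argc, char *argv[])
{
    unsigned char mbr[512];
    FILE *f = fopen(argc > 1 ? argv[1] : "disk.img", "rb");
    if (!f || fread(mbr, 1, sizeof mbr, f) != sizeof mbr) {
        fprintf(stderr, "cannot read MBR\n");
        return 1;
    }
    fclose(f);
    for (int i = 0; i < 4; i++) {
        const unsigned char *e = mbr + 0x1BE + 16 * i;
        if (e[0] == 0x80)   /* boot-indicator (active) flag */
            printf("partition %d is active (type 0x%02X)\n", i + 1, e[4]);
    }
    return 0;
}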
Boot sector viruses Since code in the boot sector is executed automatically, boot sectors have historically been a common attack vector for computer viruses. To combat this behavior, the system BIOS often includes an option to prevent software from writing to the first sector of any attached hard drives; it could thereby protect the master boot record containing the partition table from being overwritten accidentally, but not the volume boot records in the bootable partitions. Depending on the BIOS, attempts to write to the protected sector may be blocked with or without user interaction. Most BIOSes, however, will display a popup message giving the user a chance to override the setting. The BIOS option is disabled by default because the message may not be displayed correctly in graphics mode and blocking access to the MBR may cause problems with operating system setup programs or disk access, encryption or partitioning tools like FDISK, which may not have been written to be aware of that possibility, causing them to abort ungracefully and possibly leaving the disk partitioning in an inconsistent state. As an example, the malware NotPetya attempts to gain administrative privileges on an operating system, and then attempts to overwrite the boot sector of the computer. The CIA has also developed malware that attempts to modify the boot sector in order to load additional drivers to be used by other malware. See also Master boot record (MBR) Volume boot record (VBR) Notes References External links Computer file systems BIOS Booting
Microsoft and open source Microsoft, a technology company historically known for its opposition to the open source software paradigm, turned to embrace the approach in the 2010s. From the 1970s through the 2000s under CEOs Bill Gates and Steve Ballmer, Microsoft viewed the community creation and sharing of communal code, later to be known as free and open source software, as a threat to its business, and both executives spoke negatively of it. In the 2010s, as the industry turned towards cloud, embedded, and mobile computing—technologies powered by open source advances—CEO Satya Nadella led Microsoft towards open source adoption, although Microsoft's traditional Windows business continued to grow throughout this period, generating revenues of $26.8 billion in the third quarter of 2018, while Microsoft's Azure cloud revenues nearly doubled. Microsoft open sourced some of its code, including the .NET Framework, and made investments in Linux development, server technology, and organizations, including the Linux Foundation and Open Source Initiative. Linux-based operating systems power the company's Azure cloud services. Microsoft acquired GitHub, the largest host for open source project infrastructure, in 2018. Microsoft is among the site's most active contributors. This acquisition led a few projects to migrate away from GitHub, but this proved a short-lived phenomenon: by 2019 there were over 10 million new users of GitHub. Since 2017, Microsoft has been one of the biggest open source contributors in the world, measured by the number of employees actively contributing to open source projects on GitHub, the largest host of source code in the world. History Initial stance on open source The paradigm of freely sharing computer source code—a practice known as open source—traces back to the earliest commercial computers, whose user groups shared code to reduce duplicate work and costs. Following an antitrust suit that forced the unbundling of IBM's hardware and software, a proprietary software industry grew throughout the 1970s, in which companies sought to protect their software products. The technology company Microsoft was founded in this period and has long been an embodiment of the proprietary paradigm and its tension with open source practices, well before the terms "free software" or "open source" were coined. Within a year of founding Microsoft, Bill Gates wrote an open letter that positioned the hobbyist act of copying software as a form of theft. Microsoft successfully expanded in personal computer and enterprise server markets through the 1990s, partially on the strength of the company's marketing strategies. By the late 1990s, Microsoft came to view the growing open source movement as a threat to their revenue and platform. Internal strategy memos from this period, known as the Halloween documents, describe the company's potential approaches to stopping open source momentum. One strategy was "embrace-extend-extinguish", in which Microsoft would adopt standard technology, add proprietary extensions, and upon establishing a customer base, would lock consumers into the proprietary extension to assert a monopoly over the space. The memos also acknowledged open source as a methodology capable of meeting or exceeding proprietary development methodology. Microsoft downplayed these memos as the opinions of an individual employee and not Microsoft's official position.
While many major companies worked with open source software in the 2000s, the decade was also marked by a "perennial war" between Microsoft and open source in which Microsoft continued to view open source as a scourge on its business and developed a reputation as the archenemy of the free and open source movement. Bill Gates and Microsoft CEO Steve Ballmer suggested free software developers and the Linux kernel were communist. Ballmer also likened Linux to a kind of cancer on intellectual property. Microsoft sued Lindows, a Linux operating system that could run Microsoft Windows applications, claiming trademark violation. The court rejected the claim, and after Microsoft purchased the trademark, the software changed its name to Linspire. In 2002, Microsoft began experimenting with 'shared source', including the Shared Source Common Language Infrastructure, the core of .NET Framework. Adoption 2000s In April 2004, Windows Installer XML (WiX) was the first Microsoft project to be released under an open-source license, the Common Public License. Initially hosted on SourceForge, it was also the first Microsoft project to be hosted externally. In June 2004, for the first time Microsoft was represented with a booth at LinuxTag, a free software exposition, held annually in Germany. LinuxTag claims to be Europe's largest exhibition for open source software. In August 2004, Microsoft made the complete source code of the Windows Template Library (WTL) available under the Common Public License and released it through SourceForge. Since version 9.1, the library is licensed under the Microsoft Public License. In September 2004, Microsoft released its FlexWiki, making its source code available on SourceForge. The engine is open source, also licensed under the Common Public License. FlexWiki was the third Microsoft project to be distributed via SourceForge, after WiX and the Windows Template Library. In 2005, Microsoft released the F# programming language, which would later be licensed under the Apache License 2.0. In 2006, Microsoft launched its CodePlex open source code hosting site, to provide hosting for open-source developers targeting Microsoft platforms. In the same year, Microsoft ported PHP to Windows under the PHP License and also partnered with and commissioned Vertigo Software to create Family.Show, a free and open-source genealogy program, as a reference application for Microsoft's latest UI technology and software deployment mechanism at the time, Windows Presentation Foundation and ClickOnce. The source code has been published on CodePlex and is licensed under the Microsoft Public License. In November 2006, Microsoft and Novell announced a broad partnership to make sure Windows interoperates with SUSE Linux. The initial agreement endured until 2012 and included promises not to sue over patents as well as joint development, marketing and support of Windows – Linux interoperability solutions. In addition, Microsoft and Novell agreed to work to ensure documents created in the free OpenOffice.org productivity suite can seamlessly work in Office 2007, and vice versa. Both companies also agreed to develop translators to improve interoperability between the Office Open XML and OpenDocument formats. The company also purchased 70,000 one-year SUSE Linux Enterprise Server maintenance and update subscription coupons from Novell. Microsoft could distribute the coupons to customers as a way to convince them to choose Novell's Linux rather than a competitor's Linux distribution.
Microsoft CEO Steve Ballmer acknowledged that more customers were running mixed systems and commented on the partnership with Novell. In June 2007, Tom Hanrahan, former Director of Engineering at the Linux Foundation, became Microsoft's Director of Linux Interoperability. The Open Source Initiative approved the Microsoft Public License (MS-PL) and Microsoft Reciprocal License (MS-RL) in 2007. Microsoft open sourced IronRuby, IronPython, and xUnit.net under the MS-PL in 2007. In 2008, Microsoft joined the Apache Software Foundation and co-founded the Open Web Foundation with Google, Facebook, Sun, IBM, Apache, and others. Also in 2008, Microsoft began distributing the open source jQuery JavaScript library together with the Visual Studio development environment for use within the ASP.NET AJAX and ASP.NET MVC frameworks. When Microsoft released Hyper-V in 2008, SUSE Linux Enterprise Server became the first non-Windows operating system officially supported on Hyper-V. Microsoft and Novell had signed an agreement to work on interoperability two years earlier. Microsoft first began contributing to the Linux kernel in 2009. The CodePlex Foundation, an independent 501(c)(6) non-profit corporation led mostly by Microsoft employees and affiliates, was founded by Microsoft in September 2009. Its goal was to "enable the exchange of code and understanding among software companies and open source communities." Later in September 2010, the name Outercurve Foundation was adopted. In November 2009, Microsoft released the source code of the .NET Micro Framework to the development community as free and open-source software under the Apache License 2.0. StyleCop, an originally proprietary static code analysis tool by Microsoft, was re-released as open source in April 2010 on CodePlex. Based on customer feedback, Microsoft relicensed IronRuby, IronPython, and the Dynamic Language Runtime (DLR) under the Apache License 2.0 in July 2010. Microsoft signed the Joomla contributor agreement and started upstreaming improvements in 2010. 2010s In 2011, Microsoft started contributing code to the Samba project. The same year, Microsoft also ported Node.js to Windows, upstreaming the code under the Apache License 2.0. The first version of Python Tools for Visual Studio (PTVS) was released in March 2011. After acquiring Skype in 2011, Microsoft continued maintaining the Skype Linux client. In July 2011, Microsoft was the fifth largest contributor to the Linux 3.0 kernel at 4% of the total changes. The company became a partner with LinuxTag for their 2011 event and also sponsored LinuxTag 2012. In 2012, Microsoft began hosting Linux virtual machines in the Azure cloud computing service and CodePlex introduced git support. The company also ported Apache Hadoop to Windows, upstreaming the code under the MIT License. In March 2012, a completely rewritten version of ChronoZoom was made available as open source via the Outercurve Foundation. Also, ASP.NET, ASP.NET MVC, ASP.NET Razor, ASP.NET Web API, Reactive extensions, and IL2JS (an IL to JavaScript compiler) were released under the Apache License 2.0. The TypeScript programming language was released under the Apache License 2.0 in 2012. It was the first Microsoft project hosted on GitHub.
In June 2012, Microsoft contributed Open Management Infrastructure to The Open Group with the goal "to remove all obstacles that stand in the way of implementing standards-based management so that every device in the world can be managed in a clear, consistent, coherent way and to nurture [and] spur a rich ecosystem of standards-based management products." In 2013, Microsoft relicensed the xUnit.net unit testing tool for the .NET Framework under the Apache License 2.0 and transferred it to the Outercurve Foundation. Also in 2013, Microsoft added Git support to Visual Studio and Team Foundation Server using libgit2, the most widely deployed version of Git. The company is dedicating engineering hours to help further develop libgit2 and is working with GitHub and other community programmers who devote time to the software. In 2014, Satya Nadella was named the new CEO of Microsoft, and Microsoft began to adopt open source into its core business. In contrast to Ballmer's stance, Nadella presented a slide that read, "Microsoft loves Linux". At the time of the acquisition of GitHub, Nadella said of Microsoft, "We are all in on open source." As the industry trended towards cloud, embedded, and mobile computing, Microsoft turned to open source to stay apace in these open source dominated fields. Microsoft's adoption of open source included several surprising turns. In 2014, the company opened the source of its .NET Framework to promote its software ecosystem and stimulate cross-platform development. Microsoft also started contributing to the OpenJDK the same year. The Wireless Display Adapter, released in 2014, was Microsoft's first hardware device to use embedded Linux. In early 2015, Microsoft open sourced the Z3 Theorem Prover, a cross-platform satisfiability modulo theories (SMT) solver. Also in 2015, Microsoft co-founded the Node.js Foundation and joined the R Foundation. After completing the acquisition of Revolution Analytics in 2015, Microsoft integrated the open source R programming language into SQL Server 2016, SQL Server 2017, SQL Server 2019, Power BI, Azure SQL Managed Instance, Azure Cortana Intelligence, Microsoft ML Server and Visual Studio 2017. The same year, Microsoft also open sourced Matter Center, Microsoft's legal practice management software, and also Chakra, the Microsoft Edge JavaScript engine at the time. Also in 2015, Microsoft released Windows 10 with native support for the open-source AllJoyn framework, which means that any Windows 10 device can control any AllJoyn-aware Internet of Things (IoT) device in the network. Microsoft has been developing AllJoyn support and contributing code upstream since 2014. Microsoft delivered the opening keynote at All Things Open in 2015. In August 2015, Microsoft released WinObjC, also known as Windows Bridge for iOS, an open-source middleware toolkit that allows iOS apps developed in Objective-C to be ported to Windows 10. On November 18, 2015, Visual Studio Code was released under the proprietary Microsoft License and a subset of its source code was posted to GitHub under the MIT License. In January 2016, Microsoft became a Gold Sponsor of SCALE 14x – the fourteenth annual Southern California Linux Expo, a major convention. When Microsoft acquired Xamarin and LinkedIn in 2016, it relicensed the Mono framework under the MIT License and continued maintaining the Kafka stream-processing software platform as open source.
Also in 2016, Microsoft introduced the Windows Subsystem for Linux, which lets Linux applications run on the Windows operating system. The company invested in Linux server technology and Linux development to promote cross-platform compatibility and collaboration with open source companies and communities, culminating with Microsoft's platinum sponsorship of the Linux Foundation and a seat on its Board of Directors. Microsoft released SQL Server for Linux and open sourced PowerShell, bringing it to Linux. Also, Microsoft began porting Sysinternals tools, including ProcDump and ProcMon, to Linux. R Tools for Visual Studio were released under the Apache License 2.0 in March 2016. In March 2016, Ballmer changed his stance on Linux, saying that he supports his successor Satya Nadella's open source commitments. He maintained that his comments in 2001 were right at the time but that times had changed. Commentators have noted the adoption of open source and the change of strategy at Microsoft. At EclipseCon in March 2016, Microsoft announced that the company was joining the Eclipse Foundation as a Solutions Member. The BitFunnel search engine indexing algorithm and various components of the Microsoft Bing search engine were made open source by Microsoft in 2016. vcpkg, a cross-platform open source package manager, was released in September 2016. Microsoft joined the Open Source Initiative, the Cloud Native Computing Foundation, and the MariaDB Foundation in 2017. The Open Source Initiative, formerly a target of Microsoft, cited the occasion of Microsoft's sponsorship as a milestone for open source software's widespread acceptance. The Debian-based SONiC network operating system was open sourced by Microsoft in 2017. Also the same year, Windows development was moved to Git and Microsoft open sourced the Git Virtual File System (GVFS) developed for that purpose. Other contributions to Git include a number of performance improvements useful when working with large repositories. Microsoft opened the Microsoft Store to open source applications and gave the keynote speech at the Open Source Summit North America 2017 in Los Angeles. In 2018, the Microsoft CTO of Data spoke with ZDNet about the growing importance of open source. Microsoft became Platinum Sponsor and delivered the keynote of the 2018 Southern California Linux Expo – the largest community-run open-source and free software conference in North America. Microsoft developed Linux-based operating systems for use with its Azure cloud services. Azure Cloud Switch supports the Azure infrastructure and is based on open source and proprietary technology, and Azure Sphere powers Internet of things devices. As part of its announcement, Microsoft acknowledged Linux's role in small devices where the full Windows operating system would be unnecessary. Also in 2018, Microsoft acquired GitHub, the largest host for open source project infrastructure. Microsoft is among the site's most active contributors and the site hosts the source code for Microsoft's Visual Studio Code and .NET runtime system. The company, though, has received some criticism for only providing limited returns to the Linux community, since the GPL license lets Microsoft modify Linux source code for internal use without sharing those changes. In 2018, Microsoft included the OpenSSH, tar, and curl commands in Windows. Also, Microsoft released Windows Calculator as open source under the MIT License on GitHub. Since 2018, Microsoft has been a sponsor of the AdoptOpenJDK project.
In April 2018, Microsoft released the File Manager source code licensed under the MIT License. In August 2018, Microsoft added support for the open source Python programming language to Power BI. In October 2018, Microsoft joined the Open Invention Network and cross-licensed 60,000 patents with the open source community.

In 2019, Microsoft's Windows Subsystem for Linux 2 transitioned from an emulated Linux kernel to a full Linux kernel within a virtual machine, substantially improving performance. In keeping with the GPL open source license, Microsoft submits its kernel improvements upstream for inclusion in the public mainline kernel. Also in 2019, Microsoft released Windows Terminal, PowerToys, and the Microsoft C++ Standard Library as open source, and rebased its Edge browser on the open source Chromium project. The Windows Console infrastructure was open sourced under the MIT License alongside Windows Terminal. After publishing exFAT as an open specification, Microsoft contributed the patents to the Open Invention Network (OIN) and started upstreaming the device driver to the Linux kernel. At Build 2019, Microsoft announced that it was open-sourcing its Quantum Development Kit, including its Q# compilers and simulators. In December 2019, Microsoft released Microsoft Teams for Linux, marking the first time Microsoft released an Office app for the Linux operating system; the app is available as native packages in .deb and .rpm formats. Also in December 2019, after the JS Foundation and Node.js Foundation merged to form the OpenJS Foundation, Microsoft contributed the popular cross-platform desktop application development tool Electron to the OpenJS Foundation.

2020s

Project Verona, a memory-safe research programming language, was open sourced in January 2020. Microsoft released DeepSpeed, an open source deep learning optimization library for PyTorch, in February 2020. In 2020, Microsoft open sourced the Java extension for Microsoft SQL Server, MsQuic (a Windows NT kernel library for the QUIC general-purpose transport layer network protocol), Project Petridish, a neural architecture search algorithm for deep learning, and the Fluid Framework for building distributed, real-time collaborative web applications. Microsoft also released the Linux-based Azure Sphere operating system. In March 2020, Microsoft acquired npm, the open source Node package manager and the world's largest software registry, with more than 1.3 million packages seeing 75 billion downloads a month. Also in March 2020, Microsoft, together with researchers and leaders from the Allen Institute for AI, the Chan Zuckerberg Initiative, Georgetown University's Center for Security and Emerging Technology, and the National Library of Medicine, released CORD-19, a public dataset of academic articles about COVID-19 and research related to the COVID-19 pandemic, created by text mining the current research literature. After exploring alternative options and talking with well-known commercial and open source package manager teams, including Chocolatey, Scoop, Ninite and others such as AppGet, Npackd and the PowerShell-based OneGet (a manager of package managers), Microsoft decided to develop and release the open source Windows Package Manager in 2020. Microsoft was one of the silver sponsors of the X.Org Developers Conference 2020 (XDC2020) and had multiple developers presenting on the opening day.
Microsoft completed the first phase of porting the Java OpenJDK to Windows 10 on ARM devices in June 2020. In August 2020, Microsoft became a founding member of the Open Source Security Foundation (OpenSSF), a cross-industry forum for a collaborative effort to improve open source software security. In September 2020, Microsoft released the Surface Duo, an Android-based smartphone with a Linux kernel. The same month, Microsoft released OneFuzz, a self-hosted fuzzing-as-a-service platform that automates the detection of software bugs; it supports Windows and Linux. Microsoft is a major contributor to the Chromium project: the highest percentage of non-Google contributors comes from Microsoft (35.2%), and the company contributed 29.4% of all non-Google commits to the source code in 2020. CBL-Mariner, a cloud infrastructure operating system based on Linux and developed by the Linux Systems Group at Microsoft for its edge network services and its Microsoft Azure cloud infrastructure, was open sourced in 2020.

In February 2021, Microsoft made the source code for its Extensible Storage Engine (ESE) available on GitHub under the MIT License. Also in February 2021, Microsoft, together with four other founding companies (AWS, Huawei, Google, and Mozilla), formed the Rust Foundation as an independent non-profit organization to steward the open source Rust programming language and ecosystem. In March 2021, Microsoft became a founding member of the new Eclipse Adoptium Working Group, whose goal is to promote free, open source Java runtimes. Microsoft released a preview of the Microsoft Build of OpenJDK in April 2021; it is available for x64 server and desktop editions of Windows, as well as on Linux and macOS, and the company provides long-term support for this distribution of the OpenJDK. In April 2021, Microsoft also released a Windows 10 test build that includes the ability to run Linux graphical user interface (GUI) apps using Windows Subsystem for Linux 2. The following month, Microsoft launched an open source project to make the Berkeley Packet Filter (eBPF) work on Windows. At the Windows 11 announcement event in June 2021, Microsoft showcased the new Windows Subsystem for Android (WSA), which will enable support for the Android Open Source Project (AOSP) and allow users to run Android apps on their Windows desktop. In August 2021, Microsoft announced that it is expanding its partnership to become a Strategic Member at the Eclipse Foundation.

Support of open source organizations

Microsoft is a founding member, joining member, contributing member, and/or sponsor of a number of open source related organizations and initiatives, many of which are named above.
Selected products

.NET – Managed code software framework for Windows, Linux, and macOS operating systems
.NET Bio – Bioinformatics and genomics library created to enable simple loading, saving and analysis of biological data
.NET Compiler Platform (Roslyn) – Compilers and code analysis APIs for the C# and Visual Basic .NET programming languages
.NET Gadgeteer – Rapid-prototyping standard for building small electronic devices
.NET MAUI – A cross-platform UI toolkit
.NET Micro Framework – .NET Framework platform for resource-constrained devices
AirSim – Simulator for drones, cars and other objects, built as a platform for AI research
Allegiance – Multiplayer online game providing a mix of real-time strategy and player-piloted space combat gameplay
ASP.NET
ASP.NET AJAX
ASP.NET Core
ASP.NET MVC
ASP.NET Razor
ASP.NET Web Forms
Atom – Text and source code editor for macOS, Linux, and Microsoft Windows
Babylon.js – A real-time 3D engine using a JavaScript library for displaying 3D graphics in a web browser via HTML5
BitFunnel – A signature-based search engine
Blazor – Web framework that enables developers to create web apps using C# and HTML
Bosque – Functional programming language
C++/WinRT – C++ library for Microsoft's Windows Runtime platform, designed to provide access to modern Windows APIs
C# – General-purpose, multi-paradigm programming language encompassing strong typing, lexical scoping, and imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines
CBL-Mariner – Cloud infrastructure operating system based on Linux
ChakraCore – JavaScript engine
ChronoZoom – Project that visualizes time on the broadest possible scale from the Big Bang to the present day
CLR Profiler – Memory profiler for the .NET Framework
Conference XP – Video conferencing platform
Dafny – Imperative compiled language that targets C# and supports formal specification through preconditions, postconditions, loop invariants and loop variants
Dapr – Event-driven, portable runtime system designed to support cloud native and serverless computing
DeepSpeed – Deep learning optimization library for PyTorch
Detours – C++ library for intercepting, monitoring and instrumenting binary functions on Microsoft Windows
DiskSpd – Command-line tool for storage benchmarking that generates a variety of requests against computer files, partitions or storage devices
Dynamic Language Runtime – Runtime that runs on top of the CLR and provides computer language services for dynamic languages
eBPF on Windows – Register-based virtual machine designed to run a custom 64-bit RISC-like architecture via just-in-time compilation inside the kernel
Extensible Storage Engine – An ISAM database engine that provides transacted data update and retrieval
F* – Functional programming language inspired by ML and aimed at program verification
F# – General purpose, strongly typed, multi-paradigm programming language that encompasses functional, imperative, and object-oriented programming methods
File Manager – File manager for Microsoft Windows
Fluid Framework – Platform for real-time collaboration across applications
FourQlib – Reference implementation of the FourQ elliptic curve
GW-BASIC – Dialect of the BASIC programming language
Microsoft C++ Standard Library – Implementation of the C++ Standard Library (also known as the STL)
MonoDevelop – Integrated development environment for Linux, macOS, and Windows
MSBuild – Build tool set for managed code as well as native C++ code
MsQuic – Implementation of the IETF QUIC protocol
Neural Network Intelligence – An AutoML toolkit
npm – Package manager for the JavaScript programming language
OneFuzz – Cross-platform fuzz testing framework
Open Live Writer – Desktop blogging application
Open Management Infrastructure – CIM management server
Open XML SDK – Set of managed code libraries to create and manipulate Office Open XML files programmatically
Orleans – Cross-platform software framework for building scalable and robust distributed applications based on the .NET Framework
P – Programming language for asynchronous event-driven programming and the IoT
Power Fx – Low-code, general-purpose programming language for expressing logic across the Microsoft Power Platform
PowerShell – Command-line shell and scripting language
Process Monitor – Tool that monitors and displays all file system activity in real time
ProcDump – Command-line application for creating crash dumps during a CPU spike
Project Mu – UEFI core used in Microsoft Surface and Hyper-V products
Project Verona – Experimental memory-safe research programming language
PowerToys for Windows 10 – System utilities for power users
ReactiveX – A set of tools allowing imperative programming languages to operate on sequences of data regardless of whether the data is synchronous or asynchronous, implementing reactive programming
RecursiveExtractor – An archive file extraction library written in C#
Sandcastle – Documentation generator
StyleCop – Static code analysis tool that checks C# code for conformance to recommended coding styles and a subset of the .NET Framework design guidelines
Terminal – Terminal emulator
TypeScript – Programming language similar to JavaScript, among the most popular on GitHub
U-Prove – Cross-platform technology and accompanying SDK for user-centric identity management
vcpkg – Cross-platform package manager used to simplify the acquisition and installation of third-party libraries
VFS for Git – Virtual file system extension to the Git version control system
Visual Basic .NET – Multi-paradigm, object-oriented programming language
Visual Studio Code – Source code editor and debugger for Windows, Linux and macOS, and GitHub's top open source project
VoTT (Visual Object Tagging Tool) – Electron app for image annotation and labeling
Vowpal Wabbit – Online interactive machine learning system library and program
WikiBhasha – Multi-lingual content creation application for the Wikipedia online encyclopedia
Windows Calculator – Software calculator
Windows Communication Foundation – Runtime and a set of APIs for building connected, service-oriented applications
Windows Console – Terminal emulator
Windows Driver Frameworks – Tools and libraries that aid in the creation of device drivers for Microsoft Windows
Windows Forms – Graphical user interface (GUI) class library
Windows Package Manager – Package manager for Windows 10
Windows Presentation Foundation – Graphical subsystem (similar to WinForms) for rendering user interfaces in Windows-based applications
Windows Template Library – Object-oriented C++ template library for Win32 development
Windows UI Library – Set of UI controls and features for the Universal Windows Platform (UWP)
WinJS – JavaScript library for cross-platform app development
WinObjC – Middleware toolkit that allows iOS apps developed in Objective-C to be ported to Windows 10
WiX (Windows Installer XML Toolset) – Toolset for building Windows Installer packages from XML
WorldWide Telescope – Astronomy software
XML Notepad – XML editor
XSP – Standalone web server written in C# that hosts ASP.NET for Unix-like operating systems
xUnit.net – Unit testing tool for the .NET Framework
Z3 Theorem Prover – Cross-platform satisfiability modulo theories (SMT) solver

See also
Free software movement
History of free and open-source software
Timeline of free and open-source software
Comparison of open-source and closed-source software
Business models for open-source software

References

Bibliography

Further reading

External links
Open source releases from Microsoft

Open source
History of free and open-source software
BESYS

BESYS (Bell Operating System) was an early computing environment originally implemented as a batch processing operating system in 1957 at Bell Labs for the IBM 704 computer.

Overview

The system was developed because Bell recognized a "definite mismatch…between the 704's internal speed, the sluggishness of its on-line unit-record equipment, and the inherent slowness of manual operations associated with stand-alone use." According to Drummond, the name BESYS, though commonly thought to stand for BEll SYStem, is actually a concatenation of the preexisting SHARE-assigned installation code BE for Bell Telephone Laboratories, Murray Hill, NJ and the code assigned by SHARE for systems software, SYS.

The goals of the system were:
Flexible use of hardware, nonstop operation.
Efficient batch processing, tape-to-tape operation with offline spooling of unit-record data.
Use of control cards to minimize the need for operator intervention.
Allow user programs access to input/output functions, system control and program libraries.
Core dump facilities for debugging.
Simulation of L1 and L2 interpreters to provide software compatibility with the IBM 650.

The initial version of the system, BESYS-1, was in use by October 16, 1957. It was created by George H. Mealy and Gwen Hansen with Wanda Lee Mammel and utilized IBM's FORTRAN and United Aircraft's Symbolic Assembly Program (SAP) programming languages. It was designed to efficiently deal with a large number of jobs originating on punched cards and producing results suitable for printing on paper and punched cards. The system also provided processing capabilities for data stored on magnetic tapes and magnetic disk storage units. Typically, punched card and print processing was handled offline by peripheral Electronic Accounting Machines, IBM 1401 computers, and eventually direct-coupled computers.

The first system actually used at Bell Labs was BESYS-2. The system was resident on magnetic tape, and occupied the lowest 64 (36-bit) words and the highest 4K words of memory. The upper 4K words held the resident portion of the monitor, and could be partially swapped to magnetic drum to free up additional core for the user program if needed. "BESYS was a complex software package that provided convenient input/output and integrated disk file storage facilities."

Internal use

BESYS was used extensively by many departments of Bell Labs for over a decade. It was made available through the SHARE organization to others without charge or formal technical support.

BESYS versions

Versions of the BESYS environment (BESYS-3 (1960), BESYS-4 (1962), BESYS-5 (1963), BESYS-7 (1964), and BE90 (1968)) were implemented as the underlying computers transitioned through the IBM 709X family. BESYS development was discontinued when Bell Labs moved to the IBM System/360 in 1969. Throughout this period the head of the BESYS development project was George L. Baldwin.

References

1957 software
Bell Labs
Discontinued operating systems
Comparison of privilege authorization features

A number of computer operating systems employ security features to help prevent malicious software from gaining sufficient privileges to compromise the computer system. Operating systems lacking such features, such as DOS, Windows implementations prior to Windows NT (and its descendants), CP/M-80, and all Mac operating systems prior to Mac OS X, had only one category of user who was allowed to do anything. With separate execution contexts it is possible for multiple users to store private files, for multiple users to use a computer at the same time, to protect the system against malicious users, and to protect the system against malicious programs. The first multi-user secure system was Multics, which began development in the 1960s; it wasn't until UNIX, BSD, Linux, and NT in the late 80s and early 90s that multi-tasking security contexts were brought to x86 consumer machines.

Introduction to implementations: Microsoft Windows, Mac OS, Unix and Unix-like

Security considerations

Falsified/intercepted user input

A major security consideration is the ability of malicious applications to simulate keystrokes or mouse clicks, thus tricking or spoofing the security feature into granting malicious applications higher privileges.

Using a terminal-based client (standalone or within a desktop/GUI): su and sudo run in the terminal, where they are vulnerable to spoofed input. Of course, if the user were not running a multitasking environment (i.e. a single user in the shell only), this would not be a problem. Terminal windows are usually rendered as ordinary windows to the user; therefore, on an intelligent client or desktop system used as a client, the user must take responsibility for preventing other malware on their desktop from manipulating, simulating, or capturing input.

Using a GUI/desktop tightly integrated into the operating system: Commonly, the desktop system locks or secures all common means of input before requesting passwords or other authentication, so that they cannot be intercepted, manipulated, or simulated:
PolicyKit (GNOME) – directs the X server to capture all keyboard and mouse input. Other desktop environments using PolicyKit may use their own mechanisms.
gksudo – by default "locks" the keyboard, mouse, and window focus, preventing anything but the actual user from inputting the password or otherwise interfering with the confirmation dialog.
UAC (Windows) – by default runs in the Secure Desktop, preventing malicious applications from simulating clicking the "Allow" button or otherwise interfering with the confirmation dialog. In this mode, the user's desktop appears dimmed and cannot be interacted with.

If either gksudo's "lock" feature or UAC's Secure Desktop were compromised or disabled, malicious applications could gain administrator privileges by using keystroke logging to record the administrator's password; or, in the case of UAC if running as an administrator, by spoofing a mouse click on the "Allow" button. For this reason, voice recognition is also prohibited from interacting with the dialog. Note that since the gksu password prompt runs without special privileges, malicious applications can still perform keystroke logging using e.g. the strace tool (ptrace was restricted in later kernel versions).

Fake authentication dialogs

Another security consideration is the ability of malicious software to spoof dialogs that look like legitimate security confirmation requests.
If the user were to input credentials into a fake dialog, thinking the dialog was legitimate, the malicious software would then know the user's password. If the Secure Desktop or similar feature were disabled, the malicious software could use that password to gain higher privileges.

Though it is not the default behavior for usability reasons, UAC may be configured to require the user to press Ctrl+Alt+Del (known as the secure attention sequence) as part of the authentication process. Because only Windows can detect this key combination, requiring this additional security measure prevents spoofed dialogs from behaving the same way as a legitimate dialog. For example, a spoofed dialog might not ask the user to press Ctrl+Alt+Del, from which the user could realize that the dialog was fake. Or, when the user did press Ctrl+Alt+Del, the key combination would bring the user to the screen it normally invokes instead of a UAC confirmation dialog. Thus the user could tell whether the dialog was an attempt to trick them into providing their password to a piece of malicious software.

In GNOME, PolicyKit uses different dialogs depending on the configuration of the system. For example, the authentication dialog for a system equipped with a fingerprint reader might look different from an authentication dialog for a system without one. Applications do not have access to the configuration of PolicyKit, so they have no way of knowing which dialog will appear and thus how to spoof it.

Usability considerations

Another consideration that has gone into these implementations is usability.

Separate administrator account

su requires the user to know the password to at least two accounts: the regular-use account and an account with higher privileges such as root.
sudo, kdesu and gksudo use a simpler approach: the user is pre-configured to be granted access to specific administrative tasks, but must explicitly authorize applications to run with those privileges. The user enters their own password instead of that of the superuser or some other account.
UAC and Authenticate combine these two ideas into one: administrators explicitly authorize programs to run with higher privileges, while non-administrators are prompted for an administrator username and password.
PolicyKit can be configured to adopt any of these approaches. In practice, the distribution will choose one.

Simplicity of dialog

In order to grant an application administrative privileges, sudo, gksudo, and Authenticate prompt administrators to re-enter their password. With UAC, when logged in as a standard user, the user must enter an administrator's name and password each time they need to grant an application elevated privileges; but when logged in as a member of the Administrators group, they (by default) simply confirm or deny instead of re-entering their password each time (though that is an option). While the default approach is simpler, it is also less secure: if the user physically walks away from the computer without locking it, another person could walk up and gain administrator privileges over the system. PolicyKit requires the user to re-enter his or her password or provide some other means of authentication (e.g. fingerprint).

Saving credentials

UAC prompts for authorization each time it is called to elevate a program. sudo, gksudo, and kdesu do not ask the user to re-enter their password every time they are called to elevate a program; rather, the user is asked for their password once at the start. If the user has not used their administrative privileges for a certain period of time (sudo's default is 5 minutes), the user is once again restricted to standard user privileges until they enter their password again. sudo's approach is a trade-off between security and usability: on one hand, a user only has to enter their password once to perform a series of administrator tasks, rather than having to enter their password for each task; but at the same time, the surface area for attack is larger, because all programs that run in that tty (for sudo), or all programs not running in a terminal (for gksudo and kdesu), prefixed by either of those commands before the timeout receive administrator privileges. Security-conscious users may remove the temporary administrator privileges upon completing the tasks requiring them by running the sudo -k command from each tty or pts in which sudo was used (in the case of pts's, closing the terminal emulator is not sufficient). The equivalent command for kdesu is kdesu -s. There is no gksudo option to do the same; however, running sudo -k outside a terminal instance (e.g. through the Alt + F2 "Run Application" dialogue box, with "Run in terminal" unticked) will have the desired effect.
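The caching behavior described above can be probed from a script. A minimal sketch in Python, assuming a Unix-like system with sudo installed (sudo -n runs non-interactively and fails instead of prompting when no cached credentials exist; sudo -k discards the cached timestamp):

import subprocess

def sudo_credentials_cached() -> bool:
    # 'sudo -n true' succeeds only while the timestamp from a previous
    # sudo invocation is still valid; otherwise it fails without prompting.
    result = subprocess.run(["sudo", "-n", "true"],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

def drop_sudo_credentials() -> None:
    # Equivalent to running the 'sudo -k' command described above.
    subprocess.run(["sudo", "-k"], check=False)

if __name__ == "__main__":
    print("cached:", sudo_credentials_cached())
    drop_sudo_credentials()
    print("cached after sudo -k:", sudo_credentials_cached())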
Authenticate does not save passwords. If the user is a standard user, they must enter a username and a password. If the user is an administrator, the current user's name is already filled in, and they only need to enter their password; the name can still be modified to run as another user. The application only requires authentication once, requested at the time the application needs the privilege. Once "elevated", the application does not need to authenticate again until it has been quit and relaunched.

However, there are varying levels of authentication, known as rights. The right that is requested can be shown by expanding the triangle next to "details", underneath the password. Normally, applications use system.privilege.admin, but another right may be used, such as a lower right for security, or a higher right if higher access is needed. If the right the application holds is not suitable for a task, the application may need to authenticate again to increase the privilege level.

PolicyKit can be configured to adopt either of these approaches.

Identifying when administrative rights are needed

In order for an operating system to know when to prompt the user for authorization, an application or action needs to identify itself as requiring elevated privileges. While it is technically possible for the user to be prompted at the exact moment that an operation requiring such privileges is executed, it is often not ideal to ask for privileges partway through completing a task. If the user were unable to provide proper credentials, the work done before requiring administrator privileges would have to be undone, because the task could not be seen through to the end.

In the case of user interfaces such as the Control Panel in Microsoft Windows, and the Preferences panels in Mac OS X, the exact privilege requirements are hard-coded into the system so that the user is presented with an authorization dialog at an appropriate time (for example, before displaying information that only administrators should see).
Different operating systems offer distinct methods for applications to identify their security requirements:

sudo centralises all privilege authorization information in a single configuration file, /etc/sudoers, which contains a list of users and the privileged applications and actions that those users are permitted to use. The grammar of the sudoers file is intended to be flexible enough to cover many different scenarios, such as placing restrictions on command-line parameters. For example, a user can be granted access to change anybody's password except for the root account, as follows:

pete ALL = /usr/bin/passwd [A-z]*, !/usr/bin/passwd root

User Account Control uses a combination of heuristic scanning and "application manifests" to determine if an application requires administrator privileges. Manifest (.manifest) files, first introduced with Windows XP, are XML files with the same name as the application and a suffix of ".manifest", e.g. Notepad.exe.manifest. When an application is started, the manifest is consulted for information about what security requirements the application has. For example, this XML fragment indicates that the application will require administrator access, but will not require unfettered access to other parts of the user desktop outside the application:

<security>
  <requestedPrivileges>
    <requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
  </requestedPrivileges>
</security>

Manifest files can also be compiled into the application executable itself as an embedded resource. Heuristic scanning is used primarily for backwards compatibility. One example is looking at the executable's file name: if it contains the word "Setup", the executable is assumed to be an installer, and a UAC prompt is displayed before the application starts. UAC also makes a distinction between elevation requests from a signed executable and an unsigned executable, and, if the former, whether or not the publisher is 'Windows Vista'. The color, icon, and wording of the prompts are different in each case, for example attempting to convey a greater sense of warning if the executable is unsigned.

Applications using PolicyKit ask for specific privileges when prompting for authentication, and PolicyKit performs those actions on behalf of the application. Before authenticating, users are able to see which application requested the action and which action was requested.

See also
Privilege escalation, a type of security exploit
Principle of least privilege, a security design pattern
Privileged Identity Management, the methodology of managing privileged accounts
Privileged password management, a similar concept to privileged identity management: i.e., periodically scramble privileged passwords, store password values in a secure, highly available vault, and apply policy regarding when, how and to whom these passwords may be disclosed

References

Operating system security
Privilege authorization features
Computer access control
Program Files Program Files is the directory name of a standard folder in Microsoft Windows operating systems in which applications that are not part of the operating system are conventionally installed. Typically, each application installed under the 'Program Files' directory will have a subdirectory for its application-specific resources. Shared resources, for example resources used by multiple applications from one company, are typically stored in the 'Common Program Files' directory. Overview In a standard Windows installation, the 'Program Files' directory will be at %SystemDrive%\Program Files (or the localized equivalent thereof), and the 'Common Program Files' (or the localized equivalent thereof) will be a subdirectory under 'Program Files'. In Windows Vista and later, the paths to the 'Program Files' and 'Common Program Files' directories are not localized on disk. Instead, the localized names are NTFS junction points to the non-localized locations. Additionally, the Windows shell localizes the name of the Program Files folder depending on the system's user interface display language. Both 'Program Files' and 'Common Program Files' can be moved. At system startup, the actual paths to 'Program Files' and 'Common Program Files' are loaded from the Windows registry, where they are stored in the ProgramFilesDir and CommonFilesDir values under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion. They are then made accessible to the rest of the system via the volatile environment variables %ProgramFiles%, and %CommonProgramFiles%. Applications can also obtain the locations of these paths by querying the Setup API using dirids, or through Windows Management Instrumentation, or by querying the shell using CSIDLs, or ShellSpecialFolderConstants. These are all localization-independent methods. x86-64 and IA-64 versions of Windows have two folders for application files: The Program Files folder serves as the default installation target for 64-bit programs, while the Program Files (x86) folder is the default installation target for 32-bit programs that need WoW64 emulation layer. While 64-bit Windows versions also have a %ProgramFiles(x86)% environment variable, the dirids and CSIDLs are not different between 32-bit and 64-bit environments; the APIs merely return different results, depending on whether the calling process is emulated or not. To be backwards compatible with the 8.3 limitations of the old File Allocation Table filenames, the names 'Program Files', 'Program Files (x86)' and 'Common Program Files' are shortened by the system to progra~N and common~N, where N is a digit, a sequence number that on a clean install will be 1 (or 1 and 2 when both 'Program Files' and 'Program Files (x86)' are present). If Windows is installed on an NTFS volume, by default, the 'Program Files' folder can only be modified by members of the 'Administrators' user groups. This can be an issue for programs created for Windows 9x. Those operating systems had no file system security, and programs could therefore also store their data in 'Program Files'. Programs that store their data in 'Program Files' will usually not run correctly on Windows NT systems with normal user privileges unless security is lowered for the affected subdirectories. Windows Vista addressed this issue by introducing File and Registry Virtualization. When this virtualization is enabled for a process, Windows saves changes to the 'Program Files' folder to %LocalAppData%\VirtualStore\Program Files (x86). 
Localization

See also
WinFS
File system
Directory (computing)
64-bit computing

References

File system directories
Microsoft Windows