LinuxChix LinuxChix is a women-oriented Linux community. It was formed to provide both technical and social support for women Linux users, although men are encouraged to contribute. Members of the community are referred to as "a Linux chick" (singular) and "LinuxChix" or "Linux Chix" (plural) regardless of gender. History LinuxChix was founded in 1999 by Deb Richardson, who was a technical writer and webmaster at an open source consulting firm. Her reason for founding LinuxChix was to create an alternative to the "locker room mentality" of some other Linux User Groups and forums. There are two core rules: "be polite and be helpful." LinuxChix started as an electronic mailing list called grrltalk. The growth of this mailing list led to the establishment of other mailing lists, beginning with techtalk for technical discussions and issues for discussion of women's political issues. LinuxChix received attention when ZDNet published an article on it, which was subsequently cross-posted on Slashdot. Leadership and structure Deb Richardson oversaw the activities of LinuxChix until 2001, when she handed over global coordination and hosting to Melbourne programmer and writer Jenn Vesperman. Jenn Vesperman led the community in a mostly hands-off fashion, delegating almost all tasks, including mailing list administration and website maintenance, to a group of volunteers. During Jenn Vesperman's tenure, the number of mailing lists tripled with the newchix mailing list for those new to Linux, the courses mailing list used by LinuxChix to teach each other specific topics, and the grrls-only mailing list (the only list closed to male subscribers) founded by Val Henson in 2002. At around the same time, a LinuxChix IRC server was created. The term LinuxChix refers to the organisation centered on the official website, the mailing lists and the IRC channels. The organisation has no official status, and the name is used by other loosely affiliated groups, including several local, continental, and national chapters which operate independently. In March 2007, Jenn Vesperman announced that she was retiring as the coordinator and invited nominations for a new leader. In April 2007, Mary Gardiner was announced as the new coordinator, planning to serve until 2009; however, she resigned in June 2007. The organization is currently led by three lead volunteers known as the "Tres Chix", who are elected by popular vote. In August 2007, Sulamita Garcia, Akkana Peck and Carla Schroder were elected to these positions. Regional chapters LinuxChix has over 15 regional chapters around the world. In 2004, a chapter was founded in Africa. In March 2007, on International Women's Day, Australia's two LinuxChix chapters united to form a nationwide chapter called "AussieChix". The New Zealand chapter was established in February 2007. Events Some local LinuxChix chapters hold regular meetings. Others only meet up on special occasions, such as visits from non-local members or in conjunction with technical conferences. In 2007, members of the Sydney chapter organized a LinuxChix miniconf at linux.conf.au at the University of New South Wales. Events are held on other special occasions; in 2005, for example, LinuxChix Africa organized an event to celebrate Software Freedom Day at Wits University. LinuxChix labs The Indian chapter of LinuxChix (also known as IndiChix) led an initiative to establish Linux labs in a number of cities in India. 
These labs provide spaces equipped with PCs and internet connections where women can learn more about Linux and collaborate on contributions to the Libre software community. Labs have gone live in Bangalore, Delhi, Mumbai, and Pune. See also Ada Initiative National Center for Women & Information Technology Anita Borg Institute for Women and Technology Girl Geek Dinners References External links LinuxChix website LinuxChix regional chapters Women in computing Electronic mailing lists Free and open-source software organizations Internet properties established in 1999 Linux user groups Organizations for women in science and technology Mass media companies
Operating System (OS)
900
Unix time Unix time (also known as Epoch time, POSIX time, seconds since the Epoch, or UNIX Epoch time) is a system for describing a point in time. It is the number of seconds that have elapsed since the Unix epoch, excluding leap seconds. The Unix epoch is 00:00:00 UTC on 1 January 1970 (an arbitrary date). Unix time is nonlinear with a leap second having the same Unix time as the second before it (or after it, implementation dependent), so that every day is treated as if it contains exactly 86,400 seconds, with no seconds added to or subtracted from the day as a result of positive or negative leap seconds. Due to this treatment of leap seconds, Unix time is not a true representation of UTC. Unix time is widely used in operating systems and file formats. In Unix-like operating systems, date is a command which will print or set the current time; by default, it prints or sets the time in the system time zone, but with the -u flag, it prints or sets the time in UTC and, with the TZ environment variable set to refer to a particular time zone, prints or sets the time in that time zone. Definition Two layers of encoding make up Unix time. The first layer encodes a point in time as a scalar real number which represents the number of seconds that have passed since 00:00:00 UTC on Thursday, 1 January 1970. The second layer encodes that number as a sequence of bits or decimal digits. As is standard with UTC, this article labels days using the Gregorian calendar and counts times within each day in hours, minutes, and seconds. Some of the examples also show International Atomic Time (TAI), another time scheme which uses the same seconds and is displayed in the same format as UTC, but every day is exactly 86,400 seconds long, gradually losing synchronization with the Earth's rotation at a rate of roughly one second per year. Encoding time as a number Unix time is a single signed number that increments every second, which makes it easier for computers to store and manipulate than conventional date systems. Interpreter programs can then convert it to a human-readable format. The Unix epoch is the time 00:00:00 UTC on 1 January 1970. There is a problem with this definition, in that UTC did not exist in its current form until 1972; this issue is discussed below. For brevity, the remainder of this section uses ISO 8601 date and time format, in which the Unix epoch is 1970-01-01T00:00:00Z. The Unix time number is zero at the Unix epoch and increases by exactly 86,400 per day since the epoch. Thus 2004-09-16T00:00:00Z, 12,677 days after the epoch, is represented by the Unix time number 12,677 × 86,400 = 1,095,292,800. This can be extended backwards from the epoch too, using negative numbers; thus 1957-10-04T00:00:00Z, 4,472 days before the epoch, is represented by the Unix time number −4,472 × 86,400 = −386,380,800. This applies within days as well; the time number at any given time of a day is the number of seconds that has passed since the midnight starting that day added to the time number of that midnight. Sometimes, Unix time is mistakenly referred to as Epoch time, because Unix time is based on an epoch and because of a common misunderstanding that the Unix epoch is the only epoch (often called "the Epoch"). 
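The day-count arithmetic above is straightforward to express in code. The following C sketch (an illustration of the calculation, not the routine any particular C library uses) converts a proleptic Gregorian calendar date to a day count relative to the epoch, and from there to a Unix time number; like Unix time itself, it ignores leap seconds.

```c
#include <stdio.h>
#include <stdint.h>

/* Days from 1970-01-01 to y-m-d in the proleptic Gregorian calendar
   (the "days from civil" algorithm; works for dates before the epoch too). */
static int64_t days_from_civil(int64_t y, int m, int d) {
    y -= m <= 2;                                 /* treat Jan/Feb as months 13/14 of prior year */
    int64_t era = (y >= 0 ? y : y - 399) / 400;
    int64_t yoe = y - era * 400;                 /* year of era, in [0, 399] */
    int64_t doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;
    int64_t doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;
    return era * 146097 + doe - 719468;          /* 719468 = days from 0000-03-01 to 1970-01-01 */
}

int main(void) {
    /* 2004-09-16T00:00:00Z: 12,677 days after the epoch */
    int64_t days = days_from_civil(2004, 9, 16);
    printf("%lld days -> Unix time %lld\n",
           (long long)days, (long long)(days * 86400));
    return 0;
}
```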
Leap seconds The above scheme means that on a normal UTC day, which has a duration of 86,400 seconds, the Unix time number changes in a continuous manner across midnight. For example, at the end of the day used in the examples above, the time representations progress as follows: 2004-09-16T23:59:59Z is Unix time 1,095,379,199, and one second later 2004-09-17T00:00:00Z is 1,095,379,200. When a leap second occurs, the UTC day is not exactly 86,400 seconds long and the Unix time number (which always increases by exactly 86,400 each day) experiences a discontinuity. Leap seconds may be positive or negative. No negative leap second has ever been declared, but if one were to be, then at the end of a day with a negative leap second, the Unix time number would jump up by 1 to the start of the next day. During a positive leap second at the end of a day, which occurs about every year and a half on average, the Unix time number increases continuously into the next day during the leap second and then at the end of the leap second jumps back by 1 (returning to the start of the next day). For example, this is what happened on strictly conforming POSIX.1 systems at the end of 1998: the time number reached 915,148,800 at the start of the inserted leap second 1998-12-31T23:59:60Z, continued increasing through that second, and then jumped back to 915,148,800 at 1999-01-01T00:00:00Z. Unix time numbers are repeated in the second immediately following a positive leap second. The Unix time number 1,483,228,800 is thus ambiguous: it can refer either to the start of the leap second (2016-12-31 23:59:60) or the end of it, one second later (2017-01-01 00:00:00). In the theoretical case when a negative leap second occurs, no ambiguity is caused, but instead there is a range of Unix time numbers that do not refer to any point in UTC time at all. A Unix clock is often implemented with a different type of positive leap second handling associated with the Network Time Protocol (NTP). This yields a system that does not conform to the POSIX standard. See the section below concerning NTP for details. When dealing with periods that do not encompass a UTC leap second, the difference between two Unix time numbers is equal to the duration in seconds of the period between the corresponding points in time. This is a common computational technique. However, where leap seconds occur, such calculations give the wrong answer. In applications where this level of accuracy is required, it is necessary to consult a table of leap seconds when dealing with Unix times, and it is often preferable to use a different time encoding that does not suffer from this problem. A Unix time number is easily converted back into a UTC time by taking the quotient and modulus of the Unix time number, modulo 86,400. The quotient is the number of days since the epoch, and the modulus is the number of seconds since midnight UTC on that day. If given a Unix time number that is ambiguous due to a positive leap second, this algorithm interprets it as the time just after midnight. It never generates a time that is during a leap second. If given a Unix time number that is invalid due to a negative leap second, it generates an equally invalid UTC time. If these conditions are significant, it is necessary to consult a table of leap seconds to detect them. Non-synchronous Network Time Protocol-based variant Commonly a Mills-style Unix clock is implemented with leap second handling not synchronous with the change of the Unix time number. The time number initially decreases where a leap should have occurred, and then it leaps to the correct time 1 second after the leap. This makes implementation easier, and is described by Mills' paper. Across a positive leap second, the clock reading can be decoded properly by paying attention to the leap second state variable, which unambiguously indicates whether the leap has been performed yet. The state variable change is synchronous with the leap. 
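On Linux, this leap second state variable can be queried with the adjtimex system call: a read-only call (modes set to zero) returns the current clock state, including the leap states discussed in this section. A minimal sketch:

```c
#include <stdio.h>
#include <sys/timex.h>

int main(void) {
    struct timex tx = { .modes = 0 };   /* modes == 0: query only, change nothing */
    int state = adjtimex(&tx);
    switch (state) {
    case TIME_OK:    puts("no leap second pending");                   break;
    case TIME_INS:   puts("leap second will be inserted at midnight"); break;
    case TIME_DEL:   puts("leap second will be deleted at midnight");  break;
    case TIME_OOP:   puts("leap second insertion in progress");        break;
    case TIME_WAIT:  puts("leap second has just occurred");            break;
    case TIME_ERROR: puts("clock not synchronized");                   break;
    default:         puts("query failed");                             break;
    }
    return 0;
}
```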
A similar situation arises with a negative leap second, where the second that is skipped is slightly too late. Very briefly the system shows a nominally impossible time number, but this can be detected by the TIME_DEL state and corrected. In this type of system the Unix time number violates POSIX around both types of leap second. Collecting the leap second state variable along with the time number allows for unambiguous decoding, so the correct POSIX time number can be generated if desired, or the full UTC time can be stored in a more suitable format. The decoding logic required to cope with this style of Unix clock would also correctly decode a hypothetical POSIX-conforming clock using the same interface. This would be achieved by indicating the TIME_INS state during the entirety of an inserted leap second, then indicating TIME_WAIT during the entirety of the following second while repeating the seconds count. This requires synchronous leap second handling. This is probably the best way to express UTC time in Unix clock form, via a Unix interface, when the underlying clock is fundamentally untroubled by leap seconds. TAI-based variant Another, much rarer, non-conforming variant of Unix time keeping involves encoding TAI rather than UTC; some Linux systems are configured this way. Because TAI has no leap seconds, and every TAI day is exactly 86,400 seconds long, this encoding is actually a pure linear count of seconds elapsed since 1970-01-01T00:00:00 TAI. This makes time interval arithmetic much easier. Time values from these systems do not suffer the ambiguity that strictly conforming POSIX systems or NTP-driven systems have. In these systems it is necessary to consult a table of leap seconds to correctly convert between UTC and the pseudo-Unix-time representation. This resembles the manner in which time zone tables must be consulted to convert to and from civil time; the IANA time zone database includes leap second information, and the sample code available from the same source uses that information to convert between TAI-based time stamps and local time. Conversion also runs into definitional problems prior to the 1972 commencement of the current form of UTC (see section UTC basis below). This TAI-based system, despite its superficial resemblance, is not Unix time. It encodes times with values that differ by several seconds from the POSIX time values. A version of this system was proposed for inclusion in ISO C's <time.h>, but only the UTC part was accepted in 2011. A TAI-based clock, std::chrono::tai_clock, does, however, exist in C++20. Representing the number A Unix time number can be represented in any form capable of representing numbers. In some applications the number is simply represented textually as a string of decimal digits, raising only trivial additional problems. However, certain binary representations of Unix times are particularly significant. The Unix time_t data type that represents a point in time is, on many platforms, a signed integer, traditionally of 32 bits (but see below), directly encoding the Unix time number as described in the preceding section. Being 32 bits means that it covers a range of about 136 years in total. The minimum representable date is Friday 1901-12-13, and the maximum representable date is Tuesday 2038-01-19. One second after 03:14:07 UTC on 2038-01-19 this representation will overflow in what is known as the year 2038 problem. 
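The boundary is easy to demonstrate. This illustrative C sketch prints the last moment a signed 32-bit time_t can represent and shows the wraparound that gives the year 2038 problem its character (the int32_t conversion is implementation-defined in C, but yields INT32_MIN on common two's-complement platforms):

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    int64_t last = INT32_MAX;            /* 2,147,483,647 = 2038-01-19T03:14:07Z */
    time_t t = (time_t)last;
    char buf[32];
    strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", gmtime(&t));
    printf("last 32-bit second: %s\n", buf);

    /* One second later, a 32-bit signed counter wraps around. */
    int32_t wrapped = (int32_t)(last + 1);
    printf("wrapped counter value: %d\n", wrapped);   /* -2147483648 */
    return 0;
}
```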
In some newer operating systems, time_t has been widened to 64 bits. This expands the times representable by approximately 292 billion years in both directions, which is over twenty times the present age of the universe per direction. There was originally some controversy over whether the Unix time_t should be signed or unsigned. If unsigned, its range in the future would be doubled, postponing the 32-bit overflow (by 68 years). However, it would then be incapable of representing times prior to the epoch. The consensus is for time_t to be signed, and this is the usual practice. The software development platform for version 6 of the QNX operating system has an unsigned 32-bit time_t, though older releases used a signed type. The POSIX and Open Group Unix specifications include the C standard library, which includes the time types and functions defined in the <time.h> header file. The ISO C standard states that time_t must be an arithmetic type, but does not mandate any specific type or encoding for it. POSIX requires time_t to be an integer type, but does not mandate that it be signed or unsigned. Unix has no tradition of directly representing non-integer Unix time numbers as binary fractions. Instead, times with sub-second precision are represented using composite data types that consist of two integers, the first being a time_t (the integral part of the Unix time), and the second being the fractional part of the time number in millionths (in struct timeval) or billionths (in struct timespec). These structures provide a decimal-based fixed-point data format, which is useful for some applications, and trivial to convert for others. 
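For example, on a POSIX system the realtime clock can be read into a struct timespec with clock_gettime; this sketch prints the Unix time number and its fractional part in billionths:

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    struct timespec ts;
    /* CLOCK_REALTIME is the system's Unix-time clock. */
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0)
        return 1;
    /* tv_sec holds the integral Unix time number (a time_t);
       tv_nsec holds the fractional part in billionths of a second. */
    printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
    return 0;
}
```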
UTC basis The present form of UTC, with leap seconds, is defined only starting from 1 January 1972. Prior to that, since 1 January 1961 there was an older form of UTC in which not only were there occasional time steps, which were by non-integer numbers of seconds, but also the UTC second was slightly longer than the SI second, and periodically changed to continuously approximate the Earth's rotation. Prior to 1961 there was no UTC, and prior to 1958 there was no widespread atomic timekeeping; in these eras, some approximation of GMT (based directly on the Earth's rotation) was used instead of an atomic timescale. The precise definition of Unix time as an encoding of UTC is only uncontroversial when applied to the present form of UTC. The Unix epoch predating the start of this form of UTC does not affect its use in this era: the number of days from 1 January 1970 (the Unix epoch) to 1 January 1972 (the start of UTC) is not in question, and the number of days is all that is significant to Unix time. The meaning of Unix time values below +63,072,000 (i.e., prior to 1 January 1972) is not precisely defined. The basis of such Unix times is best understood to be an unspecified approximation of UTC. Computers of that era rarely had clocks set sufficiently accurately to provide meaningful sub-second timestamps in any case. Unix time is not a suitable way to represent times prior to 1972 in applications requiring sub-second precision; such applications must, at least, define which form of UT or GMT they use. The possibility of ending the use of leap seconds in civil time is being considered. A likely means to execute this change is to define a new time scale, called International Time, that initially matches UTC but thereafter has no leap seconds, thus remaining at a constant offset from TAI. If this happens, it is likely that Unix time will be prospectively defined in terms of this new time scale, instead of UTC. Uncertainty about whether this will occur makes prospective Unix time no less predictable than it already is: if UTC were simply to have no further leap seconds the result would be the same. History The earliest versions of Unix time had a 32-bit integer incrementing at a rate of 60 Hz, which was the rate of the system clock on the hardware of the early Unix systems. The value 60 Hz still appears in some software interfaces as a result. The epoch also differed from the current value. The first edition Unix Programmer's Manual dated 3 November 1971 defines the Unix time as "the time since 00:00:00, 1 January 1971, measured in sixtieths of a second". The User Manual also commented that "the chronologically-minded user will note that 2**32 sixtieths of a second is only about 2.5 years". Because of this limited range, the epoch was redefined more than once, before the rate was changed to 1 Hz and the epoch was set to its present value of 1 January 1970 00:00:00 UTC. This yielded a range of about 136 years, half of it before 1970 and half of it afterwards. As indicated by the definition quoted above, the Unix time scale was originally intended to be a simple linear representation of time elapsed since an epoch. However, there was no consideration of the details of time scales, and it was implicitly assumed that there was a simple linear time scale already available and agreed upon. The first edition manual's definition does not even specify which time zone is used. Several later problems, including the complexity of the present definition, result from Unix time having been defined gradually by usage rather than fully defined from the outset. When POSIX.1 was written, the question arose of how to precisely define time_t in the face of leap seconds. The POSIX committee considered whether Unix time should remain, as intended, a linear count of seconds since the epoch, at the expense of complexity in conversions with civil time, or a representation of civil time, at the expense of inconsistency around leap seconds. Computer clocks of the era were not sufficiently precisely set to form a precedent one way or the other. The POSIX committee was swayed by arguments against complexity in the library functions, and firmly defined the Unix time in a simple manner in terms of the elements of UTC time. This definition was so simple that it did not even encompass the entire leap year rule of the Gregorian calendar, and would make 2100 a leap year. The 2001 edition of POSIX.1 rectified the faulty leap year rule in the definition of Unix time, but retained the essential definition of Unix time as an encoding of UTC rather than a linear time scale. Since the mid-1990s, computer clocks have been routinely set with sufficient precision for this to matter, and they have most commonly been set using the UTC-based definition of Unix time. This has resulted in considerable complexity in Unix implementations, and in the Network Time Protocol, to execute steps in the Unix time number whenever leap seconds occur. 
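The corrected POSIX.1-2001 definition can be written down directly. The following C function is a transcription of the standard's "Seconds Since the Epoch" formula over the broken-down-time fields of struct tm (tm_year counts years since 1900, tm_yday days since 1 January); the last two terms are the century corrections added in the 2001 revision:

```c
#include <time.h>

/* POSIX.1-2001 "Seconds Since the Epoch" formula.  The final two terms
   implement the full Gregorian century rule (2100 is not a leap year,
   2000 was).  Leap seconds are ignored, by definition. */
long long posix_seconds_since_epoch(const struct tm *tm) {
    return tm->tm_sec
         + tm->tm_min  * 60LL
         + tm->tm_hour * 3600LL
         + tm->tm_yday * 86400LL
         + (tm->tm_year - 70) * 31536000LL
         + ((tm->tm_year - 69) / 4)   * 86400LL
         - ((tm->tm_year - 1) / 100)  * 86400LL
         + ((tm->tm_year + 299) / 400) * 86400LL;
}
```

For the date used earlier, tm_year = 104 and tm_yday = 259 (2004-09-16) yield 1,095,292,800, matching the day-count calculation above.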
Notable events in Unix time Unix enthusiasts have a history of holding "time_t parties" (pronounced "time tea parties") to celebrate significant values of the Unix time number. These are directly analogous to the new year celebrations that occur at the change of year in many calendars. As the use of Unix time has spread, so has the practice of celebrating its milestones. Usually it is time values that are round numbers in decimal that are celebrated, following the Unix convention of viewing time_t values in decimal. Among some groups round binary numbers are also celebrated, such as 2^30 (1,073,741,824), which occurred at 13:37:04 UTC on Saturday, 10 January 2004. The events that these celebrate are typically described as "N seconds since the Unix epoch", but this is inaccurate; as discussed above, due to the handling of leap seconds in Unix time the number of seconds elapsed since the Unix epoch is slightly greater than the Unix time number for times later than the epoch. At 18:36:57 UTC on Wednesday, 17 October 1973, the first appearance of the date in ISO 8601 format within the digits of Unix time (119731017) took place. At 01:46:40 UTC on Sunday, 9 September 2001, the Unix billennium (Unix time number 1,000,000,000) was celebrated. The name billennium is a portmanteau of billion and millennium. Some programs which stored timestamps using a text representation encountered sorting errors, as in a text sort times after the turnover, starting with a 1 digit, erroneously sorted before earlier times starting with a 9 digit. Affected programs included the popular Usenet reader KNode and e-mail client KMail, part of the KDE desktop environment. Such bugs were generally cosmetic in nature and quickly fixed once problems became apparent. The problem also affected many Filtrix document-format filters provided with Linux versions of WordPerfect; a patch was created by the user community to solve this problem, since Corel no longer sold or supported that version of the program. At 23:31:30 UTC on Friday, 13 February 2009, the decimal representation of Unix time reached 1,234,567,890 seconds. Google celebrated this with a Google doodle. Parties and other celebrations were held around the world, among various technical subcultures, to celebrate the 1,234,567,890th second. At 03:33:20 UTC on Wednesday, 18 May 2033, the Unix time value will equal 2,000,000,000 seconds. At 09:06:49 UTC on Friday, 16 June 2034, the Unix time value will equal the current year, month, date and hour (2034061609). At 06:28:16 UTC on Thursday, 7 February 2036, Network Time Protocol will loop over to the next epoch, as the 32-bit time stamp value used in NTP (unsigned, but based on 1 January 1900) will overflow. This date is close to the following date because the 136-year range of a 32-bit integer number of seconds is close to twice the 70-year offset between the two epochs. At 03:14:08 UTC on Tuesday, 19 January 2038, 32-bit versions of the Unix timestamp will cease to work, as it will overflow the largest value that can be held in a signed 32-bit number (2^31 − 1, or 2,147,483,647). Before this moment, software using 32-bit time stamps will need to adopt a new convention for time stamps, and file formats using 32-bit time stamps will need to be changed to support larger time stamps or a different epoch. If unchanged, the next second will be incorrectly interpreted as 20:45:52 on Friday, 13 December 1901. This is referred to as the Year 2038 problem. At 05:20:00 UTC on Saturday, 24 January 2065, the Unix time value will equal 3,000,000,000 seconds. At 06:28:15 UTC on Sunday, 7 February 2106, Unix time will reach 2^32 − 1, or 4,294,967,295, seconds, which, for systems that hold the time on 32-bit unsigned integers, is the maximum attainable. For some of these systems, the next second will be incorrectly interpreted as 00:00:00 on Thursday, 1 January 1970. Other systems may experience an overflow error with unpredictable outcomes. At 15:30:08 UTC on Sunday, 4 December AD 292,277,026,596, Unix time will overflow the largest value that can be held in a signed 64-bit number. 
This duration is nearly 22 times the estimated current age of the universe, which is 1.37 × 10^10 (13.7 billion) years. In literature and calendrics Vernor Vinge's novel A Deepness in the Sky describes a spacefaring trading civilization thousands of years in the future that still uses the Unix epoch. The "programmer-archaeologist" responsible for finding and maintaining usable code in mature computer systems first believes that the epoch refers to the time when man first walked on the Moon, but then realizes that it is "the 0-second of one of Humankind's first computer operating systems". See also Epoch (computing) System time Notes References External links Unix Programmer's Manual, first edition Personal account of the POSIX decisions by Landon Curt Noll chrono-Compatible Low-Level Date Algorithms – algorithms to convert between Gregorian and Julian dates and the number of days since the start of Unix time Calendaring standards Network time-related software Time measurement systems Time scales Time 1970
Operating System (OS)
901
Nitix Nitix (properly pronounced /nitiks/) was a retail Linux distribution, produced in Canada. The software was developed by Net Integration Technologies, Inc., which was acquired by IBM in January 2008; the product currently operates as IBM Lotus Foundations. History Nitix, originally named Weaver, was first created in September 1997 as a Linux-based server that required minimal configuration. Primarily built into pre-configured hardware platforms named Net Integrators, Nitix first became a standalone operating system capable of deployment on third-party hardware in January 2004. Programming of the earliest versions of Weaver was done primarily by Avery Pennarun and Dave Coombs while they were students at the University of Waterloo. Nitix's developers claimed that it was the only Linux-based OS with autonomic features. In June 2004, IBM Press released a new book, "Autonomic Computing," which mentions Nitix: "Nitix is one of the first companies to deliver on the promise of autonomic technology with a complete set of intelligent networking solutions for the SMB Market." In June 2005, Nitix Virtual Server was released, which allowed for the hosting of applications on its system. The architecture for the application services on Nitix allowed for applications to run in a virtual server environment, completely separated from the controlled OS environment. The Virtual Server is an RPM-based filesystem which incorporates Yellow dog Updater, Modified as an application retrieval tool. Simultaneously Net Integration Technologies began sponsoring a "Ready For Nitix" program that encouraged independent software vendors to certify applications under Nitix. Also in June 2005, Nitix began to support NS3 (Scalable Services Structure), which allows for centralized user management across multiple servers, as well as DNS propagation. In March 2007, NitixBlue was released as a new "flavor" of Nitix. NitixBlue supports the nearly hands-free installation of IBM Lotus Domino, touting no administrative headaches and complete automation of maintenance tasks. This is considered to be a large step for small and medium businesses, which previously did not have a realistic stepping stone towards the enterprise-level functionality provided by IBM Lotus Domino. In January 2008, IBM announced it would acquire Net Integration Technologies, which now functions as a separate entity under the Lotus Software Group. Features Nitix includes an automated installation process in which it installs itself onto the hard disks and performs the proper partitioning and system setup. During this process it also performs a network scan, where it determines whether or not it should enable its DHCP server, finds its gateway and internet access, and automatically configures its firewall. For modifications to the installation process, a keyboard and monitor can be attached to the server and changes can be made on the console. Further modifications can be made through the web interface. The web interface is designed such that no other access is needed for configuration modifications in most cases. From this interface, you can set up users, teams, and file access; email, collaboration through ExchangeIt!, antivirus and antispam; web sites, FTP and rsync services, NFS, Samba, AppleTalk; and more. Nitix offers multi-layer security protection based on anti-virus technology from Kaspersky and anti-spam technology from Vircom and Engate. 
Nitix's claim to fame is its proprietary Intelligent Disk Backup (idb) that automatically backs up files, emails and databases incrementally as often as every 15 minutes. Backups are made to a hard drive located in the system that can be rotated to provide off-site redundancy. Restoring files can be done individually, by user or by entire system through its web-based interface (U.S. Patent No. 7,165,154). Nitix includes many open source applications that provide much of its functionality. IBM Lotus Foundations Start On January 18, 2008, IBM announced its intention to purchase Net Integration Technologies. IBM has merged both products into an offering known as IBM Lotus Foundations. Lotus Foundations is offered as a software-only application server platform as well as a hardware appliance known as Net Integrator. As of July 2, 2008, IBM has officially started offering Lotus Foundations as opposed to Nitix Blue. IBM Lotus Foundations products were withdrawn from marketing on March 14, 2013 and are no longer available for purchase. Support for IBM Lotus Foundations products will be withdrawn on September 30, 2014. Versions Nitix has been discontinued in favour of IBM Lotus Foundations Start. Nitix was sold through a distribution channel as either software-only or on a Net Integrator. Software-only versions were "Nitix SB", "Nitix SE", and "Nitix PE", which came on one CD and were geared towards partners that complement their own third-party hardware systems with Nitix. The differences between these were the number of Client Access Licenses included and software assurance prices. Nitix could also be pre-configured on hardware systems named "Micro", "Micro 2", "Mark I" and "Mark II". Hardware selection depended on the number of hard drives and form factor. Distribution Nitix has been discontinued for sale through all distribution channels in favor of the latest Lotus Foundations Start. Support remains active. Value added resellers can purchase Lotus Foundations and resell it to end users as part of their complete solution, usually involving other IT services or custom made applications. References https://web.archive.org/web/20130923200526/http://www.lotusfoundations.com/ Lotus Foundations public website https://web.archive.org/web/20080307040932/http://www.open.nit.ca/wiki/ Nitix Wild open source website https://web.archive.org/web/20080423201757/http://www.crn.com/software/190600048 CRN's Article: Nitix: An Ideal Small-Business Server External links https://web.archive.org/web/20080311102155/http://www.nitix.com/ Nitix NitixBlue OS public website Discontinued Linux distributions Linux distributions
Operating System (OS)
902
Window class In computer programming a window class is a structure fundamental to the Microsoft Windows (Win16, Win32, and Win64) operating systems and their application programming interface (API). The structure provides a template, from which windows may be created, by specifying a window's icons, menu, background color and a few other features. It also holds a pointer to a procedure that controls how the window behaves in response to user interaction. Finally, it tells the operating system how much additional storage space is needed for the class itself and for each window created from it. There have been two versions of window classes; the main addition brought by the second one is a small additional icon for the window. The first version was present in the Windows 3.x series; the second version appeared in Windows 95 and Windows NT 3.1.
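For illustration, here is a minimal C sketch of the second version of the structure, WNDCLASSEX, being filled in and registered through the Win32 API; the window procedure and class name are placeholders invented for this example. Windows are then created from the class by passing the class name to CreateWindowEx.

```c
#include <windows.h>

/* Placeholder window procedure: hands every message to the default handler. */
static LRESULT CALLBACK ExampleWndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProc(hwnd, msg, wp, lp);
}

BOOL RegisterExampleClass(HINSTANCE hInstance) {
    WNDCLASSEX wc = {0};
    wc.cbSize        = sizeof(wc);
    wc.lpfnWndProc   = ExampleWndProc;             /* controls the window's behavior */
    wc.cbClsExtra    = 0;                          /* extra storage for the class itself */
    wc.cbWndExtra    = 0;                          /* extra storage per window */
    wc.hInstance     = hInstance;
    wc.hIcon         = LoadIcon(NULL, IDI_APPLICATION);
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1); /* background color */
    wc.lpszMenuName  = NULL;                       /* no class menu */
    wc.lpszClassName = TEXT("ExampleWindowClass");
    wc.hIconSm       = LoadIcon(NULL, IDI_APPLICATION); /* the small icon added in version 2 */
    return RegisterClassEx(&wc) != 0;
}
```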
References External links A manual page for window class on Microsoft's website About Window Classes Microsoft application programming interfaces
Operating System (OS)
903
Computer hardware Computer hardware includes the physical parts of a computer, such as the case, central processing unit (CPU), monitor, mouse, keyboard, computer data storage, graphics card, sound card, speakers and motherboard. By contrast, software is the set of instructions that can be stored and run by hardware. Hardware is so-termed because it is "hard" or rigid with respect to changes, whereas software is "soft" because it is easy to change. Hardware is typically directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware. Von Neumann architecture The template for all modern computers is the Von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann. This describes a design architecture for an electronic digital computer with subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms. The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the Von Neumann bottleneck and often limits the performance of the system. Types of computer systems Personal computer The personal computer is one of the most common types of computer due to its versatility and relatively low price. Desktop personal computers have a monitor, a keyboard, a mouse, and a computer case. The computer case holds the motherboard, fixed or removable disk drives for data storage, the power supply, and may contain other peripheral devices such as modems or network interfaces. Some models of desktop computers integrated the monitor and keyboard into the same case as the processor and power supply. Separating the elements allows the user to arrange the components in a pleasing, comfortable array, at the cost of managing power and data cables between them. Laptops are designed for portability but operate similarly to desktop PCs. They may use lower-power or reduced size components, with lower performance than a similarly priced desktop computer. Laptops contain the keyboard, display, and processor in one case. The monitor in the folding upper cover of the case can be closed for transportation, to protect the screen and keyboard. Instead of a mouse, laptops may have a touchpad or pointing stick. Tablets are portable computers that use a touch screen as the primary input device. Tablets generally weigh less and are smaller than laptops. Some tablets include fold-out keyboards, or offer connections to separate external keyboards. Some models of laptop computers have a detachable keyboard, which allows the system to be configured as a touch-screen tablet. They are sometimes called "2-in-1 detachable laptops" or "tablet-laptop hybrids". Case The computer case encloses most of the components of the system. It provides mechanical support and protection for internal elements such as the motherboard, disk drives, and power supplies, and controls and directs the flow of cooling air over internal components. The case is also part of the system to control electromagnetic interference radiated by the computer and protects internal parts from electrostatic discharge. 
Large tower cases provide space for multiple disk drives or other peripherals and usually stand on the floor, while desktop cases provide less expansion room. All-in-one style designs include a video display built into the same case. Portable and laptop computers require cases that provide impact protection for the unit. Hobbyists may decorate the cases with colored lights, paint, or other features, in an activity called case modding. Power supply A power supply unit (PSU) converts alternating current (AC) electric power to low-voltage direct current (DC) power for the computer. Laptops can run on a built-in rechargeable battery. The PSU typically uses a switched-mode power supply (SMPS), with power MOSFETs (power metal–oxide–semiconductor field-effect transistors) used in the converters and regulator circuits of the SMPS. Motherboard The motherboard is the main component of a computer. It is a board with integrated circuitry that connects the other parts of the computer including the CPU, the RAM, the disk drives (CD, DVD, hard disk, or any others) as well as any peripherals connected via the ports or the expansion slots. The integrated circuit (IC) chips in a computer typically contain billions of tiny metal–oxide–semiconductor field-effect transistors (MOSFETs). Components directly attached to or part of the motherboard include: The CPU (central processing unit), which performs most of the calculations which enable a computer to function, and is referred to as the brain of the computer. It takes program instructions from random-access memory (RAM), interprets and processes them and then sends back results so that the relevant components can carry out the instructions. The CPU is a microprocessor, which is fabricated on a metal–oxide–semiconductor (MOS) integrated circuit (IC) chip. It is usually cooled by a heat sink and fan, or water-cooling system. Most newer CPUs include an on-die graphics processing unit (GPU). The clock speed of the CPU governs how fast it executes instructions and is measured in GHz; typical values lie between 1 GHz and 5 GHz. Many modern computers have the option to overclock the CPU, which enhances performance at the expense of greater thermal output and thus a need for improved cooling. The chipset, which includes the north bridge, mediates communication between the CPU and the other components of the system, including main memory; as well as the south bridge, which is connected to the north bridge, and supports auxiliary interfaces and buses; and, finally, a Super I/O chip, connected through the south bridge, which supports the slowest and most legacy components like serial ports, hardware monitoring and fan control. Random-access memory (RAM), which stores the code and data that are being actively accessed by the CPU. For example, when a web browser is opened on the computer, it takes up memory; this is stored in the RAM until the web browser is closed. It is typically a type of dynamic RAM (DRAM), such as synchronous DRAM (SDRAM), where MOS memory chips store data on memory cells consisting of MOSFETs and MOS capacitors. RAM usually comes on dual in-line memory modules (DIMMs) in the sizes of 2 GB, 4 GB, and 8 GB, but can be much larger. Read-only memory (ROM), which stores the BIOS that runs when the computer is powered on or otherwise begins execution, a process known as bootstrapping, or "booting" or "booting up". The ROM is typically a nonvolatile BIOS memory chip, which stores data on floating-gate MOSFET memory cells. 
The BIOS (Basic Input Output System) includes boot firmware and power management firmware. Newer motherboards use Unified Extensible Firmware Interface (UEFI) instead of BIOS. Buses that connect the CPU to various internal components and to expansion cards for graphics and sound. The CMOS (complementary MOS) battery, which powers the CMOS memory for date and time in the BIOS chip. This battery is generally a watch battery. The video card (also known as the graphics card), which processes computer graphics. More powerful graphics cards are better suited to handle strenuous tasks, such as playing intensive video games or running computer graphics software. A video card contains a graphics processing unit (GPU) and video memory (typically a type of SDRAM), both fabricated on MOS integrated circuit (MOS IC) chips. Power MOSFETs make up the voltage regulator module (VRM), which controls how much voltage other hardware components receive. Expansion cards An expansion card in computing is a printed circuit board that can be inserted into an expansion slot of a computer motherboard or backplane to add functionality to a computer system via the expansion bus. Expansion cards can be used to obtain or expand on features not offered by the motherboard. Storage devices A storage device is any computing hardware and digital media that is used for storing, porting and extracting data files and objects. It can hold and store information both temporarily and permanently and can be internal or external to a computer, server or any similar computing device. Data storage is a core function and fundamental component of computers. Fixed media Data is stored by a computer using a variety of media. Hard disk drives (HDDs) are found in virtually all older computers, due to their high capacity and low cost, but solid-state drives (SSDs) are faster and more power efficient, although currently more expensive than hard drives in terms of dollars per gigabyte, so are often found in personal computers built post-2007. SSDs use flash memory, which stores data on MOS memory chips consisting of floating-gate MOSFET memory cells. Some systems may use a disk array controller for greater performance or reliability. Removable media To transfer data between computers, an external flash memory device (such as a memory card or USB flash drive) or optical disc (such as a CD-ROM, DVD-ROM or BD-ROM) may be used. Their usefulness depends on being readable by other systems; the majority of machines have an optical disk drive (ODD), and virtually all have at least one Universal Serial Bus (USB) port. Additionally, USB sticks are typically pre-formatted with the FAT32 file system, which is widely supported across operating systems. Input and output peripherals Input and output devices are typically housed externally to the main computer chassis. The following are either standard or very common to many computer systems. Input device Input devices allow the user to enter information into the system, or control its operation. Most personal computers have a mouse and keyboard, but laptop systems typically use a touchpad instead of a mouse. Other input devices include webcams, microphones, joysticks, and image scanners. Output device Output devices are designed around the senses of human beings. For example, monitors display text that can be read, speakers produce sound that can be heard. Such devices could also include printers or a Braille embosser. 
Mainframe computer A mainframe computer is a much larger computer that typically fills a room and may cost many hundreds or thousands of times as much as a personal computer. They are designed to perform large numbers of calculations for governments and large enterprises. Departmental computing In the 1960s and 1970s, more and more departments started to use cheaper and dedicated systems for specific purposes like process control and laboratory automation. A minicomputer, or colloquially mini, is a class of smaller computers that was developed in the mid-1960s and sold for much less than mainframe and mid-size computers from IBM and its direct competitors. Supercomputer A supercomputer is superficially similar to a mainframe but is instead intended for extremely demanding computational tasks. As of 2020, the fastest supercomputer on the TOP500 supercomputer list is Fugaku, in Japan, with a LINPACK benchmark score of 415 PFLOPS, superseding the second fastest, Summit, in the United States, by around 294 PFLOPS. The term supercomputer does not refer to a specific technology. Rather it indicates the fastest computations available at any given time. In mid-2011, the fastest supercomputers boasted speeds exceeding one petaflop, or 1 quadrillion (10^15 or 1,000 trillion) floating-point operations per second. Supercomputers are fast but extremely costly, so they are generally used by large organizations to execute computationally demanding tasks involving large data sets. Supercomputers typically run military and scientific applications. Although costly, they are also being used for commercial applications where huge amounts of data must be analyzed. For example, large banks employ supercomputers to calculate the risks and returns of various investment strategies, and healthcare organizations use them to analyze giant databases of patient data to determine optimal treatments for various diseases. Hardware upgrade In the context of computer hardware, an upgrade means adding new or additional hardware to a computer that improves its performance, increases its capacity, or adds new features. For example, a user could perform a hardware upgrade to replace the hard drive with a faster one or a solid-state drive (SSD) to get a boost in performance. The user may also install more random-access memory (RAM) so the computer can store additional temporary data, or retrieve such data at a faster rate. The user may add a USB 3.0 expansion card to fully use USB 3.0 devices, or could upgrade the graphics processing unit (GPU) for cleaner, more advanced graphics, or more monitors. Performing such hardware upgrades may be necessary for aged computers to meet a new or updated program's system requirements. Sales Global revenue from computer hardware in 2016 reached 408 billion euros. Recycling Because computer parts contain hazardous materials, there is a growing movement to recycle old and outdated parts. Computer hardware contains dangerous chemicals such as lead, mercury, nickel, and cadmium. According to the EPA, these e-wastes have a harmful effect on the environment unless they are disposed of properly. Making hardware requires energy, and recycling parts will reduce air pollution, water pollution, and greenhouse gas emissions. Improperly disposing of computer equipment is in fact illegal in many jurisdictions; legislation makes it mandatory to recycle computers through government-approved facilities. Recycling a computer can be made easier by taking out certain reusable parts. 
For example, the RAM, DVD drive, the graphics card, hard drive or SSD, and other similar removable parts can be reused. Many materials used in computer hardware can be recovered by recycling for use in future production. Reuse of tin, silicon, iron, aluminium, and a variety of plastics that are present in bulk in computers or other electronics can reduce the costs of constructing new systems. Components frequently contain copper, gold, tantalum, silver, platinum, palladium, and lead as well as other valuable materials suitable for reclamation. Toxic computer components The central processing unit contains many toxic materials. It contains lead and chromium in the metal plates. Resistors, semi-conductors, infrared detectors, stabilizers, cables, and wires contain cadmium. The circuit boards in a computer contain mercury and chromium. When these materials and chemicals are disposed of improperly, they become hazardous to the environment. Environmental effects According to the United States Environmental Protection Agency, only around 15% of e-waste is actually recycled. When e-waste byproducts leach into groundwater, are burned, or get mishandled during recycling, they cause harm. Health problems associated with such toxins include impaired mental development, cancer, and damage to the lungs, liver, and kidneys. That is why even wires have to be recycled. Different companies have different techniques to recycle a wire. The most popular one is the grinder that separates the copper wires from the plastic/rubber casing. When the process is done, there are two different piles left: one containing the copper powder, and the other containing plastic/rubber pieces. Computer monitors, mice, and keyboards all have a similar way of being recycled. For example, first, each of the parts is taken apart, then all of the inner parts are separated and placed into their own bins. Computer components contain many toxic substances, like dioxins, polychlorinated biphenyls (PCBs), cadmium, chromium, radioactive isotopes and mercury. A typical computer monitor may contain more than 6% lead by weight, much of which is in the lead glass of the cathode ray tube (CRT). A typical 15 inch (38 cm) computer monitor may contain 1.5 pounds (0.68 kg) of lead, but other monitors have been estimated to contain up to 8 pounds (3.6 kg) of lead. Circuit boards contain considerable quantities of lead-tin solders that are more likely to leach into groundwater or create air pollution due to incineration. In US landfills, about 40% of the lead content levels are from e-waste. The processing (e.g. incineration and acid treatments) required to reclaim these precious substances may release, generate, or synthesize toxic byproducts. Recycling of computer hardware is considered environmentally friendly because it prevents hazardous waste, including heavy metals and carcinogens, from entering the atmosphere, landfill or waterways. While electronics constitute a small fraction of total waste generated, they are far more dangerous. There is stringent legislation designed to enforce and encourage the sustainable disposal of appliances, the most notable being the Waste Electrical and Electronic Equipment Directive of the European Union and the United States National Computer Recycling Act. Efforts for minimizing computer hardware waste As computer hardware contains a wide variety of metals, the United States Environmental Protection Agency (EPA) encourages the collection and recycling of computer hardware. 
"E-cycling", the recycling of computer hardware, refers to the donation, reuse, shredding and general collection of used electronics. Generically, the term refers to the process of collecting, brokering, disassembling, repairing and recycling the components or metals contained in used or discarded electronic equipment, otherwise known as electronic waste (e-waste). "E-cyclable" items include, but are not limited to: televisions, computers, microwave ovens, vacuum cleaners, telephones and cellular phones, stereos, and VCRs and DVDs just about anything that has a cord, light or takes some kind of battery. Recycling a computer is made easier by a few of the national services, such as Dell and Apple. Both companies will take back the computer of their make or any other make. Otherwise a computer can be donated to Computer Aid International which is an organization that recycles and refurbishes old computers for hospitals, schools, universities, etc. See also Computer architecture Electronic hardware Hardware for artificial intelligence Glossary of computer hardware terms History of computing hardware Microprocessor MOSFET List of computer hardware manufacturers Open-source computing hardware Open-source hardware Transistor References External links Digital electronics
Operating System (OS)
904
Open Platform Communications Open Platform Communications (OPC) is a series of standards and specifications for industrial telecommunication. They are based on Object Linking and Embedding (OLE) for process control. An industrial automation task force developed the original standard in 1996 under the name OLE for Process Control. OPC specifies the communication of real-time plant data between control devices from different manufacturers. After the initial release in 1996, the OPC Foundation was created to maintain the standards. Since OPC has been adopted beyond the field of process control, the OPC Foundation changed the name to Open Platform Communications in 2011. The change in name reflects the applications of OPC technology for applications in building automation, discrete manufacturing, process control and others. OPC has also grown beyond its original OLE implementation to include other data transportation technologies including Microsoft Corporation's .NET Framework, XML, and even the OPC Foundation's binary-encoded TCP format. History The OPC specification was based on the OLE, COM, and DCOM technologies developed by Microsoft Corporation for the Microsoft Windows operating system family. The specification defined a standard set of objects, interfaces (e.g. in IDL), and methods for use in process control and manufacturing automation applications to facilitate interoperability. The most common OPC specification is OPC Data Access, which is used for reading and writing real-time data. When vendors refer to "OPC" generically, they typically mean OPC Data Access (OPC DA). OPC DA itself has gone through three major revisions since its inception. Versions are backwards compatible, in that a version 3 OPC Server can still be accessed by a version 1 OPC Client, since the specifications add functionality but still require the older version to be implemented as well. However, a client could be written that does not support the older functions since everything can be done using the newer ones, thus a DA-3-compatible client will not necessarily work with a DA 1.0 Server. In addition to the OPC DA specification, the OPC Foundation maintains the OPC Historical Data Access (HDA) specification. In contrast to the real time data that is accessible with OPC DA, OPC HDA allows access and retrieval of archived data. The OPC Alarms and Events specification is maintained by the OPC Foundation, and defines the exchange of alarm and event type message information, as well as variable states and state management. By 2002, the specification was compared to Fieldbus and other previous standards. An OPC Express Interface, known as OPC Xi, was approved in November 2009, for the .NET Framework. OPC Xi used Windows Communication Foundation instead of DCOM, so it can be configured for communication across the enhanced security of network address translation (NAT). About the same time, the OPC Unified Architecture (UA) was developed for platform independence. UA can be implemented with Java, Microsoft .NET, or C, eliminating the need for the Microsoft Windows platform required by earlier OPC versions. UA combined the functionality of the existing OPC interfaces with new technologies such as XML and Web services to deliver higher level manufacturing execution system (MES) and enterprise resource planning (ERP) support. The first working group for UA met in 2003, and version 1.0 was published in 2006. 
On September 16, 2010, the OPC Foundation and the MTConnect Institute announced cooperation to ensure interoperability and consistency between the two standards. Design OPC was designed to provide a common bridge for Windows-based software applications and process control hardware. Standards define consistent methods of accessing field data from plant floor devices. This method remains the same regardless of the type and source of data. An OPC Server for one hardware device provides the same methods for an OPC client to access its data as any other OPC Server for any hardware device. The aim was to reduce the amount of duplicated effort required from hardware manufacturers and their software partners, and from the supervisory control and data acquisition (SCADA) and other human-machine interface (HMI) producers in order to interface the two. Once a hardware manufacturer had developed their OPC Server for the new hardware device, their work was done with regard to allowing any 'top end' to access their device, and once the SCADA producer had developed their OPC client, it allowed access to any hardware with an OPC compliant server. OPC servers provide a method for different software packages (as long as they are OPC clients) to access data from a process control device, such as a programmable logic controller (PLC) or distributed control system. Traditionally, any time a package needed access to data from a device, a custom interface or driver had to be written. There is nothing in the OPC specifications to restrict the server to providing access to a process control device. OPC Servers can be written for anything from getting the internal temperature of a microprocessor to the current temperature in Monument Valley. Once an OPC Server is written for a particular device, it can be reused by any application that is able to act as an OPC client. OPC servers can be linked and communicate to other servers. OPC servers use Microsoft's OLE technology (also known as the Component Object Model, or COM) to communicate with clients. COM technology permits a standard for real-time information exchange between software applications and process hardware to be defined. Some OPC specifications are published, but others are available only to members of the OPC Foundation. So while no company "owns" OPC and anyone can develop an OPC server whether or not they are a member of the OPC Foundation, non-members will not necessarily be using the latest specifications. It is up to each company that requires OPC products to ensure that their products are certified and that their system integrators have the necessary training.
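For a flavor of what client access looks like in the platform-independent OPC UA generation of the standard, the following C sketch uses the open-source open62541 library (assuming a 1.x release); the endpoint URL and node identifier are placeholders invented for this example, not part of any standard.

```c
#include <stdio.h>
#include <open62541/client.h>
#include <open62541/client_config_default.h>
#include <open62541/client_highlevel.h>

int main(void) {
    UA_Client *client = UA_Client_new();
    UA_ClientConfig_setDefault(UA_Client_getConfig(client));

    /* Placeholder endpoint: a local OPC UA server on the default port. */
    if (UA_Client_connect(client, "opc.tcp://localhost:4840") != UA_STATUSCODE_GOOD) {
        UA_Client_delete(client);
        return 1;
    }

    /* Read the Value attribute of a placeholder node (namespace 1, string id). */
    UA_Variant value;
    UA_Variant_init(&value);
    UA_NodeId node = UA_NODEID_STRING(1, "boiler.temperature");
    if (UA_Client_readValueAttribute(client, node, &value) == UA_STATUSCODE_GOOD &&
        UA_Variant_hasScalarType(&value, &UA_TYPES[UA_TYPES_DOUBLE])) {
        printf("value: %f\n", *(UA_Double *)value.data);
    }

    UA_Variant_clear(&value);
    UA_Client_disconnect(client);
    UA_Client_delete(client);
    return 0;
}
```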
Organic computing Organic computing is computing that behaves and interacts with humans in an organic manner. The term "organic" describes the systems' behavior; it does not imply that they are constructed from organic materials. It is based on the insight that we will soon be surrounded by large collections of autonomous systems, which are equipped with sensors and actuators, are aware of their environment, communicate freely, and organize themselves in order to perform the actions and services that seem to be required. The goal is to construct such systems to be as robust, safe, flexible, and trustworthy as possible. In particular, a strong orientation towards human needs, as opposed to a pure implementation of the technologically possible, seems absolutely central. In order to achieve these goals, such technical systems will have to act more independently, flexibly, and autonomously, i.e. they will have to exhibit lifelike properties; such systems are called "organic". Hence, an "Organic Computing System" is a technical system which adapts dynamically to exogenous and endogenous change. It is characterized by the properties of self-organization, self-configuration, self-optimization, self-healing, self-protection, self-explaining, and context awareness. It can be seen as an extension of IBM's autonomic computing vision. Through a variety of research projects, the priority research program SPP 1183 of the German Research Foundation (DFG) addresses fundamental challenges in the design of Organic Computing systems; its objective is a deeper understanding of emergent global behavior in self-organizing systems and the design of specific concepts and tools to support the construction of Organic Computing systems for technical applications. See also Biologically inspired computing Autonomic computing References Müller-Schloer, Christian; v.d. Malsburg, Christoph and Würtz, Rolf P. Organic Computing. Aktuelles Schlagwort in Informatik Spektrum (2004) pp. 332–336. Müller-Schloer, Christian. Organic Computing – On the Feasibility of Controlled Emergence. CODES + ISSS 2004 Proceedings (2004) pp. 2–5, ACM Press. Rochner, Fabian and Müller-Schloer, Christian. Emergence in Technical Systems. it Special Issue on Organic Computing (2005) pp. 188–200, Oldenbourg Verlag, Jahrgang 47, ISSN 1611-2776. Schmeck, Hartmut. Organic Computing – A New Vision for Distributed Embedded Systems. Proceedings of the Eighth IEEE International Symposium on Object-Oriented Real-Time Distributed Computing (ISORC'05) (2005) pp. 201–203, IEEE Computer Society. Würtz, Rolf P. (Editor): Organic Computing (Understanding Complex Systems). Springer, 2008. External links DFG SPP 1183 Organic Computing Position Paper Organic Computing (German) Self-Organising Systems (SOS) FAQ The Organic Computing Page The PUPS/P3 Organic Computing Environment for Linux (Free Software) SeSAm Multiagent simulator and graphical modelling environment. (Free Software) Programming paradigms
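Organic Computing research commonly realizes these self-* properties with an observer/controller architecture layered over the productive system. The following is a minimal sketch under that assumption; the class names and the utilization heuristic are illustrative, not taken from any particular project.

```python
# Minimal observer/controller sketch: the observer monitors the
# productive system, the controller adapts its configuration when the
# observed behavior drifts from the goal (self-optimization).
class ProductiveSystem:
    """The system under observation and control."""
    def __init__(self):
        self.workers = 1

    def step(self, demand):
        return demand / self.workers        # observed utilization

class ObserverController:
    """Closes the self-adaptation loop around the productive system."""
    def __init__(self, target=1.0):
        self.target = target                # desired utilization

    def adapt(self, system, utilization):
        if utilization > self.target:
            system.workers += 1             # self-configuration: scale up
        elif utilization < self.target / 2 and system.workers > 1:
            system.workers -= 1             # scale down when underused

suoc, oc = ProductiveSystem(), ObserverController()
for demand in [1, 3, 5, 5, 2, 1]:
    u = suoc.step(demand)                   # observe
    oc.adapt(suoc, u)                       # control
    print(f"demand={demand} utilization={u:.2f} workers={suoc.workers}")
```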
Graphical Kernel System The Graphical Kernel System (GKS) was the first ISO standard for low-level computer graphics, introduced in 1977. A draft international standard was circulated for review in September 1983. Final ratification of the standard was achieved in 1985. Overview GKS provides a set of drawing features for two-dimensional vector graphics suitable for charting and similar duties. The calls are designed to be portable across different programming languages, graphics devices and hardware, so that applications written to use GKS will be readily portable to many platforms and devices. GKS was fairly common on computer workstations in the 1980s and early 1990s. GKS formed the basis of Digital Research's GSX and GEM products; the latter was common on the Atari ST and was occasionally seen on PCs, particularly in conjunction with Ventura Publisher. It was little used commercially outside these markets, but remains in use in some scientific visualization packages. It is also the underlying API defining the Computer Graphics Metafile. A descendant of GKS was PHIGS. One popular application based on an implementation of GKS is the GR Framework, a C library for high-performance scientific visualization that has become a common plotting backend among Julia users. A main developer and promoter of GKS was José Luis Encarnação, formerly director of the Fraunhofer Institute for Computer Graphics (IGD) in Darmstadt, Germany. GKS has been standardized in the following documents: the ANSI standard ANSI X3.124 of 1985; the ISO 7942:1985 standard, revised as ISO 7942:1985/Amd 1:1991 and ISO/IEC 7942-1:1994, as well as ISO/IEC 7942-2:1997, ISO/IEC 7942-3:1999 and ISO/IEC 7942-4:1998. The language bindings are ISO standard ISO 8651. GKS-3D (Graphical Kernel System for Three Dimensions) functional definition is ISO standard ISO 8805, and the corresponding C bindings are ISO 8806. The functionality of GKS is wrapped up as a data model standard in the STEP standard, section ISO 10303-46. See also General Graphics Interface GSS-KERNEL IGES (Initial Graphics Exchange Specification) NAPLPS References Further reading External links Unofficial source of current implementation information GKS at FOLDOC Computer graphics Application programming interfaces Graphics standards Graphical Kernel System
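The workstation model at the heart of GKS (open a workstation, activate it, then issue device-independent primitives such as polylines to every active workstation) can be sketched abstractly. The classes below are an illustrative mock of that model, not the real ISO 8806 C binding; all names are assumptions for exposition.

```python
# Illustrative mock of the GKS workstation model; class and method
# names are invented for exposition, not the ISO 8806 C binding.
class Workstation:
    def __init__(self, name):
        self.name, self.active = name, False

    def polyline(self, points):
        # a real device driver would map world coordinates to the device
        print(f"[{self.name}] polyline through {points}")

class GKS:
    def __init__(self):
        self.workstations = []

    def open_workstation(self, name):
        ws = Workstation(name)
        self.workstations.append(ws)
        return ws

    def activate(self, ws):
        ws.active = True

    def polyline(self, points):
        # primitives go to every active workstation, which is the
        # device independence the standard was designed around
        for ws in self.workstations:
            if ws.active:
                ws.polyline(points)

gks = GKS()
screen = gks.open_workstation("screen")
gks.activate(screen)
gks.polyline([(0.0, 0.0), (0.5, 0.8), (1.0, 0.0)])  # one chart stroke
```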
Central processing unit A central processing unit (CPU), also called a central processor, main processor or just processor, is the electronic circuitry that executes instructions comprising a computer program. The CPU performs basic arithmetic, logic, controlling, and input/output (I/O) operations specified by the instructions in the program. This contrasts with external components such as main memory and I/O circuitry, and specialized processors such as graphics processing units (GPUs). The form, design, and implementation of CPUs have changed over time, but their fundamental operation remains almost unchanged. Principal components of a CPU include the arithmetic–logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching (from memory), decoding and execution of instructions by directing the coordinated operations of the ALU, registers and other components. Most modern CPUs are implemented on integrated circuit (IC) microprocessors, with one or more CPUs on a single IC chip. Microprocessor chips with multiple CPUs are multi-core processors. The individual physical CPUs, called processor cores, can also be multithreaded to create additional virtual or logical CPUs. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer; such integrated devices are variously called microcontrollers or systems on a chip (SoC). Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. Virtual CPUs are an abstraction of dynamically aggregated computational resources. History Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". The term "central processing unit" has been in use since as early as 1955. Since the term "CPU" is generally defined as a device for software (computer program) execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had already been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner. On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions (or operations) of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer; the Manchester Baby, a small-scale experimental stored-program computer, ran its first program on 21 June 1948 and the Manchester Mark 1 ran its first program during the night of 16–17 June 1949. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer.
However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has rapidly accelerated with the popularization of the integrated circuit (IC). The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both. Most modern CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications; for instance, the Atmel AVR microcontrollers are Harvard architecture processors. Relays and vacuum tubes (thermionic tubes) were commonly used as switching elements; a useful computer requires thousands or tens of thousands of switching devices. The overall speed of a system is dependent on the speed of the switches. Vacuum-tube computers such as EDVAC tended to average eight hours between failures, whereas relay computers like the (slower, but earlier) Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time, limited largely by the speed of the switching devices they were built with. Transistor CPUs The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices. The first such improvement came with the advent of the transistor. Transistorized CPUs during the 1950s and 1960s no longer had to be built out of bulky, unreliable and fragile switching elements like vacuum tubes and relays. With this improvement, more complex and reliable CPUs were built onto one or several printed circuit boards containing discrete (individual) components. In 1964, IBM introduced its IBM System/360 computer architecture that was used in a series of computers capable of running the same programs with different speed and performance. This was significant at a time when most electronic computers were incompatible with one another, even those made by the same manufacturer. To facilitate this improvement, IBM used the concept of a microprogram (often called "microcode"), which still sees widespread usage in modern CPUs. 
The System/360 architecture was so popular that it dominated the mainframe computer market for decades and left a legacy that is still continued by similar modern computers like the IBM zSeries. In 1965, Digital Equipment Corporation (DEC) introduced another influential computer aimed at the scientific and research markets, the PDP-8. Transistor-based computers had several distinct advantages over their predecessors. Aside from facilitating increased reliability and lower power consumption, transistors also allowed CPUs to operate at much higher speeds because of the short switching time of a transistor in comparison to a tube or relay. With the increased reliability and dramatically increased speed of the switching elements (which were almost exclusively transistors by this time), CPU clock rates in the tens of megahertz were easily obtained during this period. Additionally, while discrete transistor and IC CPUs were in heavy usage, new high-performance designs like single instruction, multiple data (SIMD) vector processors began to appear. These early experimental designs later gave rise to the era of specialized supercomputers like those made by Cray Inc. and Fujitsu Ltd. Small-scale integration CPUs During this period, a method of manufacturing many interconnected transistors in a compact space was developed. The integrated circuit (IC) allowed a large number of transistors to be manufactured on a single semiconductor-based die, or "chip". At first, only very basic non-specialized digital circuits such as NOR gates were miniaturized into ICs. CPUs based on these "building block" ICs are generally referred to as "small-scale integration" (SSI) devices. SSI ICs, such as the ones used in the Apollo Guidance Computer, usually contained up to a few dozen transistors. To build an entire CPU out of SSI ICs required thousands of individual chips, but still consumed much less space and power than earlier discrete transistor designs. IBM's System/370, follow-on to the System/360, used SSI ICs rather than Solid Logic Technology discrete-transistor modules. DEC's PDP-8/I and KI10 PDP-10 also switched from the individual transistors used by the PDP-8 and PDP-10 to SSI ICs, and their extremely popular PDP-11 line was originally built with SSI ICs but was eventually implemented with LSI components once these became practical. Large-scale integration CPUs Lee Boysel published influential articles, including a 1967 "manifesto", which described how to build the equivalent of a 32-bit mainframe computer from a relatively small number of large-scale integration circuits (LSI). The only way to build LSI chips, which are chips with a hundred or more gates, was to build them using a metal–oxide–semiconductor (MOS) semiconductor manufacturing process (either PMOS logic, NMOS logic, or CMOS logic). However, some companies continued to build processors out of bipolar transistor–transistor logic (TTL) chips because bipolar junction transistors were faster than MOS chips up until the 1970s (a few companies such as Datapoint continued to build processors out of TTL chips until the early 1980s). In the 1960s, MOS ICs were slower and initially considered useful only in applications that required low power. Following the development of silicon-gate MOS technology by Federico Faggin at Fairchild Semiconductor in 1968, MOS ICs largely replaced bipolar TTL as the standard chip technology in the early 1970s.
As the microelectronic technology advanced, an increasing number of transistors were placed on ICs, decreasing the number of individual ICs needed for a complete CPU. MSI and LSI ICs increased transistor counts to hundreds, and then thousands. By 1968, the number of ICs required to build a complete CPU had been reduced to 24 ICs of eight different types, with each IC containing roughly 1000 MOSFETs. In stark contrast with its SSI and MSI predecessors, the first LSI implementation of the PDP-11 contained a CPU composed of only four LSI integrated circuits. Microprocessors Since the introduction of the first commercially available microprocessor, the Intel 4004 in 1971, and the first widely used microprocessor, the Intel 8080 in 1974, this class of CPUs has almost completely overtaken all other central processing unit implementation methods. Mainframe and minicomputer manufacturers of the time launched proprietary IC development programs to upgrade their older computer architectures, and eventually produced instruction set compatible microprocessors that were backward-compatible with their older hardware and software. Combined with the advent and eventual success of the ubiquitous personal computer, the term CPU is now applied almost exclusively to microprocessors. Several CPUs (denoted cores) can be combined in a single processing chip. Previous generations of CPUs were implemented as discrete components and numerous small integrated circuits (ICs) on one or more circuit boards. Microprocessors, on the other hand, are CPUs manufactured on a very small number of ICs; usually just one. The overall smaller CPU size, as a result of being implemented on a single die, means faster switching time because of physical factors like decreased gate parasitic capacitance. This has allowed synchronous microprocessors to have clock rates ranging from tens of megahertz to several gigahertz. Additionally, the ability to construct exceedingly small transistors on an IC has increased the complexity and number of transistors in a single CPU many fold. This widely observed trend is described by Moore's law, which had proven to be a fairly accurate predictor of the growth of CPU (and other IC) complexity until 2016. While the complexity, size, construction and general form of CPUs have changed enormously since 1950, the basic design and function have not changed much at all. Almost all common CPUs today can be very accurately described as von Neumann stored-program machines. As Moore's law no longer holds, concerns have arisen about the limits of integrated circuit transistor technology. Extreme miniaturization of electronic gates is causing the effects of phenomena like electromigration and subthreshold leakage to become much more significant. These newer concerns are among the many factors causing researchers to investigate new methods of computing such as the quantum computer, as well as to expand the usage of parallelism and other methods that extend the usefulness of the classical von Neumann model. Operation The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions that is called a program. The instructions to be executed are kept in some kind of computer memory. Nearly all CPUs follow the fetch, decode and execute steps in their operation, which are collectively known as the instruction cycle.
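The cycle just named can be made concrete with a toy stored-program machine. This is a minimal sketch with an invented three-instruction ISA, not a model of any real CPU:

```python
# Toy stored-program machine illustrating fetch/decode/execute.
# The tiny ISA (LOAD, ADD, JNZ, HALT) is invented for illustration.
memory = [
    ("LOAD", 5),    # acc <- 5
    ("ADD", -1),    # acc <- acc + (-1)
    ("JNZ", 1),     # if acc != 0, jump back to address 1
    ("HALT", 0),
]
pc, acc = 0, 0                     # program counter and accumulator
while True:
    opcode, operand = memory[pc]   # fetch: read the instruction at PC
    pc += 1                        # PC now points at the next instruction
    # decode + execute: select the behavior the opcode names
    if opcode == "LOAD":
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "JNZ":          # a jump rewrites the program counter
        if acc != 0:
            pc = operand
    elif opcode == "HALT":
        break
print(acc)                         # 0 after the countdown loop finishes
```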
After the execution of an instruction, the entire process repeats, with the next instruction cycle normally fetching the next-in-sequence instruction because of the incremented value in the program counter. If a jump instruction was executed, the program counter will be modified to contain the address of the instruction that was jumped to and program execution continues normally. In more complex CPUs, multiple instructions can be fetched, decoded and executed simultaneously. This section describes what is generally referred to as the "classic RISC pipeline", which is quite common among the simple CPUs used in many electronic devices (often called microcontrollers). It largely ignores the important role of CPU cache, and therefore the access stage of the pipeline. Some instructions manipulate the program counter rather than producing result data directly; such instructions are generally called "jumps" and facilitate program behavior like loops, conditional program execution (through the use of a conditional jump), and existence of functions. In some processors, some other instructions change the state of bits in a "flags" register. These flags can be used to influence how a program behaves, since they often indicate the outcome of various operations. For example, in such processors a "compare" instruction evaluates two values and sets or clears bits in the flags register to indicate which one is greater or whether they are equal; one of these flags could then be used by a later jump instruction to determine program flow. Fetch The first step, fetch, involves retrieving an instruction (which is represented by a number or sequence of numbers) from program memory. The instruction's location (address) in program memory is determined by the program counter (PC; called the "instruction pointer" in Intel x86 microprocessors), which stores a number that identifies the address of the next instruction to be fetched. After an instruction is fetched, the PC is incremented by the length of the instruction so that it will contain the address of the next instruction in the sequence. Often, the instruction to be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting for the instruction to be returned. This issue is largely addressed in modern processors by caches and pipeline architectures (see below). Decode The instruction that the CPU fetches from memory determines what the CPU will do. In the decode step, performed by binary decoder circuitry known as the instruction decoder, the instruction is converted into signals that control other parts of the CPU. The way in which the instruction is interpreted is defined by the CPU's instruction set architecture (ISA). Often, one group of bits (that is, a "field") within the instruction, called the opcode, indicates which operation is to be performed, while the remaining fields usually provide supplemental information required for the operation, such as the operands. Those operands may be specified as a constant value (called an immediate value), or as the location of a value that may be a processor register or a memory address, as determined by some addressing mode. In some CPU designs the instruction decoder is implemented as a hardwired, unchangeable binary decoder circuit. In others, a microprogram is used to translate instructions into sets of CPU configuration signals that are applied sequentially over multiple clock pulses. 
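The field extraction performed during decode amounts to masking and shifting. A minimal sketch, assuming an invented 16-bit encoding with a 4-bit opcode and two 6-bit operand fields (the layout is purely illustrative):

```python
# Decoding an invented 16-bit instruction word: 4-bit opcode in the
# high bits, then two 6-bit operand fields. The layout is illustrative.
def decode(word):
    opcode = (word >> 12) & 0xF     # bits 15..12
    op_a   = (word >> 6)  & 0x3F    # bits 11..6
    op_b   = word         & 0x3F    # bits 5..0
    return opcode, op_a, op_b

word = (0b0010 << 12) | (3 << 6) | 17   # "opcode 2, operands 3 and 17"
print(decode(word))                     # (2, 3, 17)
```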
In some cases the memory that stores the microprogram is rewritable, making it possible to change the way in which the CPU decodes instructions. Execute After the fetch and decode steps, the execute step is performed. Depending on the CPU architecture, this may consist of a single action or a sequence of actions. During each action, control signals electrically enable or disable various parts of the CPU so they can perform all or part of the desired operation. The action is then completed, typically in response to a clock pulse. Very often the results are written to an internal CPU register for quick access by subsequent instructions. In other cases results may be written to slower, but less expensive and higher-capacity main memory. For example, if an addition instruction is to be executed, registers containing operands (numbers to be summed) are activated, as are the parts of the arithmetic logic unit (ALU) that perform addition. When the clock pulse occurs, the operands flow from the source registers into the ALU, and the sum appears at its output. On subsequent clock pulses, other components are enabled (and disabled) to move the output (the sum of the operation) to storage (e.g., a register or memory). If the resulting sum is too large (i.e., it is larger than the ALU's output word size), an arithmetic overflow flag will be set, influencing the next operation. Structure and implementation Hardwired into a CPU's circuitry is a set of basic operations it can perform, called an instruction set. Such operations may involve, for example, adding or subtracting two numbers, comparing two numbers, or jumping to a different part of a program. Each instruction is represented by a unique combination of bits, known as the machine language opcode. While processing an instruction, the CPU decodes the opcode (via a binary decoder) into control signals, which orchestrate the behavior of the CPU. A complete machine language instruction consists of an opcode and, in many cases, additional bits that specify arguments for the operation (for example, the numbers to be summed in the case of an addition operation). Going up the complexity scale, a machine language program is a collection of machine language instructions that the CPU executes. The actual mathematical operation for each instruction is performed by a combinational logic circuit within the CPU known as the arithmetic–logic unit or ALU. In general, a CPU executes an instruction by fetching it from memory, using its ALU to perform an operation, and then storing the result to memory. Besides the instructions for integer mathematics and logic operations, various other machine instructions exist, such as those for loading data from memory and storing it back, branching operations, and mathematical operations on floating-point numbers performed by the CPU's floating-point unit (FPU). Control unit The control unit (CU) is a component of the CPU that directs the operation of the processor. It tells the computer's memory, arithmetic and logic unit and input and output devices how to respond to the instructions that have been sent to the processor. It directs the operation of the other units by providing timing and control signals. Most computer resources are managed by the CU. It directs the flow of data between the CPU and the other devices. John von Neumann included the control unit as part of the von Neumann architecture.
In modern computer designs, the control unit is typically an internal part of the CPU with its overall role and operation unchanged since its introduction. Arithmetic logic unit The arithmetic logic unit (ALU) is a digital circuit within the processor that performs integer arithmetic and bitwise logic operations. The inputs to the ALU are the data words to be operated on (called operands), status information from previous operations, and a code from the control unit indicating which operation to perform. Depending on the instruction being executed, the operands may come from internal CPU registers or external memory, or they may be constants generated by the ALU itself. When all input signals have settled and propagated through the ALU circuitry, the result of the performed operation appears at the ALU's outputs. The result consists of both a data word, which may be stored in a register or memory, and status information that is typically stored in a special, internal CPU register reserved for this purpose. Address generation unit Address generation unit (AGU), sometimes also called address computation unit (ACU), is an execution unit inside the CPU that calculates addresses used by the CPU to access main memory. By having address calculations handled by separate circuitry that operates in parallel with the rest of the CPU, the number of CPU cycles required for executing various machine instructions can be reduced, bringing performance improvements. While performing various operations, CPUs need to calculate memory addresses required for fetching data from the memory; for example, in-memory positions of array elements must be calculated before the CPU can fetch the data from actual memory locations. Those address-generation calculations involve different integer arithmetic operations, such as addition, subtraction, modulo operations, or bit shifts. Often, calculating a memory address involves more than one general-purpose machine instruction, which do not necessarily decode and execute quickly. By incorporating an AGU into a CPU design, together with introducing specialized instructions that use the AGU, various address-generation calculations can be offloaded from the rest of the CPU, and can often be executed quickly in a single CPU cycle. Capabilities of an AGU depend on a particular CPU and its architecture. Thus, some AGUs implement and expose more address-calculation operations, while some also include more advanced specialized instructions that can operate on multiple operands at a time. Furthermore, some CPU architectures include multiple AGUs so more than one address-calculation operation can be executed simultaneously, bringing further performance improvements by capitalizing on the superscalar nature of advanced CPU designs. For example, Intel incorporates multiple AGUs into its Sandy Bridge and Haswell microarchitectures, which increase bandwidth of the CPU memory subsystem by allowing multiple memory-access instructions to be executed in parallel. Memory management unit (MMU) Many microprocessors (in smartphones and desktop, laptop, server computers) have a memory management unit, translating logical addresses into physical RAM addresses, providing memory protection and paging abilities, useful for virtual memory. Simpler processors, especially microcontrollers, usually don't include an MMU. Cache A CPU cache is a hardware cache used by the central processing unit (CPU) of a computer to reduce the average cost (time or energy) to access data from the main memory. 
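The "average cost" that a cache reduces is commonly quantified with the average memory access time (AMAT) model; the figures below are illustrative assumptions, not measurements of any real CPU.

```python
# Average memory access time: AMAT = hit_time + miss_rate * miss_penalty.
# All figures are illustrative assumptions.
hit_time     = 1      # cycles to hit in L1
miss_rate    = 0.05   # 5% of accesses miss L1
miss_penalty = 100    # cycles to fetch from main memory

amat = hit_time + miss_rate * miss_penalty
print(amat)  # 6.0 cycles on average, versus 100 with no cache at all
```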
A cache is a smaller, faster memory, closer to a processor core, which stores copies of the data from frequently used main memory locations. Most CPUs have different independent caches, including instruction and data caches, where the data cache is usually organized as a hierarchy of more cache levels (L1, L2, L3, L4, etc.). All modern (fast) CPUs (with few specialized exceptions) have multiple levels of CPU caches. The first CPUs that used a cache had only one level of cache; unlike later level 1 caches, it was not split into L1d (for data) and L1i (for instructions). Almost all current CPUs with caches have a split L1 cache. They also have L2 caches and, for larger processors, L3 caches as well. The L2 cache is usually not split and acts as a common repository for the already split L1 cache. Every core of a multi-core processor has a dedicated L2 cache, which is usually not shared between the cores. The L3 cache, and higher-level caches, are shared between the cores and are not split. An L4 cache is currently uncommon, and is generally on dynamic random-access memory (DRAM), rather than on static random-access memory (SRAM), on a separate die or chip. That was also the case historically with L1, while bigger chips have allowed integration of it and generally all cache levels, with the possible exception of the last level. Each extra level of cache tends to be bigger and be optimized differently. Other types of caches exist (that are not counted towards the "cache size" of the most important caches mentioned above), such as the translation lookaside buffer (TLB) that is part of the memory management unit (MMU) that most CPUs have. Caches are generally sized in powers of two: 4, 8, 16, etc. KiB or MiB (for larger non-L1) sizes, although the IBM z13 has a 96 KiB L1 instruction cache. Clock rate Most CPUs are synchronous circuits, which means they employ a clock signal to pace their sequential operations. The clock signal is produced by an external oscillator circuit that generates a consistent number of pulses each second in the form of a periodic square wave. The frequency of the clock pulses determines the rate at which a CPU executes instructions and, consequently, the faster the clock, the more instructions the CPU will execute each second. To ensure proper operation of the CPU, the clock period is longer than the maximum time needed for all signals to propagate (move) through the CPU. By setting the clock period to a value well above the worst-case propagation delay, it is possible to design the entire CPU and the way it moves data around the "edges" of the rising and falling clock signal. This has the advantage of simplifying the CPU significantly, both from a design perspective and a component-count perspective. However, it also carries the disadvantage that the entire CPU must wait on its slowest elements, even though some portions of it are much faster. This limitation has largely been compensated for by various methods of increasing CPU parallelism (see below). However, architectural improvements alone do not solve all of the drawbacks of globally synchronous CPUs. For example, a clock signal is subject to the delays of any other electrical signal. Higher clock rates in increasingly complex CPUs make it more difficult to keep the clock signal in phase (synchronized) throughout the entire unit. This has led many modern CPUs to require multiple identical clock signals to be provided to avoid delaying a single signal significantly enough to cause the CPU to malfunction.
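The link between worst-case propagation delay and attainable clock rate is simple arithmetic; the delay figure below is an illustrative assumption.

```python
# The clock period must exceed the worst-case signal propagation delay,
# so f_max = 1 / t_worst_case. The delay value is illustrative.
t_worst_case = 250e-12             # 250 ps through the slowest path
f_max = 1 / t_worst_case
print(f"{f_max / 1e9:.1f} GHz")    # 4.0 GHz ceiling for this design
# A safety margin puts the shipped clock below the theoretical maximum.
print(f"{0.8 * f_max / 1e9:.1f} GHz with a 20% margin")
```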
Another major issue, as clock rates increase dramatically, is the amount of heat that is dissipated by the CPU. The constantly changing clock causes many components to switch regardless of whether they are being used at that time. In general, a component that is switching uses more energy than an element in a static state. Therefore, as clock rate increases, so does energy consumption, causing the CPU to require more heat dissipation in the form of CPU cooling solutions. One method of dealing with the switching of unneeded components is called clock gating, which involves turning off the clock signal to unneeded components (effectively disabling them). However, this is often regarded as difficult to implement and therefore does not see common usage outside of very low-power designs. One notable recent CPU design that uses extensive clock gating is the IBM PowerPC-based Xenon used in the Xbox 360; that way, power requirements of the Xbox 360 are greatly reduced. Clockless CPUs Another method of addressing some of the problems with a global clock signal is the removal of the clock signal altogether. While removing the global clock signal makes the design process considerably more complex in many ways, asynchronous (or clockless) designs carry marked advantages in power consumption and heat dissipation in comparison with similar synchronous designs. While somewhat uncommon, entire asynchronous CPUs have been built without using a global clock signal. Two notable examples of this are the ARM-compliant AMULET and the MIPS R3000-compatible MiniMIPS. Rather than totally removing the clock signal, some CPU designs allow certain portions of the device to be asynchronous, such as using asynchronous ALUs in conjunction with superscalar pipelining to achieve some arithmetic performance gains. While it is not altogether clear whether totally asynchronous designs can perform at a comparable or better level than their synchronous counterparts, it is evident that they do at least excel in simpler math operations. This, combined with their excellent power consumption and heat dissipation properties, makes them very suitable for embedded computers. Voltage regulator module Many modern CPUs have a die-integrated power-managing module which regulates on-demand voltage supply to the CPU circuitry, allowing it to keep a balance between performance and power consumption. Integer range Every CPU represents numerical values in a specific way. For example, some early digital computers represented numbers as familiar decimal (base 10) numeral system values, and others have employed more unusual representations such as ternary (base three). Nearly all modern CPUs represent numbers in binary form, with each digit being represented by some two-valued physical quantity such as a "high" or "low" voltage. Related to numeric representation is the size and precision of integer numbers that a CPU can represent. In the case of a binary CPU, this is measured by the number of bits (significant digits of a binary encoded integer) that the CPU can process in one operation, which is commonly called word size, bit width, data path width, integer precision, or integer size. A CPU's integer size determines the range of integer values it can directly operate on. For example, an 8-bit CPU can directly manipulate integers represented by eight bits, which have a range of 256 (2^8) discrete integer values.
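Word size translates directly into representable ranges, and, as noted next, into addressable memory; a quick arithmetic sketch:

```python
# Ranges implied by word size: an n-bit word distinguishes 2**n values.
for bits in (8, 16, 32, 64):
    print(f"{bits}-bit: {2**bits} values, "
          f"unsigned 0..{2**bits - 1}, "
          f"two's complement {-(2**(bits - 1))}..{2**(bits - 1) - 1}")
# A 32-bit address likewise selects one of 2**32 memory locations (4 GiB).
print(2**32)
```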
Integer range can also affect the number of memory locations the CPU can directly address (an address is an integer value representing a specific memory location). For example, if a binary CPU uses 32 bits to represent a memory address, then it can directly address 2^32 memory locations. To circumvent this limitation and for various other reasons, some CPUs use mechanisms (such as bank switching) that allow additional memory to be addressed. CPUs with larger word sizes require more circuitry and consequently are physically larger, cost more and consume more power (and therefore generate more heat). As a result, smaller 4- or 8-bit microcontrollers are commonly used in modern applications even though CPUs with much larger word sizes (such as 16, 32, 64, even 128-bit) are available. When higher performance is required, however, the benefits of a larger word size (larger data ranges and address spaces) may outweigh the disadvantages. A CPU can have internal data paths shorter than the word size to reduce size and cost. For example, even though the IBM System/360 instruction set was a 32-bit instruction set, the System/360 Model 30 and Model 40 had 8-bit data paths in the arithmetic logical unit, so that a 32-bit add required four cycles, one for each 8 bits of the operands; similarly, even though the Motorola 68000 series instruction set was a 32-bit instruction set, the Motorola 68000 and Motorola 68010 had 16-bit data paths in the arithmetic logical unit, so that a 32-bit add required two cycles. To gain some of the advantages afforded by both lower and higher bit lengths, many instruction sets have different bit widths for integer and floating-point data, allowing CPUs implementing that instruction set to have different bit widths for different portions of the device. For example, the IBM System/360 instruction set was primarily 32 bit, but supported 64-bit floating-point values to facilitate greater accuracy and range in floating-point numbers. The System/360 Model 65 had an 8-bit adder for decimal and fixed-point binary arithmetic and a 60-bit adder for floating-point arithmetic. Many later CPU designs use similar mixed bit width, especially when the processor is meant for general-purpose usage where a reasonable balance of integer and floating-point capability is required. Parallelism The description of the basic operation of a CPU offered in the previous section describes the simplest form that a CPU can take. This type of CPU, usually referred to as subscalar, operates on and executes one instruction on one or two pieces of data at a time, that is, less than one instruction per clock cycle (IPC < 1). This process gives rise to an inherent inefficiency in subscalar CPUs. Since only one instruction is executed at a time, the entire CPU must wait for that instruction to complete before proceeding to the next instruction. As a result, the subscalar CPU gets "hung up" on instructions which take more than one clock cycle to complete execution. Even adding a second execution unit (see below) does not improve performance much; rather than one pathway being hung up, now two pathways are hung up and the number of unused transistors is increased. This design, wherein the CPU's execution resources can operate on only one instruction at a time, can only possibly reach scalar performance (one instruction per clock cycle, IPC = 1). However, the performance is nearly always subscalar (less than one instruction per clock cycle, IPC < 1).
Attempts to achieve scalar and better performance have resulted in a variety of design methodologies that cause the CPU to behave less linearly and more in parallel. When referring to parallelism in CPUs, two terms are generally used to classify these design techniques: instruction-level parallelism (ILP), which seeks to increase the rate at which instructions are executed within a CPU (that is, to increase the use of on-die execution resources); and task-level parallelism (TLP), which aims to increase the number of threads or processes that a CPU can execute simultaneously. Each methodology differs both in the ways in which they are implemented, as well as the relative effectiveness they afford in increasing the CPU's performance for an application. Instruction-level parallelism One of the simplest methods for increased parallelism is to begin the first steps of instruction fetching and decoding before the prior instruction finishes executing. This is a technique known as instruction pipelining, and is used in almost all modern general-purpose CPUs. Pipelining allows multiple instructions to be executed at a time by breaking the execution pathway into discrete stages. This separation can be compared to an assembly line, in which an instruction is made more complete at each stage until it exits the execution pipeline and is retired. Pipelining does, however, introduce the possibility for a situation where the result of the previous operation is needed to complete the next operation, a condition often termed a data dependency conflict. Therefore, pipelined processors must check for these sorts of conditions and delay a portion of the pipeline if necessary. A pipelined processor can become very nearly scalar, inhibited only by pipeline stalls (an instruction spending more than one clock cycle in a stage). Improvements in instruction pipelining led to further decreases in the idle time of CPU components. Designs that are said to be superscalar include a long instruction pipeline and multiple identical execution units, such as load–store units, arithmetic–logic units, floating-point units and address generation units. In a superscalar pipeline, instructions are read and passed to a dispatcher, which decides whether or not the instructions can be executed in parallel (simultaneously). If so, they are dispatched to execution units, resulting in their simultaneous execution. In general, the number of instructions that a superscalar CPU will complete in a cycle is dependent on the number of instructions it is able to dispatch simultaneously to execution units. Most of the difficulty in the design of a superscalar CPU architecture lies in creating an effective dispatcher. The dispatcher needs to be able to quickly determine whether instructions can be executed in parallel, as well as dispatch them in such a way as to keep as many execution units busy as possible. This requires that the instruction pipeline is filled as often as possible and requires significant amounts of CPU cache. It also makes hazard-avoiding techniques like branch prediction, speculative execution, register renaming, out-of-order execution and transactional memory crucial to maintaining high levels of performance. By attempting to predict which branch (or path) a conditional instruction will take, the CPU can minimize the number of times that the entire pipeline must wait until a conditional instruction is completed.
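The assembly-line overlap that pipelining creates can be tabulated for the classic five-stage pipeline mentioned earlier; this toy sketch assumes no stalls or hazards.

```python
# Cycle-by-cycle occupancy of a classic 5-stage pipeline: with no
# stalls, one instruction completes per cycle once the pipeline fills.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]    # fetch .. write-back
instructions = ["i1", "i2", "i3", "i4"]

depth = len(STAGES)
total_cycles = depth + len(instructions) - 1
for cycle in range(total_cycles):
    row = []
    for i, name in enumerate(instructions):
        stage = cycle - i                    # instruction i enters at cycle i
        row.append(f"{name}:{STAGES[stage]}" if 0 <= stage < depth else "      ")
    print(f"cycle {cycle + 1}: " + "  ".join(row))
# 4 instructions finish in 8 cycles instead of 4 * 5 = 20 unpipelined.
```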
Speculative execution often provides modest performance increases by executing portions of code that may not be needed after a conditional operation completes. Out-of-order execution somewhat rearranges the order in which instructions are executed to reduce delays due to data dependencies. Also, in the case of single instruction stream, multiple data stream (a case when a lot of data of the same type has to be processed), modern processors can disable parts of the pipeline so that when a single instruction is executed many times, the CPU skips the fetch and decode phases and thus greatly increases performance on certain occasions, especially in highly monotonous program engines such as video creation software and photo processing. In the case where just a portion of the CPU is superscalar, the part which is not suffers a performance penalty due to scheduling stalls. The Intel P5 Pentium had two superscalar ALUs which could accept one instruction per clock cycle each, but its FPU could not. Thus the P5 was integer superscalar but not floating-point superscalar. Intel's successor to the P5 architecture, P6, added superscalar abilities to its floating-point features. Simple pipelining and superscalar design increase a CPU's ILP by allowing it to execute instructions at rates surpassing one instruction per clock cycle. Most modern CPU designs are at least somewhat superscalar, and nearly all general-purpose CPUs designed in the last decade are superscalar. In later years some of the emphasis in designing high-ILP computers has been moved out of the CPU's hardware and into its software interface, or instruction set architecture (ISA). The strategy of the very long instruction word (VLIW) causes some ILP to become implied directly by the software, reducing the CPU's work in boosting ILP and thereby reducing design complexity. Task-level parallelism Another strategy of achieving performance is to execute multiple threads or processes in parallel. This area of research is known as parallel computing. In Flynn's taxonomy, this strategy is known as multiple instruction stream, multiple data stream (MIMD). One technology used for this purpose was multiprocessing (MP). The initial flavor of this technology is known as symmetric multiprocessing (SMP), where a small number of CPUs share a coherent view of their memory system. In this scheme, each CPU has additional hardware to maintain a constantly up-to-date view of memory. By avoiding stale views of memory, the CPUs can cooperate on the same program and programs can migrate from one CPU to another. To increase the number of cooperating CPUs beyond a handful, schemes such as non-uniform memory access (NUMA) and directory-based coherence protocols were introduced in the 1990s. SMP systems are limited to a small number of CPUs while NUMA systems have been built with thousands of processors. Initially, multiprocessing was built using multiple discrete CPUs and boards to implement the interconnect between the processors. When the processors and their interconnect are all implemented on a single chip, the technology is known as chip-level multiprocessing (CMP) and the single chip as a multi-core processor. It was later recognized that finer-grain parallelism existed within a single program. A single program might have several threads (or functions) that could be executed separately or in parallel. Some of the earliest examples of this technology implemented input/output processing such as direct memory access as a separate thread from the computation thread.
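Task-level parallelism as software sees it can be sketched with independent tasks dispatched to one worker per core. This sketch uses Python's standard library; process-based workers are used so the tasks genuinely execute on separate CPUs.

```python
# Task-level parallelism: independent tasks dispatched to separate
# processes, one per core, so they can execute simultaneously.
from concurrent.futures import ProcessPoolExecutor

def task(n):
    return sum(i * i for i in range(n))     # an independent unit of work

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:     # sized to the CPU count by default
        results = list(pool.map(task, [10_000, 20_000, 30_000, 40_000]))
    print(results)
```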
A more general approach to this technology was introduced in the 1970s when systems were designed to run multiple computation threads in parallel. This technology is known as multi-threading (MT). This approach is considered more cost-effective than multiprocessing, as only a small number of components within a CPU are replicated to support MT, as opposed to the entire CPU in the case of MP. In MT, the execution units and the memory system including the caches are shared among multiple threads. The downside of MT is that the hardware support for multithreading is more visible to software than that of MP and thus supervisor software like operating systems has to undergo larger changes to support MT. One type of MT that was implemented is known as temporal multithreading, where one thread is executed until it is stalled waiting for data to return from external memory. In this scheme, the CPU would then quickly context switch to another thread which is ready to run, with the switch often done in one CPU clock cycle, as in the UltraSPARC T1. Another type of MT is simultaneous multithreading, where instructions from multiple threads are executed in parallel within one CPU clock cycle. For several decades from the 1970s to early 2000s, the focus in designing high-performance general-purpose CPUs was largely on achieving high ILP through technologies such as pipelining, caches, superscalar execution, out-of-order execution, etc. This trend culminated in large, power-hungry CPUs such as the Intel Pentium 4. By the early 2000s, CPU designers were thwarted from achieving higher performance from ILP techniques due to the growing disparity between CPU operating frequencies and main memory operating frequencies as well as escalating CPU power dissipation owing to more esoteric ILP techniques. CPU designers then borrowed ideas from commercial computing markets such as transaction processing, where the aggregate performance of multiple programs, also known as throughput computing, was more important than the performance of a single thread or process. This reversal of emphasis is evidenced by the proliferation of dual- and multi-core processor designs and, notably, Intel's newer designs resembling its less superscalar P6 architecture. Late designs in several processor families exhibit CMP, including the x86-64 Opteron and Athlon 64 X2, the SPARC UltraSPARC T1, IBM POWER4 and POWER5, as well as several video game console CPUs like the Xbox 360's triple-core PowerPC design, and the PlayStation 3's 7-core Cell microprocessor. Data parallelism A less common but increasingly important paradigm of processors (and indeed, computing in general) deals with data parallelism. The processors discussed earlier are all referred to as some type of scalar device. As the name implies, vector processors deal with multiple pieces of data in the context of one instruction. This contrasts with scalar processors, which deal with one piece of data for every instruction. Using Flynn's taxonomy, these two schemes of dealing with data are generally referred to as single instruction stream, multiple data stream (SIMD) and single instruction stream, single data stream (SISD), respectively. The great utility in creating processors that deal with vectors of data lies in optimizing tasks that tend to require the same operation (for example, a sum or a dot product) to be performed on a large set of data.
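The payoff of the vector approach is visible even in a software sketch: one "instruction" is applied across a whole group of lanes instead of one element at a time. The 4-wide lane grouping below is an illustrative assumption.

```python
# Scalar vs. SIMD-style dot product. The 4-wide "lane" grouping mimics
# how a vector unit applies one operation to several elements at once.
a = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
b = [8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]

# Scalar: one multiply-accumulate per instruction, per element.
acc = 0.0
for x, y in zip(a, b):
    acc += x * y

# SIMD-style: each 4-element slice stands in for one vector instruction.
LANES = 4
vacc = [0.0] * LANES
for i in range(0, len(a), LANES):
    vacc = [v + x * y for v, x, y in zip(vacc, a[i:i+LANES], b[i:i+LANES])]
total = sum(vacc)    # horizontal reduction at the end

print(acc, total)    # 120.0 120.0
```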
Some classic examples of these types of tasks include multimedia applications (images, video and sound), as well as many types of scientific and engineering tasks. Whereas a scalar processor must complete the entire process of fetching, decoding and executing each instruction and value in a set of data, a vector processor can perform a single operation on a comparatively large set of data with one instruction. This is only possible when the application tends to require many steps which apply one operation to a large set of data. Most early vector processors, such as the Cray-1, were associated almost exclusively with scientific research and cryptography applications. However, as multimedia has largely shifted to digital media, the need for some form of SIMD in general-purpose processors has become significant. Shortly after inclusion of floating-point units started to become commonplace in general-purpose processors, specifications for and implementations of SIMD execution units also began to appear for general-purpose processors. Some of these early SIMD specifications, like HP's Multimedia Acceleration eXtensions (MAX) and Intel's MMX, were integer-only. This proved to be a significant impediment for some software developers, since many of the applications that benefit from SIMD primarily deal with floating-point numbers. Progressively, developers refined and remade these early designs into some of the common modern SIMD specifications, which are usually associated with one instruction set architecture (ISA). Some notable modern examples include Intel's Streaming SIMD Extensions (SSE) and the PowerPC-related AltiVec (also known as VMX). Hardware performance counter Many modern architectures (including embedded ones) include hardware performance counters (HPC), which enable low-level (instruction-level) collection, benchmarking, debugging or analysis of running software metrics. HPC may also be used to discover and analyze unusual or suspicious activity of software, such as return-oriented programming (ROP) or sigreturn-oriented programming (SROP) exploits. This is usually done by software-security teams to assess and find malicious binary programs. Many major vendors (such as IBM, Intel, AMD, and ARM) provide software interfaces (usually written in C/C++) that can be used to collect data from CPU registers in order to get metrics. Operating system vendors also provide software like perf (Linux) to record, benchmark, or trace CPU events for running kernels and applications. Virtual CPUs Cloud computing can involve subdividing CPU operation into virtual central processing units (vCPUs). A host is the virtual equivalent of a physical machine, on which a virtual system is operating. When there are several physical machines operating in tandem and managed as a whole, the grouped computing and memory resources form a cluster. In some systems, it is possible to dynamically add resources to and remove them from a cluster. Resources available at a host and cluster level can be partitioned into resource pools with fine granularity. Performance The performance or speed of a processor depends on, among many other factors, the clock rate (generally given in multiples of hertz) and the instructions per clock (IPC), which together are the factors for the instructions per second (IPS) that the CPU can perform.
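That relationship is multiplicative, IPS = IPC × clock rate; the figures below are illustrative assumptions.

```python
# Instructions per second follow from clock rate and sustained IPC:
# IPS = IPC * clock_rate. All figures are illustrative assumptions.
clock_hz = 3.0e9    # 3 GHz
ipc      = 2.5      # sustained instructions per clock
ips      = ipc * clock_hz
print(f"{ips / 1e9:.1f} billion instructions per second")   # 7.5
```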
Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches, whereas realistic workloads consist of a mix of instructions and applications, some of which take longer to execute than others. The performance of the memory hierarchy also greatly affects processor performance, an issue barely considered in MIPS calculations. Because of these problems, various standardized tests, often called "benchmarks" for this purpose, such as SPECint, have been developed to attempt to measure the real effective performance in commonly used applications. Processing performance of computers is increased by using multi-core processors, which essentially means plugging two or more individual processors (called cores in this sense) into one integrated circuit. Ideally, a dual-core processor would be nearly twice as powerful as a single-core processor. In practice, the performance gain is far smaller, only about 50%, due to imperfect software algorithms and implementation. Increasing the number of cores in a processor (i.e. dual-core, quad-core, etc.) increases the workload that can be handled. This means that the processor can now handle numerous asynchronous events, interrupts, etc. which can take a toll on the CPU when overwhelmed. These cores can be thought of as different floors in a processing plant, with each floor handling a different task. Sometimes, these cores will handle the same tasks as cores adjacent to them if a single core is not enough to handle the information. Due to specific capabilities of modern CPUs, such as simultaneous multithreading and uncore, which involve sharing of actual CPU resources while aiming at increased utilization, monitoring performance levels and hardware use gradually became a more complex task. As a response, some CPUs implement additional hardware logic that monitors actual use of various parts of a CPU and provides various counters accessible to software; an example is Intel's Performance Counter Monitor technology. See also Addressing mode AMD Accelerated Processing Unit CISC Computer bus Computer engineering CPU core voltage CPU socket Digital signal processor GPU List of instruction sets Protection ring RISC Stream processing True Performance Index TPU Wait state Notes References External links 25 Microchips that shook the world – an article by the Institute of Electrical and Electronics Engineers. Digital electronics Electronic design Electronic design automation
IBM System Management Facilities IBM System Management Facility (SMF) is a component of IBM's z/OS for mainframe computers, providing a standardised method for writing out records of activity to a file (or data set, to use a z/OS term). SMF provides full "instrumentation" of all baseline activities running on that IBM mainframe operating system, including I/O, network activity, software usage, error conditions, processor utilization, etc. One of the most prominent components of z/OS that uses SMF is the IBM Resource Measurement Facility (RMF). RMF provides performance and usage instrumentation of resources such as processor, memory, disk, cache, workload, virtual storage, XCF and Coupling Facility. RMF is technically a priced (extra-cost) feature of z/OS. BMC sells a competing alternative, CMF. SMF forms the basis for many monitoring and automation utilities. Each SMF record has a numbered type (e.g. "SMF 120" or "SMF 89"), and installations have great control over how much or how little SMF data to collect. Records written by software other than IBM products generally have a record type of 128 or higher. Some record types have subtypes - for example, Type 70 Subtype 1 records are written by RMF to record CPU activity. SMF record types Here is a list of the most common SMF record types: RMF records are in the range 70 through 79. RMF's records are generally supplemented - for serious performance analysis - by Type 30 (subtypes 2 and 3) address space records. RACF type 80 records are written to record security issues, i.e. password violations, denied resource access attempts, etc. Top Secret, another security system, also writes type 80 records. ACF2 provides equivalent information in, by default, type 230 records, but this SMF record type can be changed at each installation site. SMF type 89 records indicate software product usage and are used to calculate reduced sub-capacity software pricing. DB2 writes type 100, 101 and 102 records, depending on specific DB2 subsystem options. CICS writes type 110 records, depending on specific CICS options. WebSphere MQ writes type 115 and 116 records, depending on specific WebSphere MQ subsystem options. WebSphere Application Server for z/OS writes type 120 records. Version 7 introduced a new subtype to overcome shortcomings in the earlier subtype records; the new Version 7 120 Subtype 9 record provides a unified request-based view with lower overhead. Evolving records The major record types, especially those created by RMF, continue to evolve at a rapid pace. Each release of z/OS brings new fields. Different processor families and Coupling Facility levels also change the data model. SMF data recording SMF can record data in two ways: The standard and classical way: using buffers in the SMF address space, together with a set of preallocated data sets (VSAM data sets) to write to when a buffer fills up. The standard name for the data sets is SYS1.MANx, where x is a numerical suffix (starting from 0). The relatively new way: using log streams. SMF uses System Logger to record collected data, which improves the writing rate and avoids buffer shortages. It has more flexibility, allowing the z/OS system to straightforwardly record to multiple log streams, and (using keywords on the dump program) allowing z/OS to read a set of SMF data once and write it many times. Both methods can be configured, but only one is used at a time, keeping the other available as a fallback.
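Once dumped to a flat file (the dump utilities are described next), SMF data is simply a stream of typed, variable-length records. The following is a minimal sketch of tallying record types, assuming binary records that each begin with the standard SMF header (2-byte length, 2-byte segment descriptor, 1-byte flag, 1-byte record type); spanned records and z/OS data set handling are deliberately ignored, and "smf.dump" is a hypothetical input file.

```python
# Tally SMF record types from a dumped flat file. The header layout is
# simplified for illustration; consult IBM's SMF Reference (see the
# external links) for the real record formats.
from collections import Counter
import struct

counts = Counter()
with open("smf.dump", "rb") as f:
    while True:
        header = f.read(6)
        if len(header) < 6:
            break
        length, _segment, _flag, rtype = struct.unpack(">HHBB", header)
        counts[rtype] += 1
        f.seek(length - 6, 1)      # skip the rest of this record

for rtype, n in counts.most_common():
    print(f"SMF {rtype}: {n} records")
```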
This data is then periodically dumped to sequential files (for example, tape drives) using the IFASMFDP SMF Dump Utility (or IFASMFDL when using log streams). IFASMFDP can also be used to split existing SMF sequential files and copy them to other files. The two dump programs produce the same output, so switching between them requires no changes to the SMF record processing chain other than updating the JCL to call the other dump utility. SMF data collection and analysis SMF data can be collected through IBM Z Operational Log and Data Analytics. IBM Z Operational Log and Data Analytics collects SMF data, transforms it into a consumable format, and then sends the data to third-party enterprise analytics platforms like the Elastic Stack and Splunk, or to the included operational data analysis platform, for further analysis. IBM Z Operational Log and Data Analytics collects SMF data in the following three ways: In log stream mode with the SMF in-memory buffer When SMF is run in log stream mode, IBM Z Operational Log and Data Analytics can be configured to collect SMF data from the SMF in-memory buffer with the SMF real-time interface. In data set recording mode When SMF is run in data set recording mode, IBM Z Operational Log and Data Analytics collects and streams SMF data via a set of SMF user exits. In batch mode The System Data Engine component of IBM Z Operational Log and Data Analytics can be run stand-alone in batch mode to read SMF data from a data set and then write it to a file. System Data Engine batch jobs can be created to write SMF data to data sets and to send SMF data to the Data Streamer. SMF data can be analyzed on the following analytics platforms: The Z Data Analytics platform, a component of IBM Z Operational Log and Data Analytics, and IntelliMagic Vision for z/OS, from IntelliMagic. These platforms can provide insights and recommended actions to the system owners, which are based on expert knowledge about z Systems and applications. Enterprise platforms such as Splunk, the Elastic Stack, Apache Kafka, or Humio that can receive and process operational data for analysis. Platforms like the Elastic Stack and Splunk do not include expert knowledge about z Systems and applications, but users can create or import their own analytics to run against the data. IBM Db2 Analytics Accelerator for z/OS, a database application that provides query-based reporting. External links IBM z/OS SMF Reference Performance Instrumentation Management Techniques wiki (requires "My developerWorks: Sign in" 20090224) CA ACF2 for z/OS - 15.0 & 16.0 Documentation CA Top Secret® for z/OS - 16.0 Documentation References IBM Redbooks. ABCs of z/OS System Programming Volume 2, International Technical Support Organization, July 2008. BMC CMF Monitor - http://www.bmc.com/products/proddocview/0,2832,19052_19429_23401_1365,00.html Examples of SMF Reports - http://www.pacsys.com/smf/smf_example_list.htm Streaming z/OS IT Operational Data with IBM Z Common Data Provider. Planet Mainframe. August 27, 2020. IBM Z Operational Log and Data Analytics documentation - https://www.ibm.com/docs/en/z-logdata-analytics/5.1.0?topic=overview-z-common-data-provider IBM Z Operational Log and Data Analytics Product Page - https://www.ibm.com/products/z-log-and-data-analytics System Management Facilities
Operating System (OS)
909
Taligent Taligent (a portmanteau of "talent" and "intelligent") was an American software company. Based on the Pink object-oriented operating system conceived by Apple in 1988, Taligent Inc. was incorporated as an Apple/IBM partnership in 1992, and was dissolved into IBM in 1998. In 1988, after launching System 6 and MultiFinder, Apple initiated the exploratory project named Pink to design the next generation of Mac OS. Though diverging into a sprawling new dream system unrelated to Mac OS, Pink was wildly successful within Apple and a subject of industry hype without. In 1992, the new AIM alliance spawned an Apple/IBM partnership corporation named Taligent Inc., with the purpose of bringing Pink to market. In 1994, Hewlett-Packard joined the partnership with a 15% stake. After a two-year series of goal-shifting delays, Taligent OS was eventually canceled, but the CommonPoint application framework was launched in 1995 for AIX with a later beta for OS/2. CommonPoint won technological acclaim but had an extremely steep learning curve, so sales were very low. Taligent OS and CommonPoint mirrored the sprawling scope of IBM's complementary Workplace OS, in redundantly overlapping attempts to become the ultimate universal system to unify all of the world's computers and operating systems with a single microkernel. From 1993 to 1996, Taligent was seen as competing with Microsoft Cairo and NeXTSTEP, even though Taligent didn't ship a product until 1995 and Cairo never shipped at all. From 1994 to 1996, Apple floated the Copland operating system project intended to succeed System 7, but never had a modern OS sophisticated enough to run Taligent technology. In 1995, Apple and HP withdrew from the Taligent partnership, licensed its technology, and left it as a wholly owned subsidiary of IBM. In January 1998, Taligent Inc. was finally dissolved into IBM. Taligent's legacy became the unbundling of CommonPoint's best compiler and application components, which were converted into VisualAge C++ and the globally adopted Java Development Kit 1.1 (especially its internationalization). In 1996, Apple instead bought NeXT and began synthesizing classic Mac OS onto the NeXTSTEP operating system. Mac OS X was launched on March 24, 2001 as the future of the Macintosh and eventually the iPhone. In the late 2010s, some of Apple's personnel and design concepts from Pink and from Purple (the first iPhone's codename) would resurface and blend into Google's Fuchsia operating system. Along with Workplace OS, Copland, and Cairo, Taligent is cited as a death march project of the 1990s, suffering from development hell as a result of feature creep and the second-system effect. History Development The entire history of Pink and Taligent from 1988 to 1998 is that of a widely admired, anticipated, and theoretically competitive staff and system, but it is also defined overall by development hell, second-system effect, empire building, secrecy, and vaporware. Pink team Apple's cofounders Steve Wozniak and Steve Jobs had departed the company in 1985. This vacuum of entrepreneurial leadership created a tendency to promote low-level engineers up to management and allowed increasingly redundant groups of engineers to compete and co-lead by consensus, and to manifest their own bottom-up corporate culture. In 1988, Apple released System 6, a major release of the flagship Macintosh operating system, to a lackluster reception.
The system's architectural limits, set forth by the tight hardware constraints of its original 1984 release, now demanded increasingly ingenious workarounds for incremental gains such as MultiFinder's cooperative multitasking, while still lacking memory protection and virtual memory. Having accomplished these engineering triumphs, which were often blunted within such a notoriously fragile operating system, a restless group of accomplished senior engineers was nicknamed the Gang of Five: Erich Ringewald, David Goldsmith, Bayles Holt, Gene Pope, and Gerard Schutten. The Gang gave an ultimatum that they should either be allowed to break from their disliked management and take the entrepreneurial and engineering risks needed to develop the next generation of the Macintosh operating system, or else leave the company. In March 1988, the Gang, their management, and software manager and future Taligent CTO Mike Potel met at the Sonoma Mission Inn and Spa. To roadmap the future of the operating system, and thus of the organizational chart, ideas were written on colored index cards and pinned to a wall. Ideas that were incremental updates to the existing system were written on blue colored cards, those that were more technologically advanced or long-term were written on pink cards, and yet more radical ideas were on red cards because they "would be pinker than Pink". The Blue group would receive the Gang's former management duo, along with incremental improvements in speed, RAM size, and hard drive size. Pink would receive the Gang, with Erich Ringewald as technical lead, plus preemptive multitasking and a componentized application design. Red would receive speech recognition and voice commands, thought to be as futuristic as the Star Trek science fiction series. Erich Ringewald led the Gang of Five as the new Pink group, located one floor below the Apple software headquarters in the De Anza 3 building, to begin a feasibility study with a goal of product launch in two years. Remembering the small but powerful original Macintosh group, he maintained secrecy and avoided the micromanagement of neighboring senior executives by immediately relocating his quintet off the main Apple campus. They used the nondescript Bubb Road warehouse, which was already occupied by the secretly sophisticated Newton project. Pink briefly garnered an additional code name, "Defiant". Pink system The Pink team was faced with the two possible architectural directions of either using legacy System 6 code or starting from scratch. Having just delivered the System 6 overhaul in the form of MultiFinder, Ringewald was adamant that Pink's intense ambitions were deliverable within a realistic two-year timeframe only if the team heavily improved its legacy compatibility code. He pragmatically warned them, "We're going to have enough trouble just reimplementing the Mac." In Apple's contentious corporate culture of consensus, this mandate was soon challenged: David Goldsmith resigned from Pink after making a counter-ultimatum for a complete redesign that would obviate all legacy problems, and other staff who agreed escalated their complaints to upper management. Months later, a senior executive finally overrode Ringewald, and Pink was redeveloped from scratch as a new and unique system with no System 6 legacy. The Pink team numbered eleven when the six-person kernel team within Apple's Advanced Technology Group (ATG) was merged into Pink to begin designing its new microkernel, named Opus.
Embellishing upon the pink index cards, Pink's overall key design goals were now total object-orientation, memory protection, preemptive multitasking, internationalization, and advanced graphics. Many ideas from the red cards would later be adopted. After its first two months, Pink had a staff of about 25. By October 1988, the Gang of Five had been reduced to Bayles Holt alone, after Gene Pope, Gerard Schutten, and Erich Ringewald exited the sprawling Pink. The former leader held "grave doubts" over the feasibility of this "living, breathing, money-consuming thing" which was "out of control". Meanwhile, the remaining group and the rest of Apple were enamored of Pink's world-changing vision and did not doubt it; employees clamored to join its staff, which exceeded 100 by April 1989. It was a flourishing project that drained personnel from various other departments. All groups outside of Blue became defensively secretive in a company-wide culture of empire-building. Pink's secretive and turf-warring culture didn't share source code or product demonstrations, even with the next-generation Jaguar workstation design group, until so ordered by CEO John Sculley, and only then under extreme security and monitoring. Throughout Apple, the project and the system were considered successful, but from April 1989 on into the 1990s, the running joke had always been and would always be, "When is Pink going to ship? Two years." In 1990, Pink became the Object Based Systems group, with Senior Vice President Ed Birss and a diverse staff of 150, including marketing and secretaries. Meanwhile, the hundreds of personnel in the Blue design group were constrained by the commercial pragmatism of maintaining their billion-dollar legacy operating system, which required them to refuse many new features and earned them the infamous nickname "Blue Meanies". This group had well established the evolution of System 6 into what would be released in 1991 as System 7. RAM chips and hard drives were extremely expensive, so most personal computers were critically resource-constrained, and System 7 would already barely fit onto existing Macintosh systems. Pink would therefore be hard-pressed to include backward compatibility for System 7 applications atop itself — assuming the team wanted to do so. This physical and economical constraint is a crucial aspect of the second-system effect. By late 1989, Pink was a functional prototype of a desktop operating system on Macintosh hardware, featuring advanced graphics and dynamic internationalized text. Pink engineer Dave said it was "a real OS that could demonstrate the core technology" far more deeply than System 6 could. In June 1990, Bill Bruffey abandoned the idea of Pink becoming a new Mac OS. He got permission to create yet another new microkernel, named NuKernel, intended explicitly for a new Mac OS. His team of six engineers worked a few months to demonstrate a microkernel-based Mac OS on a Macintosh IIci — which would years later become Copland and the proposed Mac OS 8. In the early 1990s, Pink's graphical user interface (GUI) was based on a faux-3D motif of isometric icons, beveled edges, non-rectangular windows, and drop shadows. One designer said, "The large UI team included interaction and visual designers, and usability specialists." That essential visual design language would be an influence for several years into Copland, Mac OS 8, and CommonPoint. Magazines throughout the early 1990s showed various mock-ups of what Pink could look like.
The People, Places, and Things metaphor extended beyond the traditional desktop metaphor and provided the user with GUI tools to easily drag documents between people and things, such as fax machines and printers. The component-based document model was similar to what would become OpenDoc. In mid-1991, Apple CEO John Sculley bragged that Apple had written 1.5 million lines of code for Pink. An IBM engineer described the first impression of this sophisticated prototype in 1991. AIM alliance On October 2, 1991, the historic AIM alliance was formed and announced by Apple, IBM, and Motorola. It was conceived to cross-pollinate Apple's personal products and IBM's enterprise products, to better confront Microsoft's monopoly, and to design a new grandly unified platform for the computing industry. This alliance spun off two partner corporations: Kaleida Labs to develop multimedia software, and Taligent Inc. to bring Pink to market sometime in the mid-90s. Pink was a massive draw for this alliance; Apple had initially been approached by two different parts of IBM. One IBM group sought customers for its new POWER CPU hardware, thereby discovering Pink and a new desire to port it to this hardware. The other IBM group sought third-party interest in its Grand Unifying Theory of Systems (GUTS) as the solution to the deeply endemic crisis that is software development, which would soon result in Workplace OS. In an April 12, 1991 demonstration of Pink and its architecture, IBM was profoundly impressed and its GUTS outline was immediately influenced. By 1993, IBM's ambitious global roadmap would include the unification of the diverse world of computing by converting Pink to become one of many personalities of Workplace OS, and the ending of the need to write new major applications by instead making smaller additions to Pink's generalized frameworks. Even before the signing of the alliance contract, the very existence of Pink was identified as a potential second-system threat, as its revolutionary aura could prompt customers to delay their adoption of OS/2. Taligent Inc. On March 2, 1992, Taligent Inc. was launched as the first product of the AIM alliance. Moving from a temporary lease at Apple headquarters to an office down the street in Cupertino, the company launched with 170 employees, most of whom had been re-hired directly from Apple, plus CEO Joe Guglielmi. At age 50, he was a 30-year marketing veteran of IBM and former leader of the OS/2 platform up to its soon-launched version 2.0. The company's mission was to bring Pink to market. Culture and purpose Enthusiastically dismissing industry skepticism, Guglielmi said Taligent would form its own corporate culture, independent of the established cultures and potential failures of its two founding investors and future customers, Apple and IBM. The two were recent allies carrying five other joint initiatives, and a deep rivalry of more than a decade. Dr. Dobb's Journal reflected, "It was fairly surreal for the Apple and IBM employees who went to Taligent and found themselves working for bosses still loyal to the opposition. Not a typical Silicon Valley career move, maybe, but perhaps a portent of other weird twists to come. Ignoring the politics as much as possible, the Taligent programmers buckled down and wrote a lotta lines of code." Commenting on the corporate culture shock of combining free-spirited Apple and formal IBM personnel, Fortune suggested that the company's cultural engineering challenge might exceed its software engineering challenge.
The open-minded but sensible CEO reined it in, saying, "I'm tired of [Apple] folklore ... I want some data." Comparing the eager startup Taligent to its billion-dollar investors, a leader at Kaleida said "The culture of IBM and Apple is largely about getting more benefits, perks, larger offices, fancier computers, and more employees". Apple and IBM did share a progressive culture of object orientation, as seen in their deep portfolios since the early 1980s. IBM had delivered objects on System/38 and AS/400, partnered with Patriot Partners, and integrated System Object Model (SOM) and Distributed SOM into OS/2 and AIX. Apple had already delivered Lisa, prototyped the fully object-oriented Pink operating system, and delivered object-oriented frameworks using MacApp. Both companies had worked with Smalltalk. Within one month of its founding, there was immediate industry-wide confusion about Taligent's purpose and scope. An industry analyst said "IBM and Apple blew it ... they should have announced everything [about Taligent] or nothing." Especially regarding Taligent's potential relationship to the Macintosh, Apple reiterated that its existing flagship legacy would continue indefinitely with System 7 and Macintosh hardware. COO Michael Spindler said "The Mac is not dead" and others said that they had never claimed that Pink would supersede the Macintosh. Charles Oppenheimer, Director of Marketing for Macintosh system software, said "We can't say for sure how [the two] will fit together." The industry was further confused as to the very existence of any Taligent software, not realizing that it was already beyond the concept stage and in fact consisted of volumes of Pink-based software in development by Apple for years. One year later, in February 1993, Wired magazine would assert its suspicion that Apple and IBM's core messengers were maintaining "the big lie"—that Taligent's technology was merely a concept, had no existing software, and was actually years away from production—in order to protect their established multi-billion-dollar core legacy of Macintosh and OS/2 products from a potentially superior replacement and to deflect the second-system effect. Upon its launch, CEO Joe Guglielmi soon organized the company into three divisions: a native system group for its self-hosted Pink OS, a development tools group, and a complementary products group for application frameworks to be ported to other OSes. Taligent spent much of its first two years developing its operating system and simultaneously trying to find a market for it. The company started a large project surveying potential customers, only to find little interest in a new OS. It is a point of controversy whether the lack of interest was real or whether the survey fell prey to question-framing problems and political issues with investors. When asked the question "Do you want a new OS?", few would say yes. The survey did, however, show there was sufficient support for the benefits TalOS would bring.
Technology The Pink operating system was now formally named Taligent Object Services (TOS or TalOS), whether hosted natively on its microkernel or non-natively on a third-party OS, but the nickname "Pink" would always remain industry lore, as with the developer phone number 408-TO-B-PINK. The entire graphics subsystem was 3D, including the 2D portions, which were actually 3D constructs. The system was based extensively on object-oriented frameworks from the kernel upward, including device drivers, the Taligent input/output (I/O) system, and ensembles. By 1993, IBM discussed decoupling most of TalOS from its native Opus microkernel and retargeting it onto the IBM Microkernel, which was already used as the base for IBM's tandem project, Workplace OS. Its Unicode-based text handling and localization were intended to begin enabling the globalization of software development, especially by simplifying support for Japanese. In January 1993, Taligent's VP of Marketing said the strong progress of native TalOS development could encourage its early incremental release prior to the full 1995 schedule for TalAE. Apple's business manager to Taligent, Chris Espinosa, acknowledged the irony of Apple and IBM building competing Taligent-based platforms, which had originated at Apple as Pink. He forecast Apple's adoption of Taligent components into the irreplaceably personal Mac OS, while empowering its competitiveness with IBM's future Taligent-based general-purpose systems and easing corporate users' migration toward Apple's Enterprise Systems Division's future Taligent-based computers. On January 10, 1993, The Wall Street Journal reported on the state of Taligent, saying the company and its platform had the broad optimistic support of Borland, WordPerfect, and Novell. Borland CEO Philippe Kahn said "Technically, [Pink] is brilliant, and Taligent is running much faster than I expected." A software venture capitalist expected new entrepreneurs to appreciate the platform's newness and lack of legacy baggage, and the industry expected Apple loyalists to embrace a new culture. Regardless of genuine merit, many in the industry reportedly expected Taligent's success to depend upon wounding Microsoft's monopoly. On January 18, InfoWorld reported, "Taligent draws rave reviews from software developers". By April 1993, Taligent, Inc. had grown to about 260 employees, mostly from Apple or "some other loose Silicon Valley culture". MacWEEK reported that the company remained on schedule or ahead through 1993 into 1994. On June 23, 1993, Apple preannounced MacApp's direct successor, the new object-oriented cross-platform SDK codenamed Bedrock. Positioned as "the most direct path for migration" from System 7 to Pink, it was intended to provide source code compatibility between System 7, Windows 3.1, Windows NT, OS/2, and Pink. Bedrock would be abruptly discontinued 18 months later with no successor, leaving Apple with no connection between System 7 and Pink. By 1994, the platform consisted of Taligent Object Services (TOS or TalOS), the Taligent Application Environment (TAE or TalAE), and the Taligent Development System (TDS or TalDS). The initial plan was to deploy TalAE in early 1994 to help seed the market with a base of applications for TalOS, which was intended to be launched in 1995, with the whole platform going mainstream in two to five years—surely expecting a modern OS from Apple by 1994 or 1995.
Influenced by the results of the survey effort, CEO Joe Guglielmi acknowledged the unavoidable risk of creating its own second-system effect: the TalAE enhancements could turn third-party operating systems into competitors of native TalOS. The first internal development environment was an IBM RS/6000 model 250 with a PowerPC 601 CPU running AIX, building TalOS natively for the 68k Macintosh. HP, CommonPoint beta In January 1994, fellow object technology pioneer Hewlett-Packard joined Apple and IBM as the third co-owner of Taligent, with a 15% holding. HP had deeply vested experience in object technology since the 1980s, with the NewWave desktop environment, the Softbench IDE, Distributed Smalltalk, the Distributed Object Management Facility (DOMF), and having cofounded the Object Management Group. Taligent's object-oriented portfolio was broadened with HP's compilers, DOMF, and intention to integrate TalOS and TalAE into HP-UX. HP had already partnered with Taligent's well-established competitor NeXT to integrate OpenStep into HP-UX, and Taligent had pursued partnerships with both Sun and HP for several months, all serving to improve HP's competitive bargaining in its offer to Taligent. A Taligent engineer reportedly said, "It wasn't that HP was driven by OpenStep to go to Taligent, but that OpenStep allowed them to make a much better deal." NeXTWORLD summarized that "[HP covered] all bets in the race for the object market", and Sun CEO Scott McNealy derided the partnership as HP being Taligent's "trophy spouse". Dr. Dobb's Journal quipped: "Now you could be [a former] Apple programmer working for [a former] IBM boss who reported [externally] to HP. Or some combination thereof. Twisteder and twisteder." By March 1994, Taligent had reportedly begun shipping code to its three investors, and some parts of TalAE had shipped to developers, though by policy without source code. The first public Taligent technology demonstration was at SFA in Atlanta, as an "amazingly fast" and crash-tolerant five-threaded 3D graphics application on native TalOS on a Macintosh IIci. Also in March 1994, at the PC Forum conference, Taligent gave the first public demonstration of TalAE applications, to an impressed but hesitant reception. A show of hands indicated that one out of approximately 500 attendees was actively developing on TalAE, but Taligent reported 60 members in the upcoming second wave of its developer program. The frameworks already present allowed the integration of advanced TalAE features into pre-existing platform-native applications. CEO Joe Guglielmi reported on TalAE gaining the ongoing outside interest of IBM, but suffering relative uninvolvement from Apple, possibly due to Apple's failure to deliver a mainstream OS capable of running it. On April 18, 1994, InfoWorld reported on Taligent's plans to distribute its SDK. In November 1994 at Comdex, the public debut of third-party TalAE applications took place on an RS/6000 running AIX, demonstrating prototypes made by seven vendors. In late 1994, TalAE was renamed to CommonPoint, TalDE was renamed to cpProfessional, and the Taligent User Interface Builder was renamed to cpConstructor. CommonPoint was being beta tested at 100 sites, with an initial target market of internal corporate developers. TalOS was still scheduled to ship in 1996. Apple considered MacApp's lifespan to have "run its course" as the primary Macintosh SDK, while Taligent considered MacApp to be prerequisite experience for its own platform.
Meanwhile, Apple and CILabs had begun an internal mandate for all new development to be based on the complementary and already published OpenDoc. CILabs was committed to publishing its source code, while Taligent was committed against publishing its own. Taligent was now considered a formidable competitor in the desktop operating system and enterprise object markets, even without any product release and despite being late. John C. Dvorak described Taligent as a threat in the desktop market of integrated application suites, particularly to the "spooked" Microsoft, which responded with many vaporware product announcements (such as Chicago, Cairo, Daytona, and Snowball) to distract the market's attention from Taligent. ComputerWorld described the enterprise computing market as shifting away from monolithic and procedural application models and even application suites, toward object-oriented component-based application frameworks — all in Taligent's favor. Its theoretical newness was often compared to NeXT's older but mature and commercially established platform. Sun Microsystems held exploratory meetings with Taligent before deciding upon building out its object application framework OpenStep in partnership with NeXT, as a "preemptive move against Taligent and [Microsoft's] Cairo". Having given up on seeing Pink go to market soon, Apple publicly announced Copland in March 1994, intended to compete with the upcoming Windows 95. Apple was, and would remain, the only vendor of a desired target OS physically incapable of receiving Taligent's heavy payload, due to System 7's critical lack of modern features such as preemptive multitasking. However, Taligent reportedly remained so committed to boosting the industry's confidence in Apple's modernization that it considered creating a way to hybridize TalOS applications for the nascent System 7, and Apple reportedly intended for the upcoming Power Macintosh to boot native TalOS as a next-generation alternative to System 7. The second-system effect was uniquely intensified because Apple was beginning to view the architecturally superior TalOS as a competitor to the protractedly weak System 7, which had no successor in sight. InfoWorld reported: "Developers and analysts also said that Taligent's fate is closely tied to that of OS/2 and the other as-yet-undelivered operating systems that it is designed to run on top of." These included Apple's future OS, Windows NT, and the yet-unreleased Windows 95. A detailed 1994 report by INPUT assessed that Taligent's "very risky" future would depend not on its technology, but on support from IBM and major developers, the rapid and cheap development of applications and complex integration tasks, and the ability to create new markets. In June 1994, Taligent shipped its first deliverable, considered to be somewhat late for its three investors and approximately 100 developer companies. It was a prebeta developer preview called the Partners Early Experience Kit (PEEK), consisting of 80 frameworks for AIX and OS/2. It received mixed reviews, with InfoWorld saying it was "inhibited by a massive footprint, a shortage of development tools, and a mind-boggling complexity". TalDE was scheduled to ship in Q2 1995. At this point, Apple was reportedly "hedging its bets" in formulating a strategy to deliver the second-system TalAE, while remaining primarily devoted to System 7.
The company intended to soon introduce the PowerOpen platform of PowerPC-based AIX, which would deliver TalAE for running a hoped-for new class of applications, simultaneously alongside Macintosh Application Services for running legacy System 7 personal applications. In May 1995, Taligent canceled the delayed release of its natively hosted TalOS, to focus on its TalAE application framework programming environment that would run on any modern operating system. Having been developed mainly upon AIX, the plan was to port TalAE to HP-UX, OS/2, Windows NT, and Copland. Those vendors were intended to port and bundle TalAE directly with their operating systems, and Taligent would do the ports for those who didn't. CommonPoint Taligent said that it wanted CommonPoint to be the definitive software industry standard, like a local app store in every computer. It promised "shake 'n bake" application development in four steps. Each app would have a minimal package delivery size because customers would already have most of the code in the form of the shared CommonPoint frameworks. The CommonPoint frameworks were divided into three categories: Application, Domain, and Support. On July 28, 1995, Taligent shipped its first final product, CommonPoint 1.1, after seven years in development as Pink and then TalAE. First released only for its reference platform of AIX, it was sold in two tiers: the runtime framework alone for users, or the runtime framework plus the software development kit, which further required the Cset++ compiler because TalDE was still scheduled for a later release. The runtime carried a memory overhead on each machine, and a minimum amount of total system RAM was recommended. Though essentially on schedule by the company's own PEEK projections of the previous year, some analysts considered it to be "too little, too late", especially compared to the maturely established NeXT platform. Several PEEK beta test sites and final release customers were very pleased with the platform, though disappointed in the marked lack of cross-platform presence on HP-UX, Mac OS, and Windows NT, which strictly limited any adoption of CommonPoint even among enthusiasts. Hewlett-Packard wrote a beginner's guide for CommonPoint programmers to address its steep learning curve, saying that its survey showed that experienced C++ framework programmers needed at least three months to even approach their first application. At its launch, InfoWorld told CEO Joe Guglielmi that "corporate users don't generally understand what CommonPoint is for" and had trouble differentiating CommonPoint and OpenDoc. IBM reportedly conducted a "full-court press" to analyze and promote customers' awareness of CommonPoint, by training its direct sales and consulting staff, attending industry conferences to make CommonPoint presentations, and "talking with any third-party software vendor and systems integrator who will listen". The CommonPoint beta for OS/2 was released on December 15, 1995. This was coincidentally the same day as the gold master of the Workplace OS final beta, IBM's complementary cousin operating system to TalOS. The final beta of Workplace OS was released on January 5, 1996 in the form of OS/2 Warp Connect (PowerPC Edition) and then immediately discontinued, without ever receiving a release of CommonPoint. Meanwhile, at Apple, the one-year-old Copland reached a primitive and notoriously unstable developer preview release, and Apple's frustrated operating system strategy still had not shipped anything physically capable of running any Taligent software.
New leadership By 1995, it was estimated that the three investors had spent more than $100 million on Taligent, Inc., with sources of the Los Angeles Times predicting its closure due to the decline of its parent companies and the inherent difficulty of anyone in the IT industry remaining committed beyond 18 months. In September 1995, CEO Joe Guglielmi unexpectedly exited Taligent to become VP of Motorola, intensifying the industry's concerns. Dick Gurino, a general manager of a PC and software development division at IBM, was named the interim CEO and tasked with searching for a permanent CEO. In October 1995, Gurino died of a heart attack while jogging, leaving the company without a CEO. On December 19, 1995, founding Taligent employee and Apple veteran Debbie Coutant was promoted to CEO. On the same day that it received what would be its final CEO, Taligent Inc. also ended its partnership form. Apple and HP sold their holdings in the company, making Taligent Inc. a wholly owned subsidiary of IBM alone. While dissolving the partnership, each of the three former partners expressed approval of Taligent's progress. In what they called overall enterprise-wide cost-cutting processes, Apple and HP wanted to simply maintain technology licenses, IBM wanted to use its own redundant marketing and support departments, and Taligent wanted to focus only on technology. In the process, nearly 200 of the 375 employees were laid off, leaving only engineering staff. Apple veteran and Taligent cofounding employee Mike Potel was promoted from VP of Technology to CTO, saying, "We're better protected inside the IBM world than we would be trying to duke it out as an independent company that has to pay its bills every day." In November 1996, the final public demonstration of the complete native TalOS was given, titled "The Cutting Edge Scenario". Though still referring to the original codename of "Pink", Taligent had already officially abandoned the never-published native TalOS in favor of CommonPoint. Unbundling In 1997, Taligent's mission as an IBM subsidiary was to unbundle the technology of CommonPoint, and to redistribute it across IBM's existing products or license it to other companies — all with a special overall focus on Java. On September 1, 1997, Dr. Dobb's Journal observed, "I guess it's easier to develop hot technology when the guys before you have already written most of it. Like inheriting from a rich uncle. And having another rich uncle to sell it for you doesn't hurt, either." The wider mass-market debut of CommonPoint technology came in the form of VisualAge C++ 3.5 for Windows, with the bundling of the Compound Document Framework to handle OLE objects. In February 1997, the first comprehensive shipment of CommonPoint technology was its adoption into IBM's well-established VisualAge for C++ 4.0, which PC Magazine said was "unmatched" in "sheer breadth of features" because "Now, the best of the CommonPoint technology is being channeled into Open Class for VisualAge." This bundled SDK adaptation included several CommonPoint frameworks: desktop (infrastructure for building unified OCX or OpenDoc components); web (called WebRunner, for making drag-and-drop compound documents for the web, and server CGIs); graphics (for building 2D GUI apps); international text (for Unicode and localization); filesystems; printing; and unit tests. Through 1997, Taligent was at the core of IBM's companywide shift to a Java-based middleware strategy.
Taligent provided all Unicode internationalization support for Sun's 1997 release of Java Development Kit 1.1 through 1.1.4. Taligent was still leasing the same building from Apple, and JavaSoft was located across the street. But its parent IBM, and the related Lotus, were located on the East Coast and were not fully aware of Taligent's plans and deliverables. WebRunner was a set of Java- and JavaBeans-based development tools priced at $149. In June 1997, Places for Project Teams was launched at $49 per user as a groupware GUI which hid the ugly interface of Lotus Notes. Taligent had several products, licenses, trademarks, and patents. In August 1996, Apple canceled the unstable and unfinished Copland project, which had already been presumptively renamed "Mac OS 8", again leaving only a System 7 legacy. Apple's own book Mac OS 8 Revealed (1996) had been the definitive final roadmap for Copland, naming the platform's competitors and allies, and yet its 336 pages contain no mention of Pink or Taligent. In late 1996, Apple was ever more desperately scrambling to find any operating system strategy whatsoever beyond System 7, even after having already planned to announce that strategy in December 1997. The company had failed to deliver even a functional developer preview of Copland in two years; it had discarded the successful A/UX and PowerOpen platforms in 1995, and the new AIX-based Apple Network Server of 1996–1997. To build the future Mac OS, the company seriously explored licensing other third-party OSes such as Solaris, Windows NT, and TalOS. Dissolution On September 16, 1997, IBM announced that Taligent Inc. would be dissolved by the end of the year, with its approximately 100 software engineers being "offered positions at IBM's Santa Teresa Laboratory to work on key components for IBM's VisualAge for Java programming tools, and at the recently announced Java porting center that IBM is setting up with Sun Microsystems and Netscape". IBM withdrew CommonPoint for OS/2 from the market on August 3, 1999. Reception By 1993, one year after incorporation and two years before shipping its first product, Taligent was nonetheless seen as a significant competitor in the industry. UnixWorld said that "NeXT needs to increase its volume three-fold [over its existing 50,000 installations] in order to build enough momentum to forestall Microsoft and Taligent in the object-oriented software business." In 1994, several PEEK beta test sites were impressed with CommonPoint, including one production success story at American Express, which replaced its existing six-month legacy application in only six weeks. At first, in 1994, they said, "We are almost overwhelmed by the complexity of [CommonPoint]. I don't know if the typical corporate developer is going to be able to assimilate this in their shop." But in 1995 they concluded the project with, "The CommonPoint frameworks — and I'm not exaggerating — are brilliant in the way they cover the technical issues [of that project]." Others were disappointed in the marked lack of cross-platform presence on HP-UX, Mac OS, and Windows NT, which strictly limited any adoption of CommonPoint even among enthusiasts. In March 1995, IEEE Software magazine said "Taligent's very nature could change the contour of the application landscape. ... [I]t's clear that Taligent is sitting on, using, and refining what is ostensibly the world's best developed, comprehensive, object-oriented development and system environment."
The system was described as virtually "a whole OS of nothing but hooks", which rests upon, integrates deeply with, and "replaces the host's original operating system", leaving "no lowest common denominator". Therefore, any Taligent native application was expected to run just the same on any supported host OS. Any degree of clean portability, especially with native integration, was described as a holy grail of the software industry to which many aspire and few deliver; critics cited the fact that Microsoft Word 6.0 for Macintosh still worked like a foreign Windows application because the foundation was redundantly ported with each application. In February 1997, at the first comprehensive mass release of Taligent technology in the form of VisualAge C++ 4.0, PC Magazine said "Now, the best of the CommonPoint technology is being channeled into Open Class for VisualAge. ... Although the technology was lauded by many, the size and complexity of the CommonPoint frameworks proved too daunting for practical purposes. ... For sheer breadth of features, the Taligent frameworks are unmatched. An all-encompassing OOP framework has always proved a difficult ideal to realize, but VisualAge's Open Class Technology Preview is by far the most credible attempt we've seen." In 2008, PCWorld named the native Taligent OS as number 4 of the 15 top vaporware products of all time. Due to the second-system effect and corporate immune response, Wired writer Fred Davis compared Taligent's relationship with Apple and IBM to a classic Greek tragedy: "A child is born, destined to kill its father and commit even more unspeakable acts against its mother. The parents love their child and are unwilling to kill it, so they imprison it in a secret dungeon. Despite its mistreatment, the child grows stronger, even more intent on committing its destined crimes." In 1995, IT journalist Don Tennant asked Bill Gates to reflect upon "what trend or development over the past 20 years had really caught him by surprise". Gates responded with what Tennant described as biting, deadpan sarcasm: "Kaleida and Taligent had less impact than we expected." Tennant believed the explanation to be that "Microsoft's worst nightmare is a conjoined Apple and IBM. No other single change in the dynamics of the IT industry could possibly do as much to emasculate Windows." Legacy The founding lead engineer of Pink, Erich Ringewald, departed Apple in 1990 to become the lead software architect at Be Inc. and design the new BeOS. Mark Davis had previously cofounded the Unicode Consortium and, at Apple, had co-written WorldScript and the Macintosh Script Manager and headed the localization of the Macintosh to Arabic, Hebrew, and Japanese (KanjiTalk). He was Taligent's Director of Core Technologies and the architect of all its internationalization technology, then became IBM's Chief Software Globalization Architect, moved to Google to work on internationalization and Unicode, and now helps to choose the emojis for the world's smartphones. Ike Nassi had been VP of Development Tools at Apple, launched MkLinux, served on the boards of Taligent and the OpenDoc Foundation, and worked on the Linksys iPhone. IBM harvested parts of CommonPoint to create the Open Class libraries for VisualAge for C++, and spawned an open-source project called International Components for Unicode from part of this effort.
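To make that internationalization legacy concrete, here is a small, hedged illustration (plain Java, runnable on any modern JDK) of the locale-sensitive classes that the account here credits to Taligent's work on the JDK 1.1 java.text package; the APIs shown are real, though their attribution follows this article rather than IBM documentation.

import java.text.Collator;
import java.text.DateFormat;
import java.text.NumberFormat;
import java.util.Date;
import java.util.Locale;

// Locale-sensitive formatting and collation via the java.text package,
// the JDK 1.1-era internationalization layer described above.
public class I18nDemo {
    public static void main(String[] args) {
        Locale de = Locale.GERMANY;
        // Dates and currency render per-locale rather than being hard-coded
        System.out.println(DateFormat.getDateInstance(DateFormat.LONG, de).format(new Date()));
        System.out.println(NumberFormat.getCurrencyInstance(de).format(1234.56));
        // Collation: locale-aware string ordering, unlike raw String.compareTo
        Collator collator = Collator.getInstance(de);
        System.out.println(collator.compare("Äpfel", "Apfel") < 0 ? "Äpfel first" : "Apfel first");
    }
}

Collator is shown because locale-aware ordering is the classic example of behavior that cannot be hard-coded per language; enhanced descendants of these same classes are what later shipped in ICU4J and ICU4C.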
Resulting from Taligent's work led by Mark Davis, IBM published all of the internationalization libraries that are in Java Development Kit 1.1 through 1.1.4, along with source code that was ported to C++ and partially to C. Enhanced versions of some of these classes went into ICU for Java (ICU4J) and ICU for C (ICU4C). The JDK 1.1 received Taligent's JavaBeans Migration Assistant for ActiveX, to convert ActiveX into JavaBeans. Davis's group became the Unicode group at the IBM Globalization Center of Competency in Cupertino. Taligent created a set of Java- and JavaBeans-based development tools called WebRunner and a groupware product based on Lotus Notes called Places for Project Teams. Taligent licensed various technologies to Sun which are today part of Java, and to Oracle Corporation and Netscape. HP released the Taligent C++ compiler technology (known within Taligent as "CompTech") as its "ANSI C++" compiler, aCC. HP also released some Taligent graphics libraries. In the 2010s, some of Apple's personnel and design concepts from Pink and Purple (the first iPhone's codename) would resurface and blend into Google's Fuchsia operating system. Based on an object-oriented kernel and application frameworks, its open-source code repository was launched in 2016 with the phrase "Pink + Purple == Fuchsia". Publications The following were written by Taligent personnel about its system and about software engineering in general. Whitepapers Books The Taligent Reference Library series: Manuals Patents Notes References Defunct software companies of the United States Companies established in 1992 Companies disestablished in 1998 Apple Inc. operating systems IBM operating systems Former IBM subsidiaries Microkernel-based operating systems Object-oriented operating systems PowerPC operating systems Microkernels Apple Inc. partnerships HP software
Operating System (OS)
910
DECsystem DECsystem was a line of server computers from Digital Equipment Corporation. They were based on MIPS architecture processors and ran DEC's version of the UNIX operating system, called ULTRIX. They ranged in size from workstation-style desktop enclosures to large pedestal cabinets. The DECSYSTEM name was also used for later models of the PDP-10, namely the DECSYSTEM-10 and DECSYSTEM-20 series. Models DECsystem 3100 Identical to the DECstation 3100, but intended to be used as a multiuser system. It was announced in early May 1989 at the UniForum exhibition in San Francisco. It was shipped in June 1989. Code name PMAX. DECsystem 5000 Series Rebranded Personal DECstation 5000 Series without any graphics. Code name MAXINE. DECsystem 5000 Model 100 Series Rebranded DECstation 5000 Model 100 Series without any graphics. Code name 3MIN. DECsystem 5000 Model 200 Series Rebranded DECstation 5000 Model 200 Series without any graphics. Code name 3MAX (3MAX+ for the 5000/260). DECsystem 5100 A desktop uniprocessor entry-level server. It replaced the DECsystem 3100. Code name MIPSMATE. DECsystem 5400 A pedestal uniprocessor system based on the Q-bus. It shared many hardware options with the 3x00-series MAYFAIR VAXes, including the TK70 tape drive, MS650-BA memory, and DSSI disk drives. SCSI was not available except with third-party add-in hardware. The unit shipped with a MicroVAX diagnostic processor, which would run ROM diagnostics similar to those of the MicroVAX series, as well as boot the tape-based MicroVAX diagnostics from a TK70 tape. Once the console transferred control of the computer to the MIPS processor, the MicroVAX sat essentially unused until the next boot. Code name MIPSFAIR. DECsystem 5500 A pedestal uniprocessor system based on the Q-bus. It replaced the DECsystem 5400. The 5500 came with native SCSI support, as well as support for DSSI disk drives. Code name MIPSFAIR-2. The DECsystem 5500 is shipped in a BA430 enclosure, which provides a 12-slot backplane and room for four mass storage devices. The base system contains a KN220-AA module set, consisting of a KN220 I/O module in slot 1 and a KN220 CPU module in slot 2, plus from one to four MS220-AA 32 MB memory modules installed in slots 3 through 7. The CPU module contains a 30 MHz R3000 CPU with an R3010 FPU and 512 kBytes of Prestoserve (NFS accelerator) battery-backed RAM. The I/O module contains a Q22-bus interface, an SGEC (Second Generation Ethernet Controller), a DSSI (Digital Storage System Interconnect) port, and an SII-based SCSI port. DECsystem 5800 Series The DECsystem 5800 Series are high-end multiprocessor systems. The series comprised the DECsystem 5810, 5820, 5830, and 5840, with the third digit referring to the number of processors. These systems can be considered the MIPS/RISC alternatives to the VAX 6000, using the XMI and BI buses. The 5810 and 5820, using 25 MHz R3000 microprocessors and R3010 floating-point coprocessors, were introduced on 11 July 1989. Code name ISIS. DECsystem 5900 and DECsystem 5900/260 The DECsystem 5900 and DECsystem 5900/260 are rack-mounted DECstation 5000 Model 240 and DECstation 5000 Model 260 workstations, respectively, positioned as mid-range servers by Digital. The DECsystem 5900 was introduced in early December 1991. Both models were discontinued on 28 January 1994. Their intended replacement was the DEC 3000 Model 800S AXP, packaged in a similar rack-mountable enclosure.
The DECstation system module is repackaged in a CPU drawer, which is mounted in a rack with a mounting kit that permits the drawer to be slid in and out. The CPU drawer also contains an integrated TURBOchannel extender, the power supply, and a blower which cools the system. However, as the system module that these systems use does not feature multiprocessing capabilities, the presence of two CPU drawers in a rack simply means that there are two separate systems. The mass storage drawers, in such a case, would be divided between the CPU drawers, with a minimum of one per CPU drawer. There are two models of mass storage drawers. One model may contain one to four 5.25-inch full-height non-removable devices, one 5.25-inch full-height removable or non-removable device, and two 5.25-inch half-height removable devices. The other model may contain one to five 5.25-inch full-height non-removable devices, one 5.25-inch removable device, and two 5.25-inch half-height removable devices. In both models, a 400 W power supply is located at the rear of the drawer. The H9A00 enclosure, a 19-inch rack, contains a minimum of one CPU drawer and one mass storage drawer. A power controller at the bottom of the enclosure distributes power to the CPU and mass storage drawers. The rack can contain a maximum of two CPU drawers and four mass storage drawers. The DECsystem 5900 has a width of 61 cm (24 in), a height of 170 cm (66.9 in), a depth of 86.4 cm (34 in) and a weight of 265 to 485 kg (480 to 1,070 lb), depending on the configuration. See also DECstation PDP-10 References External links Notes on System Models DEC computers
Operating System (OS)
911
Program counter The program counter (PC), commonly called the instruction pointer (IP) in Intel x86 and Itanium microprocessors, and sometimes called the instruction address register (IAR), the instruction counter, or just part of the instruction sequencer, is a processor register that indicates where a computer is in its program sequence. Usually, the PC is incremented after fetching an instruction, and holds the memory address of ("points to") the next instruction that would be executed. Processors usually fetch instructions sequentially from memory, but control transfer instructions change the sequence by placing a new value in the PC. These include branches (sometimes called jumps), subroutine calls, and returns. A transfer that is conditional on the truth of some assertion lets the computer follow a different sequence under different conditions. A branch provides that the next instruction is fetched from elsewhere in memory. A subroutine call not only branches but saves the preceding contents of the PC somewhere. A return retrieves the saved contents of the PC and places it back in the PC, resuming sequential execution with the instruction following the subroutine call. Hardware implementation In a simple central processing unit (CPU), the PC is a digital counter (which is the origin of the term "program counter") that may be one of several hardware registers. The instruction cycle begins with a fetch, in which the CPU places the value of the PC on the address bus to send it to the memory. The memory responds by sending the contents of that memory location on the data bus. (This is the stored-program computer model, in which a single memory space contains both executable instructions and ordinary data.) Following the fetch, the CPU proceeds to execution, taking some action based on the memory contents that it obtained. At some point in this cycle, the PC will be modified so that the next instruction executed is a different one (typically, incremented so that the next instruction is the one starting at the memory address immediately following the last memory location of the current instruction). Like other processor registers, the PC may be a bank of binary latches, each one representing one bit of the value of the PC. The number of bits (the width of the PC) relates to the processor architecture. For instance, a “32-bit” CPU may use 32 bits to be able to address 2^32 units of memory. On some processors, the width of the program counter instead depends on the addressable memory; for example, some AVR controllers have a PC which wraps around after 12 bits. If the PC is a binary counter, it may increment when a pulse is applied to its COUNT UP input, or the CPU may compute some other value and load it into the PC by a pulse to its LOAD input. To identify the current instruction, the PC may be combined with other registers that identify a segment or page. This approach permits a PC with fewer bits by assuming that most memory units of interest are within the current vicinity. Consequences in machine architecture Use of a PC that normally increments assumes that what a computer does is execute a usually linear sequence of instructions. Such a PC is central to the von Neumann architecture. Thus programmers write a sequential control flow even for algorithms that do not have to be sequential.
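As a concrete illustration of the fetch-increment-execute cycle described above, the following toy simulator (Java, with an invented three-instruction set; nothing here models any real CPU) shows the PC selecting the next instruction, being incremented by default after each fetch, and being overwritten by a branch.

// Illustrative only: a toy fetch-execute loop showing how the PC is
// read, incremented after each fetch, and overwritten by a branch.
// The three-instruction ISA here is invented for the example.
public class TinyCpu {
    static final int NOP = 0, JMP = 1, HALT = 2;

    public static void run(int[][] memory) {
        int pc = 0;                              // the program counter
        while (true) {
            int[] insn = memory[pc];             // fetch: PC drives the "address bus"
            pc = pc + 1;                         // default: point at the next instruction
            switch (insn[0]) {                   // execute
                case JMP:  pc = insn[1]; break;  // branch: load a new value into the PC
                case HALT: return;
                case NOP:  break;
            }
            System.out.println("next pc = " + pc);
        }
    }

    public static void main(String[] args) {
        // NOP; JMP to 3 (skipping address 2); NOP; HALT
        run(new int[][] { {NOP}, {JMP, 3}, {NOP}, {HALT} });
    }
}

Running the four-instruction program in main shows the PC skipping the instruction at address 2, exactly the branch behavior described above; it is this relentless one-at-a-time sequencing that the following paragraph's "von Neumann bottleneck" refers to.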
The resulting “von Neumann bottleneck” led to research into parallel computing, including non-von Neumann or dataflow models that did not use a PC; for example, rather than specifying sequential steps, the high-level programmer might specify the desired function and the low-level programmer might specify this using combinatory logic. This research also led to ways to make conventional, PC-based CPUs run faster, including: Pipelining, in which different hardware in the CPU executes different phases of multiple instructions simultaneously. The very long instruction word (VLIW) architecture, where a single instruction can achieve multiple effects. Techniques to predict out-of-order execution and prepare subsequent instructions for execution outside the regular sequence. Consequences in high-level programming Modern high-level programming languages still follow the sequential-execution model and, indeed, a common way of identifying programming errors is with a “procedure execution” in which the programmer's finger identifies the point of execution as a PC would. The high-level language is essentially the machine language of a virtual machine, too complex to be built as hardware but instead emulated or interpreted by software. However, new programming models transcend sequential-execution programming: When writing a multi-threaded program, the programmer may write each thread as a sequence of instructions without specifying the timing of any instruction relative to instructions in other threads. In event-driven programming, the programmer may write sequences of instructions to respond to events without specifying an overall sequence for the program. In dataflow programming, the programmer may write each section of a computing pipeline without specifying the timing relative to other sections. Symbol Vendors use different characters to symbolize the program counter in assembly language programs. While the usage of a "$" character is prevalent in Intel, Zilog, Texas Instruments, Toshiba, NEC, Siemens and AMD processor documentation, Motorola, Rockwell Semiconductor, Microchip Technology and Hitachi instead use a "*" character, whereas SGS-Thomson Microelectronics uses "PC". See also Branch prediction Instruction cache Instruction cycle Instruction unit Instruction pipeline Instruction register Instruction scheduling Program status word Notes References Control flow Central processing unit Digital registers
Operating System (OS)
912
Amiga Hombre chipset Hombre is a RISC chipset for the Amiga, designed by Commodore, which was intended as the basis of a range of Amiga personal computers and multimedia products, including a successor to the Amiga 1200, a next-generation game machine called CD64, and a 3D accelerator PCI card. Hombre was canceled along with the bankruptcy of Commodore International. History In 1993, Commodore International ceased the development of the AAA chipset when it concluded that conventional PC clones would have similar performance shortly after the AAA machines were released. In place of AAA, Commodore began to design a new 64-bit 3D graphics chipset based on Hewlett-Packard's PA-RISC architecture to serve as the new basis of the Amiga personal computer series. It was codenamed Hombre (pronounced "ómbre", meaning "man" in Spanish) and was developed in conjunction with Hewlett-Packard over an estimated eighteen-month period. Backward compatibility Hombre does not support any planar mode, nor any emulation of the legacy Amiga chipset or Motorola 680x0 CPU registers, so it was completely incompatible with former Amiga models. According to Hombre designer Dr. Ed Hepler, Commodore intended to produce an AGA Amiga upon a single chip to solve the backward compatibility issues. This single chip would include a Motorola MC680x0 core plus the AGA chipset. The chip could be integrated in Hombre-based computers for backward compatibility with AGA software. Design Hombre is based around two chips: Nathaniel, a System Controller chip, and Natalie, a Display Controller chip. The System Controller chip was designed by Dr. Ed Hepler, well known as the designer of the AAA Andrea chip. The chip is similar in principle to the chip bus controller found in Agnus, Alice, and Andrea of the earlier Amiga chipsets. Nathaniel features the following: An in-house designed 100+ MHz 64-bit integer PA-RISC microprocessor with SIMD and additional graphics-processing instructions An advanced DMA engine and blitter with fixed-point arithmetic 3D texture mapping and Gouraud shading using trapezoids as primitives 64-bit RISC-like Copper co-processor 16-bit resolution sound processor with twelve voices Additional logic has been included to permit some floating-point operations to be performed in hardware, and a floating-point register file is included. The inclusion of a full double-precision floating-point unit was also under consideration. The Display Controller chip was designed by Tim McDonald, also known as the designer of the AAA Monica chip. It is similar in principle to the Denise, Lisa, and Monica chips found on original Amigas. In addition, the chipset also supported future official or third-party upgrades through an extension for an external PA-RISC processor. Natalie features the following: VGA monitor control Built-in genlock and framegrabber Logic for two analog game-port joysticks These chips and some other circuitry would be part of a PCI card, through the ReTargetable Graphics system. Additional I/O for peripherals such as a floppy drive, keyboard and mouse would have been provided by a separate dedicated peripheral ASIC. There were plans to port the AmigaOS Exec kernel to low-end systems, but this was not possible due to financial troubles facing Commodore at that time. Therefore, a licensed OpenGL library was to be used for the low-end entertainment system.
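As an aside on the rendering features listed above: the Gouraud shading that Nathaniel accelerated in hardware amounts to linearly interpolating vertex intensities across each primitive. The following minimal C sketch of the per-scanline operation is purely illustrative of the principle, not of Commodore's actual hardware design:

    #include <stdio.h>

    /* Linearly interpolate an 8-bit intensity from c0 at x0 to c1 at x1 and
       emit one value per pixel - the core per-span operation of Gouraud shading. */
    static void shade_span(int x0, int x1, int c0, int c1) {
        for (int x = x0; x <= x1; x++) {
            /* avoid division by zero on a one-pixel span */
            int c = (x1 == x0) ? c0 : c0 + (c1 - c0) * (x - x0) / (x1 - x0);
            printf("pixel %d: intensity %d\n", x, c);
        }
    }

    int main(void) {
        shade_span(0, 7, 32, 224);   /* ramp from dark to bright over 8 pixels */
        return 0;
    }

In hardware, the same interpolation is done with fixed-point increments per pixel, which is why the chipset's blitter is described as using fixed-point arithmetic.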
The original plan for the Hombre-based computer system was to have Windows NT compatibility, with native AmigaOS recompiled for the new big-endian CPU to run legacy 68k Amiga software through emulation. Commodore chose the PA-RISC instruction set over the MIPS architecture and first-generation embedded PowerPC microprocessors, mainly because those low-cost microprocessors were not capable of running Windows NT. This was not the case for the 64-bit MIPS R4200, but it was rejected for its high price at the time. Features Hombre was designed as a clean break from the traditional Amiga chipset architecture, with no planar graphics mode support. Hombre also does not feature the original eight Amiga sprites; early iterations of Hombre featured a new, incompatible sprite engine, but Commodore decided to drop sprites because they had become less attractive to developers compared with fast blitters. Despite the lack of compatibility, Hombre introduced modern technologies including these: Fill rate of 30 million 3D-rendered pixels per second (similar to Sony's PlayStation performance) Special Function Unit (SFU) SIMD extension for rasterizing multiple pixels with a single 64-bit operation 16-bit chunky graphic modes (to reduce costs, Commodore abandoned 256-color mode with color LUT registers) 32-bit chunky with 8-bit alpha channel 1280 × 1024 pixel progressive resolution with a 24-bit color palette One sprite with a 24-bit color palette, used for the mouse pointer Four scalable playfields, each with their own graphics mode (e.g. 16bpp, HAM-8) 512 25-bit color look-up tables (24-bit color + 1 bit for genlock) 3D texture mapping engine Gouraud shading Z-buffering YUV compatibility with JPEG support Standard TV and HDTV compatibility 64-bit internal data bus and registers The chipset could be sold either as a high-end PCI graphics card with minimal peripheral ASICs and 64-bit DRAM, or as a lower-cost CD-ROM based game system (CD64) using cheap 32-bit DRAM. It could also be used for set-top box embedded systems. According to Dr. Ed Hepler, Hombre was to be fabricated in 3-level metal CMOS with the help of Hewlett-Packard. HP had fabricated the AGA Lisa chip and collaborated in the design of the AAA chipset. Commodore was planning to adopt the Acutiator architecture designed by Dave Haynie for Hombre before the company filed for bankruptcy and went out of business. See also PA-RISC family processors References External links Amiga history 1993-1994 Hombre hardware design documents 1998 Dr. Edward L. Hepler interview about Hombre Hombre History - RISC Selection By Dr. Edward L. Hepler The Dave Haynie Archive with much detailed info and specs Chris Ludwig Interview Article about Hombre CBM's Plans for the RISC-Chipset, by Dave Haynie OpenPA Hitachi PA/50L article - 1993 Hombre CPU candidate Amiga chipsets Graphics processing units Sound chips
Operating System (OS)
913
MuLinux muLinux was an Italian, English-language lightweight Linux distribution maintained by mathematics and physics professor Michele Andreoli, meant to allow very old and obsolete computers (80386, 80486 and Pentium Pro hardware dating from 1986 through 1998) to be used as basic intranet/Internet servers or text-based workstations with a UNIX-like operating system. It was also designed for quickly turning any 80386 or later computer into a temporary, powerful Linux machine, along with system repair, education, forensic analysis and what the developer called proselytizing. In 2004 reviewer Paul Zimmer wrote, "Although there are several other single-floppy Linux distributions, none can match muLinux's extensive and unique combination of useful features." The last version update was in 2004, when further development of this "linux-on-a-floppy" distribution ended. Name The name muLinux comes from the Greek letter mu, which is the SI symbol meaning one millionth, harking back to the very small size of this OS. Minimalist design muLinux was based on the Linux 2.0.36 kernel. Development was frozen in 2004 at version 14r0, with some of the code and packages taken from software releases going back to 1998 (chosen only for their smaller sizes). An experimental, unstable version called Lepton had the 2.4 kernel. muLinux could be booted from floppy disks or installed from them to a hard drive on an obsolete machine. A highly functional UNIX-like, network-enabled server with a Unix shell could be had from just one floppy disk. A second floppy disk added workstation functionality, and a legacy X Window VGA GUI came on a third. One reviewer noted, "It's not gorgeous, but the whole X subsystem fits onto a single floppy. Egad." muLinux could also be unpacked and installed by a self-executable archive, or extracted directly, onto an old DOS or Windows 9x (umsdos) partition without harming the current OS. If the machine had a floppy disk drive, muLinux would also run on an otherwise diskless computer, and no CD-ROM drive was needed. Owing to its minimalist design, muLinux was a single-user OS, with all operations performed by the root user. It used the ext2 Linux native file system (rather than the slower Minix file system seen in other single-floppy takes on Linux). The OS was robust when used for text-based tasks along with basic file, light web page or email serving. It could also be adapted as a very tiny, stand-alone embedded system. muLinux was sometimes installed by Windows users who wanted to learn about the commands and configuration of a Unix-like operating system before taking the step of installing a full Linux distribution or BSD release, although on later computers this could easily be done with any one of many live CD distributions. Since the distribution was always wholly targeted at old hardware and meant to have a tiny footprint, Andreoli warned at the time that muLinux should not be used to evaluate Linux or open source software. The OS came with a lean and pithy online help system which also happened to be an introduction to UNIX, written in an English which the developer called "fractured." The OS had "cheery dialogues" and a friendly sense of humour sprinkled throughout. System requirements muLinux needed only minimal hardware, hence it would run on many thoroughly obsolete but still-working computers.
Some machines from the later 1980s or very early 1990s may have needed additional SIMMs for enough RAM, but overall the requirements were only slightly higher than those for Windows 3.1, so a still-working machine which ran Windows 3.1 when new in 1992 would likely be able to handle a hard drive installation of muLinux: 4 MB RAM if run from a hard drive 16 MB RAM if booted from floppies (booting from floppy is possible with only 8 MB) about 20 MB of hard drive space an Intel 80386 or later processor Packages muLinux came with many packages, each of which fit on one floppy. muLinux was somewhat unusual in that all of the packages were wholly optional. SRV - basic server package with a web server, mail, samba and more WKS - basic workstation package with mutt, lynx, ssh, pgp and many other Unix shell applications X11 - legacy X Window 16-colour VGA environment (see below for SVGA) along with early versions of both the fvwm95 and AfterStep window managers (based on the Windows 95 and NeXTSTEP GUIs respectively) VNC - for virtual network computing GCC - C compiler TCL - Tcl/Tk scripting language, which also brings a few more X applications and tools TEX - TeX typesetting system PERL - Perl interpreter with modules EMU - Wine and Dosemu emulators JVM - Kaffe Java virtual machine NS1 - SVGA X server along with part of a small but highly obsolete version of Netscape Navigator NS2 - second part of Netscape Navigator Packages by other authors were also made available. References External links muLinux official Web page Light-weight Linux distributions Floppy-based Linux distributions Linux distributions
Operating System (OS)
914
K computer The K computer, named for the Japanese word/numeral "kei" (京), meaning 10 quadrillion (10^16), was a supercomputer manufactured by Fujitsu, installed at the Riken Advanced Institute for Computational Science campus in Kobe, Hyōgo Prefecture, Japan. The K computer was based on a distributed memory architecture with over 80,000 compute nodes. It was used for a variety of applications, including climate research, disaster prevention and medical research. The K computer's operating system was based on the Linux kernel, with additional drivers designed to make use of the computer's hardware. In June 2011, TOP500 ranked K the world's fastest supercomputer, with a computation speed of over 8 petaflops, and in November 2011, K became the first computer to top 10 petaflops. It had originally been slated for completion in June 2012. In June 2012, K was superseded as the world's fastest supercomputer by the American IBM Sequoia. As of November 2018, the K computer held third place for the HPCG benchmark; it had held first place until June 2018, when it was superseded by Summit and Sierra. As of June 2019, K was the world's 20th-fastest computer, with IBM's Summit and Sierra being the fastest supercomputers. The K supercomputer was decommissioned on 30 August 2019. In Japan, the K computer was succeeded by the Fugaku supercomputer in 2020, which took the top spot and is three times faster than the second most powerful supercomputer. Performance On 20 June 2011, the TOP500 Project Committee announced that K had set a LINPACK record with a performance of 8.162 petaflops, making it the fastest supercomputer in the world at the time; it achieved this performance with a computing efficiency ratio of 93.0%. The previous record holder was the Chinese National University of Defense Technology's Tianhe-1A, which performed at 2.507 petaflops. The TOP500 list is revised semiannually, and the rankings change frequently, indicating the speed at which computing power is increasing. In November 2011, Riken reported that K had become the first supercomputer to exceed 10 petaflops, achieving a LINPACK performance of 10.51 quadrillion computations per second with a computing efficiency ratio of 93.2%. K received top ranking in all four performance benchmarks at the 2011 HPC Challenge Awards. On 18 June 2012, the TOP500 Project Committee announced that the California-based IBM Sequoia supercomputer had replaced K as the world's fastest supercomputer, with a LINPACK performance of 16.325 petaflops. Sequoia is 55% faster than K, using 123% more CPU processors, but is also 150% more energy efficient. On the TOP500 list, K placed first in June 2011 and fell over time to lower positions, reaching eighteenth in November 2018. The K computer held third place in the HPCG benchmark test proposed by Jack Dongarra, with 0.6027 HPCG petaflops in November 2018. Specifications Node architecture The K computer comprises 88,128 2.0 GHz eight-core SPARC64 VIIIfx processors contained in 864 cabinets, for a total of 705,024 cores, manufactured by Fujitsu with 45 nm CMOS technology. Each cabinet contains 96 computing nodes, in addition to six I/O nodes. Each computing node contains a single processor and 16 GB of memory. The computer's water cooling system is designed to minimize failure rate and power consumption. Network The nodes are connected by Fujitsu's proprietary torus fusion (Tofu) interconnect.
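A back-of-the-envelope cross-check of the quoted efficiency ratio (assuming the commonly cited figure of 8 double-precision FLOPs per core per cycle for the SPARC64 VIIIfx): peak performance is roughly 88,128 processors × 8 cores × 2.0 GHz × 8 FLOPs/cycle ≈ 11.28 petaflops, and 10.51 / 11.28 ≈ 0.932, which matches the 93.2% computing efficiency reported for the November 2011 LINPACK run.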
File system The system adopts a two-level local/global file system with parallel/distributed functions, and provides users with an automatic staging function for moving files between global and local file systems. Fujitsu developed an optimized parallel file system based on Lustre, called the Fujitsu Exabyte File System (FEFS), which is scalable to several hundred petabytes. Power consumption Although the K computer reported the highest total power consumption of any 2011 TOP500 supercomputer (9.89 MW, the equivalent of almost 10,000 suburban homes), it is relatively efficient, achieving 824.6 GFlop/kW. This is 29.8% more efficient than China's NUDT TH MPP (ranked #2 in 2011), and 225.8% more efficient than Oak Ridge's Jaguar-Cray XT5-HE (ranked #3 in 2011). However, K's power efficiency still falls far short of the 2097.2 GFlop/kW supercomputer record set by IBM's NNSA/SC Blue Gene/Q Prototype 2. For comparison, the average power consumption of a TOP 10 system in 2011 was 4.3 MW, and the average efficiency was 463.7 GFlop/kW. According to TOP500 compiler Jack Dongarra, professor of electrical engineering and computer science at the University of Tennessee, the K computer's performance equals "one million linked desktop computers". The computer's annual running costs are estimated at US$10 million. K Computer Mae rapid transit station On 1 July 2011, Kobe's Port Island Line rapid transit system renamed one of its stations from "Port Island Minami" to "K Computer Mae" (meaning "In front of K Computer"), denoting its proximity to the computer. See also PRIMEHPC FX10 Supercomputing in Japan Graph500 Notes References External links Riken Advanced Institute for Computational Science Riken Next-Generation Supercomputer R&D Center K computer: Fujitsu Global Fujitsu Scientific & Technical Journal, July 2012 (Vol. 48, No. 3): The K computer Special Interview: Taking on the Challenge of a 10-Petaflop Computer, Riken News, No. 298, April 2006. June 2017 Top 500 2011 in science Fujitsu supercomputers One-of-a-kind computers Petascale computers Riken SPARC microprocessor products Supercomputing in Japan 64-bit computers
Operating System (OS)
915
System Idle Process In Windows NT operating systems, the System Idle Process contains one or more kernel threads which run when no other runnable thread can be scheduled on a CPU. In a multiprocessor system, there is one idle thread associated with each CPU core. For a system with hyperthreading enabled, there is an idle thread for each logical processor. The primary purpose of the idle process and its threads is to eliminate what would otherwise be a special case in the scheduler. Without the idle threads, there could be cases when no threads were runnable (or "Ready" in terms of Windows scheduling states). Since the idle threads are always in a Ready state (if not already Running), this can never happen. Thus whenever the scheduler is called due to the current thread leaving its CPU, another thread can always be found to run on that CPU, even if it is only the CPU's idle thread. The CPU time attributed to the idle process is therefore indicative of the amount of CPU time that is not needed or wanted by any other threads in the system. The scheduler treats the idle threads as special cases in terms of thread scheduling priority. The idle threads are scheduled as if they each had a priority lower than can be set for any ordinary thread. Because of the idle process's function, its CPU time measurement (visible through, for example, Windows Task Manager) may make it appear to users that the idle process is monopolizing the CPU. However, the idle process does not use up computer resources (even when stated to be running at a high percentage). Its CPU time "usage" is a measure of how much CPU time is not being used by other threads. In Windows 2000 and later, the threads in the System Idle Process are also used to implement CPU power saving. The exact power saving scheme depends on the operating system version and on the hardware and firmware capabilities of the system in question. For instance, on x86 processors under Windows 2000, the idle thread will run a loop of halt instructions, which causes the CPU to turn off many internal components until an interrupt request arrives. Later versions of Windows implement more complex CPU power saving methods. On these systems the idle thread will call routines in the Hardware Abstraction Layer to reduce CPU clock speed or to implement other power-saving mechanisms. There are more detailed sources of such information available through Windows' performance monitoring system (accessible with the perfmon program), which includes more finely grained categorization of CPU usage. A limited subset of the CPU time categorization is also accessible through the Task Manager, which can display CPU usage by CPU, and categorized by time spent in user vs. kernel code. See also List of Microsoft Windows components Idle (CPU) Microsoft Windows HLT (x86 instruction) Process Explorer References Windows NT architecture
Operating System (OS)
916
Central Point Software Central Point Software, Inc. (CP, CPS, Central Point) was a leading software utilities maker for the PC market, supplying utilities software for the DOS and Microsoft Windows markets. It also made Apple II copy programs. Through a series of mergers, the company was ultimately acquired by Symantec in 1994. History CPS was founded by Michael Burmeister-Brown (Mike Brown) in 1980 in Central Point, Oregon, for which the company was named. Building on the success of its Copy II PC backup utility, it moved to Beaverton, Oregon. In 1993 CPS acquired the XTree Company. It was itself acquired by Symantec in 1994, for around $60 million. Products The company's most important early product was a series of utilities which allowed exact duplicates to be made of copy-protected diskettes. The first version, Copy II Plus v1.0 (for the Apple II), was released in June 1981. With the success of the IBM PC and compatibles, a version for that platform, Copy II PC (copy2pc), was released in 1983. CPS also offered a hardware add-in expansion card, the Copy II PC Deluxe Board, which was bundled with its own software. The Copy II PC Deluxe Board was able to read, write and copy disks from Apple II and Macintosh computer systems as well. Copy II PC's main competitor was Quaid Software's CopyWrite, which did not have a hardware component. CPS also released the Option Board hardware with TransCopy software for duplicating copy-protected floppy diskettes. In 1985 CPS released PC Tools, an integrated graphical DOS shell and utilities package. PC Tools was an instant success, became Central Point's flagship product, and positioned the company as the major competitor to Peter Norton Computing and its Norton Utilities and Norton Commander. CPS later manufactured a Macintosh version called Mac Tools. CPS licensed the Mirror, Undelete, and Unformat components of PC Tools to Microsoft for inclusion in MS-DOS versions 5.x and 6.x as external DOS utilities. The CPS File Manager was ahead of its time, with features such as viewing ZIP archives as directories and a file/picture viewer. In 1993 CPS released PC Tools for Windows 2.0, which ran on Windows 3.1. After the Symantec acquisition, the programmer group that created PCTW 2.0 created Norton Navigator for Windows 95, and Symantec unbundled the File Manager used in PCTW 2.0 and released it as PC-Tools File Manager 3.0 for Windows 3.1. The lateness of PCTW to the Windows market was a major factor in why CPS was acquired by Symantec. Windows Server at the time was not viewed as a credible alternative to Novell NetWare (the first version of Windows Server was released in 1993), and the desktop and server software products market was completely centered on Novell NetWare. Novell's subsequent stumble in maintaining dominance of the server market came years later and had nothing to do with the acquisition. Instead, like many software vendors, CPS underestimated how rapidly users would shift to Windows from DOS. CPS's other major desktop product was Central Point Anti-Virus (CPAV), whose main competitor was Norton Antivirus. CPAV was a licensed version of Carmel Software's Turbo Anti-Virus; CPS, in turn, licensed CPAV to Microsoft to create Microsoft Antivirus for DOS (MSAV) and Windows (MWAV). CPS also released CPAV for NetWare 3.xx and 4.x NetWare servers in 1993. Central Point also sold the Apple II clone Laser 128 by mail.
List of CPS products PC Tools PC Tools for Windows Central Point Anti-Virus Central Point Anti-Virus for NetWare Central Point Backup Central Point Desktop Central Point Commute Copy II+ Copy II 64 (for Commodore 64/128) Copy II PC Copy II Mac Copy II ST (for Atari ST/TT series computers) MacTools and MacTools Pro More PC Tools LANlord Deluxe Option Board See also List of mergers and acquisitions by Symantec References Defunct software companies of the United States Defunct companies based in Oregon Software companies established in 1980 Software companies disestablished in 1994 NortonLifeLock acquisitions Central Point, Oregon 1980 establishments in Oregon 1994 disestablishments in Oregon
Operating System (OS)
917
Bundling of Microsoft Windows Bundling of Microsoft Windows is the installation of Microsoft Windows in computers before their purchase. Microsoft encourages original equipment manufacturers (OEMs) of personal computers to include Windows licenses with their products, and agreements between Microsoft and OEMs have undergone antitrust scrutiny. Users opposed to the bundling of Microsoft Windows, including Linux users, have sought refunds for Windows licenses, arguing that the Windows end-user license agreement entitles them to return unused Windows licenses for a cash refund. Although some customers have successfully obtained payments (in some cases after litigation or lengthy negotiations), others have been less successful. The "Windows tax" Microsoft encourages original equipment manufacturers (OEMs) to supply computers with Windows pre-installed, saying that purchasers benefit by not having to install an operating system. Analyst Vishal Tripathi said that many people purchase PCs with pre-installed operating systems because they do not want to deal with the "learning curve" and inconvenience of installing an operating system. Virtually all large computer vendors bundle Microsoft Windows with the majority of the personal computers in their ranges. In 1999, Maximum PC wrote that non-Windows users "have long griped that machines from large companies can't be purchased without Windows". In 1999, analyst Rob Enderle attributed the lack of computers without Windows available for individual purchase to economic impracticality, citing certification and warranty requirements. In 1999, Dell stated that it only offered non-Microsoft operating systems on servers and as part of customized large orders, but if Linux became popular enough to make Linux pre-installation cost-effective, "we'd be foolish not to offer it". The Guardian's computer editor Jack Schofield said that there were significant overhead costs associated with pre-installation of Linux, in part due to Linux's small market share. Serdar Yegulalp of Computerworld said that in the late 1990s, because Linux was not fully developed, Linux computers were "a tough sell for non-technical users". Microsoft historically engaged in licensing practices that discouraged the installation of non-Microsoft operating systems. Microsoft once assessed license fees based on the number of computers an OEM sold, regardless of whether a Windows license was included. Beginning in 1983, Microsoft sold MS-DOS licenses to OEMs on an individually negotiated basis. The contracts required OEMs to purchase a number of MS-DOS licenses equal to or greater than the number of computers sold, with the result of zero marginal cost for OEMs to include MS-DOS. Installing an operating system other than MS-DOS would effectively require double payment of operating system royalties. Also, Microsoft penalized OEMs that installed alternative operating systems by making their license terms less favorable. Microsoft entered into a consent decree in 1994 that barred them from conditioning the availability of Windows licenses or varying their prices based on whether OEMs distributed other operating systems. Microsoft General Counsel Brad Smith said that the decree was effective in allowing Dell and HP to offer Linux computers, and Jeremy Reimer of Ars Technica stated that the decree made it "fiscally realistic to sell computers with alternative operating systems".
In 1999, a Microsoft representative stated that their contracts with OEMs did not "stop[] any OEM from shipping any operating system on their PCs". In 2010, Microsoft stated that its agreements with OEMs to distribute Windows are nonexclusive, and OEMs are free to distribute computers with a different operating system or without any operating system. In a 2001 article in Byte, it was reported that license agreements between OEMs and Microsoft forbade OEMs from including Windows alongside another operating system on the same computer. According to a 1999 New York Times article, "critics assert that the company continues to use its market clout to ensure that nearly all new personal computers come with Windows pre-installed." In 2009, Microsoft stated that it has always charged OEMs about $50 for a Windows license on a $1,000 computer. In 2007, Dell stated that its computers with Ubuntu installed would be priced about $50 lower than comparable systems with Windows installed. In a 2010 ZDNet article, Chris Clay wrote that Dell computers with Ubuntu preinstalled were priced higher than identical systems with Windows preinstalled, even though Ubuntu is distributed gratis. The claimed increase in the price of a computer resulting from the inclusion of a Windows license has been called the "Windows tax" or "Microsoft tax" by opposing computer users. Some computer purchasers request refunds for Windows licenses included with their purchased computers because they do not want to use Windows, preferring an operating system such as Linux instead. Jeff Walsh of InfoWorld said that businesses with site licenses can save money by requesting refunds of Windows licenses included with purchased computers. Users can avoid the "Windows tax" altogether by assembling a computer from individually purchased parts or purchasing a computer from an OEM that does not bundle Windows. Some smaller OEMs and larger retail chains such as System76 have taken to specializing in Linux-based systems, turning the major suppliers' paucity of non-Windows offerings to their advantage. Beginning in 2007, Dell offered computers with Ubuntu pre-installed. In 2014, Hewlett-Packard stated that it sells "units bundled with a built-in OS and those without". Some Linux distributors also run 'partnership' programs to endorse suppliers of machines with their system pre-installed. Some vendors purchase computers from major OEMs, install Linux on them and resell them. Chris Clay of ZDNet wrote that employee discount programs create a financial incentive to purchase computers from a large manufacturer, even if the manufacturer does not offer computers without Windows. Boot locking concerns Microsoft requires that OEMs support UEFI secure boot on their products to qualify for the Windows 8 Logo Program. Concerns have been raised that OEMs might ship systems that do not allow users to disable secure boot or install signing keys for alternative operating systems. Such computers would be unable to boot any non-Windows operating system (unless that operating system was signed and its keys included with the computer), further complicating the issue of Windows refunds.
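As a practical aside, on a Linux system one way to see whether the firmware is currently enforcing Secure Boot is to read the SecureBoot UEFI variable through efivarfs. A minimal sketch in C follows, assuming efivarfs is mounted at its standard path; the GUID is the standard EFI global-variable GUID, and the first four bytes of an efivarfs file hold the variable's attributes:

    #include <stdio.h>

    int main(void) {
        /* Standard path under efivarfs; the GUID is the EFI global variable GUID. */
        const char *path =
            "/sys/firmware/efi/efivars/SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c";
        unsigned char buf[5];
        FILE *f = fopen(path, "rb");
        if (!f) {
            puts("SecureBoot variable not found (legacy BIOS boot, or efivarfs not mounted)");
            return 1;
        }
        /* First 4 bytes are the variable attributes; the 5th byte is the value. */
        if (fread(buf, 1, 5, f) == 5)
            printf("Secure Boot is %s\n", buf[4] ? "enabled" : "disabled");
        fclose(f);
        return 0;
    }

Whether the value can be changed by the user, rather than merely read, is exactly the point of contention described in this section.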
While Microsoft claims the OEMs would be free to decide which keys to include and how to manage them, competing OS vendors' relative lack of influence on the desktop OS market compared to Microsoft might mean that, even if signed versions of their operating systems were available, they could face difficulties getting hardware vendors to include their keys, especially if end users cannot manage those keys themselves. Boot locking was required for Windows Phone and RT devices, but not for Windows 10 Connected PCs. In January 2012, Microsoft confirmed it would require hardware manufacturers to enable secure boot on Windows 8 devices, and that x86/64 devices must provide the option to turn it off while ARM-based devices must not provide the option to turn it off. License refund policy Microsoft does not provide refunds for Windows licenses sold through an OEM, including licenses that come with the purchase of a computer or are pre-installed on a computer. A Microsoft Denmark representative stated that Microsoft's Windows license terms allow OEMs to offer a refund for just the Windows license. Microsoft's End User License Agreement for Windows 10 states that users who do not accept the license terms may not use the software and should contact the manufacturer or installer to determine its return policy for a refund or credit. In 1999, the relevant text similarly directed users who did not agree to the license terms to promptly contact the manufacturer for instructions on returning the unused product for a refund. In 1999, according to InfoWorld, "Some users are taking this EULA literally and plan to demand a cash refund." In 1999, a Microsoft representative described requesting a Windows refund on the basis of rejecting the license as "a technicality where someone is twisting the language a little bit to come up with the idea that they can run back to the OEM with this". Laurie J. Flynn of The New York Times characterized the license refund argument as using a loophole in the license agreement. OEM policies for refunding unused Windows licenses vary. Some OEMs have programs that specifically allow a user to receive a refund for an unused Windows license. Acer US has a Windows refund program where a user can ship a computer with an unused copy of Windows to the Acer service center and have the computer returned without Windows for a refund. Acer's policy requires the customer to return items at their own expense, and the balance received by the customer can be as low as €30. The same applies in the EU, where the reported refund as of 2014 was €40 for Windows 8. Other vendors, like Dell, have ad hoc procedures for users to request a refund of a Windows license; one user who received a £55.23 refund from Dell said of the process, "I was pretty gob-smacked that it was so easy". In some cases, vendors have asked that customers requesting refunds sign non-disclosure agreements. In 1999, a Toshiba representative stated that a case where a user obtained a $110 refund was "not the typical policy and not what other people will run into if they try it". Other vendors do not issue refunds for Windows licenses. In February 1999 InfoWorld reported that "No PC manufacturers are currently offering refunds for users who do not use Windows". According to a 1999 Maximum PC article, Dell did not provide refunds for Windows licenses, interpreting the license agreement to "treat the hardware and software as a single package that must be returned". In 2009, Sony refused to offer a partial refund for a customer who declined the Windows Vista EULA, instead offering a refund for the entire computer, which the customer declined.
Litigation by users denied a partial refund for an unused Windows license has resulted in rulings in France and Italy that bundling Microsoft Windows and then refusing to offer partial refunds for just the Windows license violates applicable law. In September 2014, the Supreme Court of Italy in ruling 19161/2014 decided that a laptop buyer was entitled to receive a refund of €140 for the price of a Microsoft Windows license and a Microsoft Works license on a computer, saying that bundling was "a commercial policy of forced distribution" and calling this practice "monopolistic in tendency"; this was confirmed later with ruling 4390/2016. In December 2020, the Court of Monza (Italy) in ruling 1734/2020 imposed upon the manufacturer punitive damages amounting to €20,000 for abuse of the appeal procedures. In India, bundling has been challenged by users as a violation of the Competition Act 2002; one Indian citizen has sent a legal notice to HP. However, in another license refund case, a French appellate court ruled in favor of the OEM, "holding that the sale at issue did not constitute the unfair commercial practice of coercive selling, which is not permitted under any circumstances, an unfair commercial tying practice, or a misleading or aggressive commercial practice." The case is pending before the Court of Cassation. In September 2016, the Court of Justice of the European Union ruled that "the sale of a computer equipped with pre-installed software does not in itself constitute an unfair commercial practice within the meaning of Directive 2005/29 when such an offer is not contrary to the requirements of professional diligence and does not distort the economic behaviour of [purchasers]." The Court also ruled that Directive 2005/29 does not require OEMs to include a separate price for an operating system license. Public response Websites have been created for the specific purpose of spreading information about the issue and educating others on their options for getting a refund. A 1999 rally opposing the bundling of Windows attracted about 100 protesters and gained media attention worldwide. The overall goal of such events has been to get OEMs to expand their selection of computers without a copy of Windows pre-installed and, while that goal remains unmet, to get them to revise and improve their refund policies. An analyst stated that refund actions by individual users were "a publicity stunt [that] has no impact". References Microsoft criticisms and controversies Windows XP
Operating System (OS)
918
PlayStation 3 system software The PlayStation 3 system software is the updatable firmware and operating system of the PlayStation 3. The base operating system used by Sony for the PlayStation 3 is a fork of both FreeBSD and NetBSD called CellOS. It uses XrossMediaBar as its graphical shell. The process of updating is almost identical to that of the PlayStation Portable, PlayStation Vita, and PlayStation 4. The software may be updated by downloading the update directly on the PlayStation 3, downloading it from the user's local Official PlayStation website to a PC and using a USB storage device to transfer it to the PlayStation 3, or installing the update from game discs containing update data. The initial slim PS3 SKU shipped with unique firmware offering new features, also seen in software release 3.00. Technology System The native operating system of the PlayStation 3 is CellOS, which is believed to be a fork of FreeBSD; TCP/IP stack fingerprinting identifies a PlayStation 3 as running FreeBSD, and the PlayStation 3 is known to contain code from FreeBSD and NetBSD. The 3D computer graphics APIs used in the PlayStation 3 are LibGCM and PSGL, based on OpenGL ES and Nvidia's Cg. LibGCM is a low-level API, and PSGL is a higher-level API, but most developers preferred to use LibGCM due to higher levels of performance. This is similar to the later PlayStation 4 console, which also has two APIs: the low-level GNM and the higher-level GNMX. Unlike the Software Development Kit (SDK) for mobile apps, Sony's PlayStation 3 SDK is only available to registered game development companies and contains software tools and an integrated hardware component. Because it requires a licensing agreement with Sony, which is considered expensive, a number of open-source and homebrew PS3 SDKs are available, in addition to a number of leaked PS3 SDKs. Graphical shell The PlayStation 3 uses the XrossMediaBar (XMB) as its graphical user interface, which is also used in the PlayStation Portable (PSP) handheld console, a variety of Sony BRAVIA HDTVs, Blu-ray disc players and many more Sony products. XMB displays icons horizontally across the screen that can be seen as categories. Users can navigate through them using the left and right buttons of the D-pad, which move the icons forward or back across the screen, highlighting just one at a time, as opposed to using any kind of pointer to select an option. When one category is selected, more specific options are then available to select, spread vertically above and below the selected icon. Users may navigate among these options by using the up and down buttons of the D-pad. The basic features offered by XMB implementations vary based on the device and software version. Apart from those appearing in the PSP console, such as category icons for Photos, Music and Games, the PS3 added Users, TV and Friends to the XMB. Also, XMB offers a degree of multitasking. In-game XMB features were added to the PS3 with firmware version 2.41, after early implementation problems. While XMB proved to be a successful user interface for Sony products such as the PSP and PS3, the next-generation Sony video game consoles such as the PlayStation 4 and the PlayStation Vita no longer use this user interface. Cooperation with handheld consoles The PlayStation 3 supports Remote Play with Sony's handheld game consoles, the PlayStation Portable and the PlayStation Vita.
However, unlike Remote Play between the PlayStation 4 and the PlayStation Vita, Remote Play on the PS3 only supported a select few titles, and results were often laggy. Nevertheless, Remote Play with the PS3 served as the testing bed for its much better integration with the PS4. Also, for users having both the PlayStation 3 and the PlayStation Vita, it is possible to share media files (videos, music and images) between them by transferring multimedia files directly from the PlayStation 3 to the PlayStation Vita, or vice versa. Furthermore, they can use a service called cross-buy, which allows them to buy certain games that support this feature one time and play them on both Sony platforms. In the case of most such games, saved games transfer back and forth between devices, allowing players to pick up from the moment they left off. There is also a feature called cross-play (or cross-platform play) covering any PlayStation Vita software title that can interact with a PlayStation 3 software title. Different software titles use cross-play in different ways. For example, Ultimate Marvel vs. Capcom 3 is a title supporting the cross-play feature, and the PS3 version of the game can be controlled using the PS Vita system. In addition, some PS3 games can be played on the PS Vita using the PlayStation Now streaming service. Non-game features Similar to many other consoles, the PlayStation 3 is capable of photo, audio, and video playback in a variety of formats. It also includes various photo slideshow options and several music visualizations. Furthermore, the PlayStation 3 is able to play Blu-ray Disc and DVD movies as well as audio CDs out of the box, and is also capable of adopting streaming media services such as Netflix. For a web browser, the PS3 uses the NetFront browser; unlike its successor the PS4, which uses the same modern WebKit core as Apple's Safari, the PS3 web browser receives a low score in HTML5 compliance testing. However, unlike the PS4, the PS3 is able to play Adobe Flash, including full-screen Flash. Early versions of the PlayStation 3 system software also provided a feature called OtherOS that was available on the PS3 systems prior to the slimmer models launched in September 2009. This feature enabled users to install an operating system such as Linux, but due to security concerns, Sony later removed this functionality through the 3.21 system software update. According to Sony Computer Entertainment (SCE), disabling this feature would help ensure that PS3 owners continue to have access to the broad range of gaming and entertainment content from SCE and its content partners on a more secure system. Sony was successfully sued in a class action over the removal of this feature. The settlement was approved in September 2016. Sony agreed to pay up to $55 to as many as 10 million PS3 owners but denied wrongdoing. Furthermore, the PlayStation 3 provides printing support. It can, for example, print images and web pages when a supported printer is connected via a USB cable or a local network. However, only a selection of printers from Canon, Epson, and Hewlett-Packard are compatible with the PS3. Backwards compatibility All PlayStation 3 consoles are able to play original PlayStation games (PS One discs and downloadable classics). However, not all PlayStation 3 models are backwards compatible with PlayStation 2 games.
In summary, early PS3 consoles such as the 20 GB and 60 GB launch models were backwards compatible with PS2 games because they had PS2 chips in them. Some later models, most notably the 80 GB Metal Gear Solid PS3 consoles, are also backwards compatible, in this case through partial software emulation: they no longer had the PS2 CPU in them but retained the PS2 GPU, allowing for reduced backward compatibility through hardware-assisted software emulation. All other later models, such as the 40 GB, 160 GB, PS3 Slim, and PS3 Super Slim, are not backwards compatible with PS2 games, though users can still enjoy PS One and PS3 games on them. According to Sony, when it removed backwards compatibility from the PS3, it was already three years into the console's lifecycle; by that time, the vast majority of consumers purchasing the PS3 cited PS3 games as a primary reason, meaning that PS2 compatibility was no longer necessary. Nevertheless, PS2 Classics, which are playable on the PS3, were afterwards officially introduced to the PlayStation Network for purchase, although they are only a selection of PS2 games republished in digital format, and unlike PS3 games, they lack Trophy support. Later, when the PlayStation 4 console was released, it was not backward compatible with PlayStation 3, PlayStation 2, or PlayStation 1 games, although limited PS2 backward compatibility was later introduced, and PS4 owners can play a selected group of PS3 games by streaming them over the Internet using the PlayStation Now cloud-based gaming service. LV0 keys The PlayStation 3 LV0 keys are a set of cryptographic keys which form the core of the PlayStation 3's security system. According to a news story on Polygon, with the LV0 keys users are able to circumvent restrictions placed by Sony, more commonly known as jailbreaking. The LV0 keys were released online by a group calling themselves "The Three Musketeers", granting users access to some of the most sensitive parts of the PlayStation 3. With access to these areas, users can decrypt security updates and work around the authorized PlayStation firmware. This allows PlayStation 3 firmware updates to be modified on a computer so that they can be run on a modified console. The Three Musketeers decided to release the code after a group of rival hackers obtained the code and planned to sell it. While this is not the first time the PlayStation 3 has been hacked, according to Eurogamer, "The release of the new custom firmware—and the LV0 decryption keys in particular—poses serious issues." It also says that "options Sony has in battling this leak are limited" since "the reveal of the LV0 key basically means that any system update released by Sony going forward can be decrypted with little or no effort whatsoever". History of updates The initial release of the PlayStation 3 system software was version 1.10, which appeared on 11 November 2006 in Japan and 17 November 2006 in North America and provided the PlayStation Network services and Remote Play for the 60 GB model. However, version 1.02 was included with some games. There were a number of updates in the 1.xx versions, which provided new features such as Account Management, compatibility of USB devices for PlayStation 2 format games, and support for USB webcams and Bluetooth keyboards and mice.
Version 1.80, released on 24 May 2007, added a number of relatively small new features, mostly related to media and videos, such as the ability to upscale standard DVD-Video to 1080p and to downscale Blu-ray video to 720p. Version 1.90, released on 24 July 2007, further added the Wallpaper feature for the background of the XMB, the ability to eject a game disc using the controller, and the ability to re-order game icons by format and creation date. This update also forced 24 Hz output for Blu-ray over HDMI, and introduced bookmarks and a security function to the web browser. The last version in the 1.xx series was 1.94, released on 23 October 2007, which added support for DualShock 3 controllers. As with the version 1.xx series, there were a number of versions in the 2.xx and 3.xx series, released between 8 November 2007 and 20 September 2011. There were quite a few noticeable changes; in version 2.10 alone there were new features such as the addition of a Voice Changer feature with five presets across high and low tones, a new music bitmapping process specifically designed for the PS3 to provide enhanced audio playback, as well as support for DivX and WMV playback and Blu-ray disc profile 1.1 for picture-in-picture. Version 2.50, released on 15 October 2008, was the update in the 2.xx series that contained the largest number of new features or changes; among them were support for the official PS3 Bluetooth headset, in-game screenshots and Adobe Flash 9. A recovery menu (or safe mode) was also introduced in this version. Later versions in the 2.xx series such as 2.7x, 2.85 and 2.90 were distributed with the PS3 "slim". Like version 2.00, versions such as 3.00, 3.10, 3.30, 3.40, and 3.70 all introduced a relatively large number of new features or changes, such as support for new Dynamic Custom Themes, improvements in the Internet browser, Trophy enhancements, and a new [Video Editor & Uploader] application. The most noticeable change in version 4.00, released on 30 November 2011, was the added support for the PlayStation Vita handheld game console. For example, [PS Vita System] was added as an option and [PS Vita System Application Utility] was added as a feature under [Game]. With this update, the PlayStation 3 also gained the ability to transfer videos, images, music, and game data to and from the PlayStation Vita. Version 4.10, released on 8 February 2012, also added improvements to the Internet browser, including some support for HTML5 and improvements to its display speed and web page layout accuracy. Later versions in the 4.xx series all made a few changes to the system, mostly to improve stability and operation quality during the use of some applications, in addition to adding new features such as displaying closed captions when playing BDs and DVDs and a "Check for Update" entry in the options menu for a game. The PlayStation 3 system software is currently still being updated by Sony. Withdrawal of update 2.40 System software version 2.40, which included the in-game XMB feature and PlayStation 3 Trophies, was released on 2 July 2008; however, it was withdrawn later the same day because a small number of users were unable to restart their consoles after performing the update. The fault was explained to have been caused by certain system administrative data contained on the HDD. The issue was addressed in version 2.41 of the system software, released on 8 July 2008.
Class action suit filed over update 3.0 System software version 3.0 was released on 1 September 2009. Shortly after its release, a number of users complained that the system update caused their system's Blu-ray drive to malfunction. In addition, John Kennedy of Florida filed a class action suit against Sony Computer Entertainment America (SCEA). Kennedy had purchased a PlayStation 3 in January 2009 and claimed it worked perfectly until he installed the required firmware update 3.0, at which point the Blu-ray drive in his system ceased functioning properly. Sony later released a statement, "SCEA is aware of reports that PS3 owners are experiencing isolated issues with their PS3 system since installing the most recent system software update (v3.00)," and released software update 3.01 on 15 September 2009. However, after installing 3.01, the plaintiff alleged that the problems were not solved and that the new update created new issues as well. Class action suits filed over update 3.21 The removal of the "OtherOS" feature from older models of the PS3, which Sony attributed to security issues (possibly related to the exploit released by geohot), caused an uproar in the PlayStation community, and several lawsuits were filed over it. The first was filed on behalf of PS3 owners by Anthony Ventura. The suit states that removing the feature constitutes breach of contract, false advertising and a handful of other violations of business-practice and consumer-protection laws, as the feature was touted by Sony when these systems were new as a way consumers could turn their machines into a basic PC. It cites that the feature was "extremely valuable" and one of the main reasons that many people paid more for the PS3 over a competing console like a Wii or an Xbox 360. It also elaborates that anyone who does not accept the update can no longer play future games or future Blu-ray movie releases. Later on, two more suits were filed by other members of the PlayStation 3 community. The first of these newer lawsuits was filed by Todd Densmore and Antal Herz, who claim Sony rendered several PlayStation 3 features they paid for "inoperable" as a result of the release of firmware 3.21. The second was filed by Jason Baker, Sean Bosquett, Paul Graham, and Paul Vannatta, and claims, among other things, that they "lost money by purchasing a PS3 without receiving the benefit of their bargain because the product is not what it was claimed to be - a game console that would provide both the Other OS feature and gaming functions." A fourth lawsuit was filed by Keith Wright and seeks compensation equal to the cost of the console. A fifth lawsuit was filed by Jeffrey Harper and Zachary Kummer, which calls for a jury trial. A sixth lawsuit was filed by Johnathon Huber and includes quotes from both the EU and US PlayStation blogs. Sony responded to the lawsuits by requesting a dismissal on the grounds that "no one cared about the feature" and that the filings cite quotes from third-party websites, the instruction manual and the PlayStation Web site, claiming these are invalid proof and that Sony can disable PSN and the other advertised features (playing games that require newer firmware, etc.) as it wishes. The lawyers for the plaintiffs reviewed the request and said that this is fairly common at this stage of the process and that the case would be reviewed before a judge in November 2010.
In February 2011, all claims of false advertising in the case were dismissed, but the plaintiffs were allowed to appeal and amend the case, and the other claims that the removal violated the Computer Fraud and Abuse Act were allowed to go forward. In March 2011, the plaintiffs amended their complaint to refute Sony's claims that it was within its rights under the TOS and warranty to remove the feature, adding more details to their claims, including breach of warranty, breach of implied warranty, breach of contract, unjust enrichment, and breach of several California unfair business practices laws. In April 2011, SCEA again asked that the case be dismissed, claiming that the plaintiffs' refiled complaint was insufficient and that the plaintiffs were hackers who wanted to violate Sony's intellectual property, and asked the judge to grant search rights on their PS3 systems. SCEA also claimed that it was not the division solely responsible for the removal and should not be held responsible, despite information to the contrary. On 18 April 2011, the plaintiffs fired back at Sony's renewed efforts to have the case dismissed by pointing out that Sony had made many of the same claims before and that they had been dismissed by the court, and also pointed out several legal precedents under California law that refuted Sony's claims. In December 2011, the whole case was dismissed on the grounds that the plaintiffs had failed to prove that they could expect the "Other OS" feature beyond the warranty of the machine. However, this decision was overturned in a 2014 appellate court decision finding that the plaintiffs had indeed made clear and sufficiently substantial claims. Ultimately, in 2016, Sony settled with users who installed Linux or purchased a PlayStation 3 based upon the alternative OS functionality. Withdrawal of update 4.45 System software version 4.45 was released on 18 June 2013; however, it was withdrawn one day later because a small number of users were unable to restart their consoles after performing the update. On 21 June 2013, Morgan Haro, a Community Manager for PlayStation Network, announced that the issue had been identified and that a new update was planned to resolve it. The system update that addressed this issue, version 4.46, was released on 27 June 2013, and a fix for those affected by system version 4.45 was also provided by Sony. See also Media Go Linux for PlayStation 3 LocationFree Player Qriocity XrossMediaBar Other gaming platforms from Sony: PlayStation 4 system software PlayStation Portable system software PlayStation Vita system software Other gaming platforms from the next generation: Wii U system software Xbox One system software Nintendo 3DS system software Nintendo Switch system software Other gaming platforms from this generation: Wii system software Xbox 360 system software Nintendo DSi system software References External links Official PlayStation 3 System Software Update page • Australia • New Zealand • United Kingdom • United States Update History PlayStation Blog (firmware announcements) PS3 compatible printers Software Game console operating systems Unix variants 2006 software Proprietary operating systems
Operating System (OS)
919
System Management Mode System Management Mode (SMM, sometimes called ring −2 in reference to protection rings) is an operating mode of x86 central processor units (CPUs) in which all normal execution, including the operating system, is suspended. An alternate software system, which usually resides in the computer's firmware, or a hardware-assisted debugger, is then executed with high privileges. It was first released with the Intel 386SL. While special SL versions were initially required for SMM, Intel incorporated SMM in its mainline 486 and Pentium processors in 1993. AMD implemented Intel's SMM with the Am386 processors in 1991. It is available in all later microprocessors in the x86 architecture. Some ARM processors also include a Management Mode for the system firmware (such as UEFI). Operation SMM is a special-purpose operating mode provided for handling system-wide functions like power management, system hardware control, or proprietary OEM-designed code. It is intended for use only by system firmware (BIOS or UEFI), not by application software or general-purpose system software. The main benefit of SMM is that it offers a distinct and easily isolated processor environment that operates transparently to the operating system or executive and software applications. In order to achieve transparency, SMM imposes certain rules. SMM can be entered only through an SMI (System Management Interrupt). The processor executes the SMM code in a separate address space (SMRAM) that has to be made inaccessible to the other operating modes of the CPU by the firmware. In SMM the processor can address up to 4 GB of memory, as in huge real mode; in x86-64 processors, SMM can address more than 4 GB of memory. Usage Initially, System Management Mode was used for implementing power management and hardware control features like Advanced Power Management (APM). Over time, BIOS manufacturers and OEMs have relied on SMM for newer functionality like Advanced Configuration and Power Interface (ACPI). Some uses of System Management Mode are: Handle system events like memory or chipset errors Manage system safety functions, such as shutdown on high CPU temperature System Management BIOS (SMBIOS) Advanced Configuration and Power Interface Control power management operations, such as managing the voltage regulator module and LPCIO (super I/O or embedded controller) Emulate a USB mouse/keyboard as a PS/2 mouse/keyboard (often referred to as USB legacy support) Centralize system configuration, such as on Toshiba and IBM/Lenovo notebook computers Manage the Trusted Platform Module (TPM) BIOS-specific hardware control programs, including USB hotswap and Thunderbolt hotswap in operating system runtime System Management Mode can also be abused to run high-privileged rootkits, as demonstrated at Black Hat 2008 and 2015. Entering SMM SMM is entered via the SMI (system management interrupt), which is invoked by: Motherboard hardware or chipset signaling via a designated pin SMI# of the processor chip. This signal can be an independent event. A software SMI triggered by system software via an I/O access to a location considered special by the motherboard logic (port 0B2h is common). An I/O write to a location which the firmware has requested that the processor chip act on. On entering SMM, the processor fetches its first instruction at the address SMBASE (SMBASE register content) + 8000h (by default 38000h), using registers CS = 3000h and EIP = 8000h.
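To make the arithmetic concrete, the following is a minimal illustrative sketch (ordinary application code, not firmware; it assumes the power-on default SMBASE value of 30000h) showing that the real-mode pair CS = 3000h, EIP = 8000h and the formula SMBASE + 8000h name the same entry address:

    // Illustrative sketch: how the default SMM entry address of 38000h
    // falls out of real-mode segment arithmetic. The SMBASE value of
    // 0x30000 is the power-on default; firmware may relocate it.
    public class SmmEntry {
        // In real mode a 16-bit segment value is shifted left by 4 bits
        // (one hexadecimal digit) and added to the instruction pointer.
        static long realModeAddress(int segment, int offset) {
            return ((long) segment << 4) + offset;
        }

        public static void main(String[] args) {
            long viaSegmentArithmetic = realModeAddress(0x3000, 0x8000);
            long viaSmbase = 0x30000L + 0x8000L;
            // Both lines print 38000h, the default SMM entry point.
            System.out.printf("%05Xh%n", viaSegmentArithmetic);
            System.out.printf("%05Xh%n", viaSmbase);
        }
    }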
The CS register value (3000h) reflects the processor's use of real-mode memory addressing while in SMM: the CS value is internally appended with 0h on its rightmost end (that is, shifted left by one hexadecimal digit), yielding the base address 30000h. Problems By design, the operating system cannot override or disable the SMI. Because of this, it is a target for malicious rootkits to reside in, including NSA's "implants", which have individual code names for specific hardware, like SOUFFLETROUGH for Juniper Networks firewalls, SCHOOLMONTANA for J-series routers of the same company, DEITYBOUNCE for Dell, or IRONCHEF for HP ProLiant servers. Improperly designed and insufficiently tested SMM BIOS code can make wrong assumptions and fail to work properly when interrupting some other x86 operating modes like PAE or 64-bit long mode. According to the documentation of the Linux kernel, around 2004 such buggy implementations of the USB legacy support feature were a common cause of crashes, for example on motherboards based on the Intel E7505 chipset. Since the SMM code (SMI handler) is installed by the system firmware (BIOS), the OS and the SMM code may have incompatible expectations about hardware settings, such as different ideas of how the Advanced Programmable Interrupt Controller (APIC) should be set up. Operations in SMM take CPU time away from the applications, operating-system kernel, and hypervisor, with the effects magnified for multicore processors, since each SMI causes all cores to switch modes. There is also some overhead involved with switching in and out of SMM, since the CPU state must be stored to memory (SMRAM) and any write-back caches must be flushed. This can destroy real-time behavior and cause clock ticks to get lost. The Windows and Linux kernels define an "SMI Timeout" setting: a period within which SMM handlers must return control to the operating system, or else it will "hang" or "crash". The SMM may disrupt the behavior of real-time applications with constrained timing requirements. A logic analyzer may be required to determine whether the CPU has entered SMM (by checking the state of the SMIACT# pin of the CPU). Recovering the SMI handler code to analyze it for bugs, vulnerabilities, and secrets requires a logic analyzer or disassembly of the system firmware. See also Coreboot includes an open-source SMM/SMI handler implementation for some chipsets Intel 80486SL LOADALL MediaGX, a processor which emulates nonexistent hardware via SMM Ring −3 Unified Extensible Firmware Interface (UEFI) Basic Input/Output System (BIOS) References Further reading AMD Hammer BIOS and Kernel Developer's Guide, Chapter 6 (archived from the original on 7 December 2008) Intel 64 and IA-32 Architectures Developer's Manual, Volume 3C, Chapter 34 Rootkits X86 operating modes BIOS ARM architecture
Operating System (OS)
920
List of Soviet computer systems This is a list of Soviet computer systems. The Russian abbreviation EVM (ЭВМ), present in some of the names below, means “electronic computing machine” (электронная вычислительная машина). List of hardware Ministry of Radio Technology Computer systems from the Ministry of Radio Technology: Agat (Агат) — Apple II clone ES EVM (ЕС ЭВМ) — IBM mainframe clone ES PEVM (ЕС ПЭВМ) — IBM PC compatible M series — series of mainframes and mini-computers Minsk (Минск) Poisk (Поиск) — IBM PC XT clone Setun (Сетунь) — unique balanced ternary computer Strela (Стрела) Ural (Урал) — mainframe series Vector-06C (Вектор-06Ц) Ministry of Instrument Making Computer systems from the Ministry of Instrument Making: Aragats (Арагац) Iskra (Искра) — common name for many computers with different architectures Iskra-1030 — Intel 8086 XT clone KVM-1 (КВМ-1) SM EVM (СМ ЭВМ) — most models were PDP-11 clones, while some others were HP 2100, VAX or Intel compatible Ministry of the Electronics Industry Computer systems from the Ministry of the Electronics Industry: Elektronika (Электроника) family DVK family (ДВК) — PDP-11 clones Elektronika BK-0010 (БК-0010, БК-0011) — LSI-11 clone home computer UKNC (УКНЦ) — educational, PDP-11-like Elektronika 60, Elektronika 100 Elektronika 85 — clone of the DEC Professional 350 (F11) Elektronika 85.1 — clone of the DEC Professional 380 (J11) Elektronika D3-28 Elektronika SS BIS (Электроника СС БИС) — Cray clone Soviet Academy of Sciences BESM (БЭСМ) — series of mainframes Besta (Беста) — Unix box, Motorola 68020-based, Sun-3 clone Elbrus (Эльбрус) — high-end mainframe series Kronos (Кронос) MESM (МЭСМ) — first Soviet Union computer (1950) M-1 — one of the earliest stored-program computers (1950–1951) ZX Spectrum clones ATM Turbo Dubna 48K — running at half the speed of the original Hobbit Pentagon Radon 'Z' Scorpion Other 5E** (5Э**) series — military computers 5E51 (5Э51) 5E53 (5Э53) 5E76 (5Э76) — IBM/360 clone, military version 5E92 (5Э92) 5E92b (5Э92б) A series — ES EVM-compatible military computers Argon — a series of military real-time computers AS-6 (АС-6) — multiprocessor computing complex; the name is a Russian abbreviation for "Connection Equipment - 6" Dnepr (Днепр) GVS-100 (ГВС-100, Гибридная вычислительная система) — hybrid computer system Irisha (Ириша) Juku educational computer Kiev (Киев) Korvet (Корвет) Krista (Криста) Micro-80 (Микро-80) — experimental PC, based on an 8080-compatible processor Microsha (Микроша) — modification of the Radio-86RK MIR (МИР; models МИР-1 and МИР-2) Nairi (Наири) Orion-128 (Орион-128) Promin (Проминь) PS-2000, PS-3000 — multiprocessor supercomputers in the 1980s Razdan (Раздан) Radon — real-time computer, designed for anti-aircraft defense Radio-86RK — simplified and modified version of the Micro-80 Sneg (Снег) Specialist (Специалист) SVS TsUM-1 (ЦУМ-1) TIA-MC-1 — an arcade system UM (УМ) UT-88 Vesna and Sneg — early mainframes List of operating systems For Kronos Kronos For BESM D-68 (Д-68, Диспетчер-68, Dispatcher-68) DISPAK (“Диспетчер Пакетов,” Dispatcher of Packets) DUBNA (“ДУБНА”) For ES EVM DOS/ES — Disk Operating System for ES EVM OS/ES — Operating System for ES EVM For SM EVM RAFOS (РАФОС), FOBOS (ФОБОС) and FODOS (ФОДОС) — RT-11 clones OSRV (ОСРВ) — RSX-11M clone, one of the most popular Soviet multi-user systems DEMOS — BSD-based Unix-like; later ported to x86 and some other architectures INMOS (ИНМОС,
Инструментальная мобильная операционная система) For 8-bit microcomputers MicroDOS (МикроДОС) — CP/M 2.2 clone For ZX Spectrum clones iS-DOS, TASiS DNA-OS For different platforms MISS (Multipurpose Interactive timeSharing System) - ES EVM ES1010, ES EVM ES1045, D3-28M, PC-compatible, etc. MOS (operating system) - a Soviet clone of Unix in the 1980s See also History of computing in the Soviet Union List of Soviet microprocessors List of Russian IT developers List of Russian microprocessors Internet in Russia References External links Russian Virtual Computer Museum Museum of the USSR Computers history Pioneers of Soviet Computing Archive software and documentation for Soviet computers UK-NC, DVK and BK0010. Computing-related lists
Operating System (OS)
921
OverDrive Media Console OverDrive Media Console is a proprietary, freeware application developed by OverDrive, Inc. for use with its digital distribution services for libraries, schools, and retailers. The application enables users to access audiobooks, eBooks, periodicals, and videos borrowed from libraries and schools—or purchased from booksellers—on devices running Android, BlackBerry, iOS (iPad/iPhone/iPod), and Windows, as well as on Mac and Windows desktop and laptop computers. In October 2012, Barnes & Noble added the OverDrive Media Console app to the NOOK App Store, enabling Nook Color, Nook Tablet, and later Nook HD, Nook HD+ and Samsung Galaxy Tab 4 7.0 Nook users to download audiobooks, eBooks, and videos directly to their devices. The OverDrive app is also available for users of the Kindle Fire in the Amazon Apps Store. Also in October 2012, OverDrive released OverDrive Media Console for Windows 8, which supports devices running Microsoft's Windows 8 and Windows RT operating systems. Reviewers have rated the OverDrive Media Console app among the best eReading applications for the BlackBerry, the iPad, and the iPhone. OverDrive Media Console supports a variety of formats, including EPUB and PDF for reading, and MP3 and Windows Media Audio (WMA) for listening. WMA content is supported only on the Windows version of OverDrive Media Console, which dramatically reduces the number of titles available on other operating systems. However, many publishers allow transfers of WMA-format materials to Apple devices after downloading to a Windows computer. See also 3M (3M Cloud Library) Baker & Taylor (Axis360) Digital audio players Windows Media Player References External links OverDrive EPUB readers Proprietary software Windows media players
Operating System (OS)
922
Odin (code conversion software) In computing, Odin is a project to run Microsoft Windows programs on OS/2 or convert them to OS/2 native format. It also provides the Odin32 API to compile Win32 (Windows API) programs for OS/2. The project's goals are: Every Windows program should load and operate properly. Create a complete OS/2 implementation of the Win32 API. Although this is far from complete, much of the Win32 API is not widely used, so partial implementation will give usable results. Odin32 is already used commercially for the OS/2 port of the Opera web browser. Odin is included in the ArcaOS operating system. Technical overview Odin achieves binary compatibility by converting Win32 executables and dynamic-link libraries to OS/2 format. Conversion can be done on the fly (each time the application is run) or permanently. Odin does not use emulation or a compatibility layer. Odin identifies itself to Windows applications as Windows 2000 Service Pack 2. Odin uses code from Wine, which runs Win32 applications on Unix-like operating systems. Name The project is named after Odin, the god of wisdom and supreme god of Germanic and Norse mythology. References External links Project Website Application compatibility list OS/2 emulation software Software derived from or incorporating Wine
Operating System (OS)
923
Sun Microsystems Sun Microsystems, Inc. (Sun for short) was an American technology company that sold computers, computer components, software, and information technology services, and that created the Java programming language, the Solaris operating system, ZFS, the Network File System (NFS), VirtualBox, and SPARC microprocessors. Sun contributed significantly to the evolution of several key computing technologies, among them Unix, RISC processors, thin client computing, and virtualized computing. Sun was founded on February 24, 1982. At its height, the Sun headquarters were in Santa Clara, California (part of Silicon Valley), on the former west campus of the Agnews Developmental Center. Sun products included computer servers and workstations built on its own RISC-based SPARC processor architecture, as well as on x86-based AMD Opteron and Intel Xeon processors. Sun also developed its own storage systems and a suite of software products, including the Solaris operating system, developer tools, Web infrastructure software, and identity management applications. Technologies included the Java platform and NFS. In general, Sun was a proponent of open systems, particularly Unix. It was also a major contributor to open-source software, as evidenced by its $1 billion purchase, in 2008, of MySQL, an open-source relational database management system. At various times, Sun had manufacturing facilities in several locations worldwide, including Newark, California; Hillsboro, Oregon; and Linlithgow, Scotland. However, by the time the company was acquired by Oracle, it had outsourced most manufacturing responsibilities. On April 20, 2009, it was announced that Oracle Corporation would acquire Sun for US$7.4 billion. The deal was completed on January 27, 2010. History The initial design for what became Sun's first Unix workstation, the Sun-1, was conceived by Andy Bechtolsheim when he was a graduate student at Stanford University in Palo Alto, California. Bechtolsheim originally designed the SUN workstation for the Stanford University Network communications project as a personal CAD workstation. It was designed around the Motorola 68000 processor with an advanced memory management unit (MMU) to support the Unix operating system with virtual memory support. He built the first examples from spare parts obtained from Stanford's Department of Computer Science and Silicon Valley supply houses. On February 24, 1982, Scott McNealy, Andy Bechtolsheim, and Vinod Khosla, all Stanford graduate students, founded Sun Microsystems. Bill Joy of Berkeley, a primary developer of the Berkeley Software Distribution (BSD), joined soon after and is counted as one of the original founders. The Sun name is derived from the initials of the Stanford University Network. Sun was profitable from its first quarter in July 1982. By 1983 Sun was known for producing 68k-based systems with high-quality graphics that were the only computers other than DEC's VAX to run 4.2BSD. It licensed the computer design to other manufacturers, which typically used it to build Multibus-based systems running Unix from UniSoft. Sun's initial public offering was in 1986 under the stock symbol SUNW, for Sun Workstations (later Sun Worldwide). The symbol was changed in 2007 to JAVA; Sun stated that the brand awareness associated with its Java platform better represented the company's current strategy.
Sun's logo, which features four interleaved copies of the word sun in the form of a rotationally symmetric ambigram, was designed by professor Vaughan Pratt, also of Stanford. The initial version of the logo was orange and had the sides oriented horizontally and vertically, but it was subsequently rotated to stand on one corner and re-colored purple, and later blue. The "dot-com bubble" and aftermath During the dot-com bubble, Sun began making more money, with its stock rising as high as $250 per share. It also began spending much more, hiring workers and building itself out. Some of this was because of genuine demand, but much was from web start-up companies anticipating business that would never happen. In 2000, the bubble burst. Sales in Sun's important hardware division went into free-fall as customers closed shop and auctioned off high-end servers. Several quarters of steep losses led to executive departures, rounds of layoffs, and other cost cutting. In December 2001, the stock fell to its 1998 pre-bubble level of about $100. It continued to fall, faster than many other technology companies. A year later, it had fallen below $10 (a tenth of what it was in 1990), but it eventually bounced back to $20. In mid-2004, Sun closed its Newark, California, factory and consolidated all manufacturing in Hillsboro, Oregon, and Linlithgow, Scotland. In 2006, the rest of the Newark campus was put on the market. Post-crash focus In 2004, Sun canceled two major processor projects which emphasized high instruction-level parallelism and operating frequency. Instead, the company chose to concentrate on processors optimized for multi-threading and multiprocessing, such as the UltraSPARC T1 processor (codenamed "Niagara"). The company also announced a collaboration with Fujitsu to use the Japanese company's processor chips in mid-range and high-end Sun servers. These servers were announced on April 17, 2007, as the M-Series, part of the SPARC Enterprise series. In February 2005, Sun announced the Sun Grid, a grid computing deployment on which it offered utility computing services priced at US$1 per CPU/hour for processing and US$1 per GB/month for storage. This offering built upon an existing 3,000-CPU server farm used for internal R&D for over 10 years, which Sun marketed as being able to achieve 97% utilization. In August 2005, the first commercial use of this grid was announced, for financial risk simulations, which were later launched as its first software-as-a-service product. In January 2005, Sun reported a net profit of $19 million for its fiscal 2005 second quarter, the first profitable quarter in three years. This was followed by a net loss of $9 million on a GAAP basis for the third quarter of 2005, as reported on April 14, 2005. In January 2007, Sun reported a net GAAP profit of $126 million on revenue of $3.337 billion for its fiscal second quarter. Shortly following that news, it was announced that Kohlberg Kravis Roberts (KKR) would invest $700 million in the company. Sun had engineering groups in Bangalore, Beijing, Dublin, Grenoble, Hamburg, Prague, St. Petersburg, Tel Aviv, Tokyo, Canberra and Trondheim. In 2007–2008, Sun posted revenue of $13.8 billion and had $2 billion in cash. First-quarter 2008 losses were $1.68 billion; revenue fell 7% to $12.99 billion. Sun's stock lost 80% of its value between November 2007 and November 2008, reducing the company's market value to $3 billion. With falling sales to large corporate clients, Sun announced plans to lay off 5,000 to 6,000 workers, or 15–18% of its work force.
It expected to save $700 million to $800 million a year as a result of the moves, while also taking up to $600 million in charges. Sun acquisitions 1987: Trancept Systems, a high-performance graphics hardware company 1987: Sitka Corp, networking systems linking the Macintosh with IBM PCs 1987: Centram Systems West, maker of networking software for PCs, Macs and Sun systems 1988: Folio, Inc., developer of intelligent font scaling technology and the F3 font format 1991: Interactive Systems Corporation's Intel/Unix OS division, from Eastman Kodak Company 1992: Praxsys Technologies, Inc., developers of the Windows emulation technology that eventually became Wabi 1994: Thinking Machines Corporation hardware division 1996: Lighthouse Design, Ltd. 1996: Cray Business Systems Division, from Silicon Graphics 1996: Integrated Micro Products, specializing in fault-tolerant servers 1996: Thinking Machines Corporation software division February 1997: LongView Technologies, LLC August 1997: Diba, technology supplier for the information appliance industry September 1997: Chorus Systèmes SA, creators of ChorusOS November 1997: Encore Computer Corporation's storage business 1998: RedCape Software 1998: i-Planet, a small software company that produced the "Pony Espresso" mobile email client—Sun later used its name (sans hyphen) for the Sun-Netscape software alliance June 1998: Dakota Scientific Software, Inc.—development tools for high-performance computing July 1998: NetDynamics—developers of the NetDynamics Application Server October 1998: Beduin, a small software company that produced the "Impact" small-footprint Java-based Web browser for mobile devices 1999: Star Division, German software company, and with it StarOffice, which was later released as open source under the name OpenOffice.org 1999: MAXSTRAT Corporation, a company in Milpitas, California, selling Fibre Channel storage servers October 1999: Forté Software, an enterprise software company specializing in integration solutions and developer of the Forte 4GL 1999: TeamWare 1999: NetBeans, produced a modular IDE written in Java, based on a student project at Charles University in Prague March 2000: Innosoft International, Inc., a software company specializing in highly scalable MTAs (PMDF) and directory services July 2000: Gridware, a software company whose products managed the distribution of computing jobs across multiple computers September 2000: Cobalt Networks, an Internet appliance manufacturer, for $2 billion December 2000: HighGround, with a suite of Web-based management solutions 2001: LSC, Inc., an Eagan, Minnesota company that developed the Storage and Archive Management File System (SAM-FS) and Quick File System (QFS) file systems for backup and archive March 2001: InfraSearch, a peer-to-peer search company based in Burlingame March 2002: Clustra Systems June 2002: Afara Websystems, developed SPARC processor-based technology September 2002: Pirus Networks, intelligent storage services November 2002: Terraspring, infrastructure automation software June 2003: Pixo, added to the Sun Content Delivery Server August 2003: CenterRun, Inc. December 2003: Waveset Technologies, identity management January 2004: Nauticus Networks February 2004: Kealia, founded by original Sun founder Andy Bechtolsheim, developed AMD-based 64-bit servers January 2005: SevenSpace, a multi-platform managed services provider May 2005: Tarantella, Inc.
(formerly known as Santa Cruz Operation (SCO)), for $25 million June 2005: SeeBeyond, a Service-Oriented Architecture (SOA) software company, for $387 million June 2005: Procom Technology, Inc.'s NAS IP assets August 2005: StorageTek, data storage technology company, for $4.1 billion February 2006: Aduva, software for Solaris and Linux patch management October 2006: Neogent April 2007: SavaJe, the SavaJe OS, a Java OS for mobile phones September 2007: Cluster File Systems, Inc. November 2007: Vaau, Enterprise Role Management and identity compliance solutions February 2008: MySQL AB, the company offering the open source database MySQL, for $1 billion February 2008: Innotek GmbH, developer of the VirtualBox virtualization product April 2008: Montalvo Systems, x86 microprocessor startup acquired before first silicon January 2009: Q-layer, a software company with cloud computing solutions Major stockholders As of May 11, 2009, the following shareholders held over 100,000 common shares of Sun and, at the $9.50 per share offered by Oracle, received the amounts indicated when the acquisition closed. Hardware For the first decade of Sun's history, the company positioned its products as technical workstations, competing successfully as a low-cost vendor during the Workstation Wars of the 1980s. It then shifted its hardware product line to emphasize servers and storage. High-level telecom control systems, such as Operational Support Systems, predominantly used Sun equipment. Motorola-based systems Sun originally used Motorola 68000 family central processing units for the Sun-1 through Sun-3 computer series. The Sun-1 employed a 68000 CPU; the Sun-2 series, a 68010. The Sun-3 series was based on the 68020, with the later Sun-3x using the 68030. SPARC-based systems In 1987, the company began using SPARC, a RISC processor architecture of its own design, in its computer systems, starting with the Sun-4 line. SPARC was initially a 32-bit architecture (SPARC V7) until the introduction of the SPARC V9 architecture in 1995, which added 64-bit extensions. Sun developed several generations of SPARC-based computer systems, including the SPARCstation, Ultra, and Sun Blade series of workstations, and the SPARCserver, Netra, Enterprise, and Sun Fire line of servers. In the early 1990s the company began to extend its product line to include large-scale symmetric multiprocessing servers, starting with the four-processor SPARCserver 600MP. This was followed by the 8-processor SPARCserver 1000 and the 20-processor SPARCcenter 2000, which were based on work done in conjunction with Xerox PARC. In 1995 the company introduced the Sun Ultra series, machines equipped with the first 64-bit implementation of SPARC processors (UltraSPARC). In the late 1990s the transformation of the product line in favor of large 64-bit SMP systems was accelerated by the acquisition of the Cray Business Systems Division from Silicon Graphics. Their 32-bit, 64-processor Cray Superserver 6400, related to the SPARCcenter, led to the 64-bit Sun Enterprise 10000 high-end server (otherwise known as Starfire or E10K). In September 2004 Sun made available systems with the UltraSPARC IV, the first multi-core SPARC processor. It was followed by the UltraSPARC IV+ in September 2005 and its revisions with higher clock speeds in 2007. These CPUs were used in the most powerful, enterprise-class, high-end CC-NUMA servers developed by Sun, such as the Sun Fire E15K and the Sun Fire E25K.
In November 2005 Sun launched the UltraSPARC T1, notable for its ability to concurrently run 32 threads of execution on 8 processor cores. Its intent was to drive more efficient use of CPU resources, which is of particular importance in data centers, where there is an increasing need to reduce power and air-conditioning demands, much of which comes from the heat generated by CPUs. The T1 was followed in 2007 by the UltraSPARC T2, which extended the number of threads per core from 4 to 8. Sun open-sourced the design specifications of both the T1 and T2 processors via the OpenSPARC project. In 2006, Sun ventured into the blade server (high-density rack-mounted systems) market with the Sun Blade (distinct from the Sun Blade workstation). In April 2007 Sun released the SPARC Enterprise server products, jointly designed by Sun and Fujitsu and based on Fujitsu SPARC64 VI and later processors. The M-class SPARC Enterprise systems include high-end reliability and availability features. Later T-series servers were also badged SPARC Enterprise rather than Sun Fire. In April 2008 Sun released servers with the UltraSPARC T2 Plus, an SMP-capable version of the UltraSPARC T2, available in two- or four-processor configurations. It was the first CoolThreads CPU with multi-processor capability, and it made it possible to build standard rack-mounted servers that could simultaneously process up to 256 CPU threads in hardware (4 processors × 8 cores × 8 threads, in the Sun SPARC Enterprise T5440), which was considered a record in the industry. Since 2010, all further development of Sun machines based on the SPARC architecture (including the new SPARC T-series servers and the SPARC T3 and T4 chips) has been done as part of Oracle Corporation's hardware division. x86-based systems In the late 1980s, Sun also marketed an Intel 80386-based machine, the Sun386i; this was designed to be a hybrid system, running SunOS but at the same time supporting DOS applications. It only remained on the market for a brief time. A follow-up "486i" upgrade was announced but only a few prototype units were ever manufactured. Sun's brief first foray into x86 systems ended in the early 1990s, as it decided to concentrate on SPARC and retire the last Motorola systems and 386i products, a move dubbed by McNealy as "all the wood behind one arrowhead". Even so, Sun kept its hand in the x86 world, as a release of Solaris for PC compatibles began shipping in 1993. In 1997 Sun acquired Diba, Inc., followed later by the acquisition of Cobalt Networks in 2000, with the aim of building network appliances (single-function computers meant for consumers). Sun also marketed a Network Computer (a term popularized and eventually trademarked by Oracle); the JavaStation was a diskless system designed to run Java applications. Although none of these business initiatives were particularly successful, the Cobalt purchase gave Sun a toehold for its return to the x86 hardware market. In 2002, Sun introduced its first general-purpose x86 system, the LX50, based in part on previous Cobalt system expertise. This was also Sun's first system announced to support Linux as well as Solaris. In 2003, Sun announced a strategic alliance with AMD to produce x86/x64 servers based on AMD's Opteron processor; this was followed shortly by Sun's acquisition of Kealia, a startup founded by original Sun founder Andy Bechtolsheim, which had been focusing on high-performance AMD-based servers. The following year, Sun launched the Opteron-based Sun Fire V20z and V40z servers, and the Java Workstation W1100z and W2100z workstations.
On September 12, 2005, Sun unveiled a new range of Opteron-based servers: the Sun Fire X2100, X4100 and X4200 servers. These were designed from scratch by a team led by Bechtolsheim to address heat and power consumption issues commonly faced in data centers. In July 2006, the Sun Fire X4500 and X4600 systems were introduced, extending a line of x64 systems that support not only Solaris, but also Linux and Microsoft Windows. On January 22, 2007, Sun announced a broad strategic alliance with Intel. Intel endorsed Solaris as a mainstream operating system and as its mission-critical Unix for its Xeon processor-based systems, and contributed engineering resources to OpenSolaris. Sun began using the Intel Xeon processor in its x64 server line, starting with the Sun Blade X6250 server module introduced in June 2007. On May 5, 2008, AMD announced that its Operating System Research Center (OSRC) had expanded its focus to include optimization of Sun's OpenSolaris and xVM virtualization products for AMD-based processors. Software Although Sun was initially known as a hardware company, its software history began with its founding in 1982; co-founder Bill Joy was one of the leading Unix developers of the time, having contributed the vi editor, the C shell, and significant work on developing TCP/IP and the BSD Unix OS. Sun later developed software such as the Java programming language and acquired software such as StarOffice, VirtualBox and MySQL. Sun used community-based and open-source licensing for its major technologies and supported its products with other open-source technologies. GNOME-based desktop software called the Java Desktop System (originally code-named "Madhatter") was distributed for the Solaris operating system, and at one point for Linux. Sun supported its Java Enterprise System (a middleware stack) on Linux. It released the source code for Solaris under the open-source Common Development and Distribution License, via the OpenSolaris community. Sun's positioning included a commitment to indemnify users of some software from intellectual property disputes concerning that software. It offered support services on a variety of pricing bases, including per-employee and per-socket. A 2006 report prepared for the EU by UNU-MERIT stated that Sun was the largest corporate contributor to open source movements in the world. According to this report, Sun's open source contributions exceeded the combined total of the next five largest commercial contributors. Operating systems Sun is best known for its Unix systems, which have a reputation for system stability and a consistent design philosophy. Sun's first workstation shipped with UniSoft V7 Unix. Later in 1982 Sun began providing SunOS, a customized 4.1BSD Unix, as the operating system for its workstations. In the late 1980s, AT&T tapped Sun to help it develop the next release of its branded UNIX, and in 1988 announced it would purchase up to a 20% stake in Sun. UNIX System V Release 4 (SVR4) was jointly developed by AT&T and Sun. Sun used SVR4 as the foundation for Solaris 2.x, which became the successor to SunOS 4.1.x (later retroactively named Solaris 1.x). By the mid-1990s, the ensuing Unix wars had largely subsided, AT&T had sold off its Unix interests, and the relationship between the two companies was significantly reduced. From 1992 Sun also sold Interactive Unix, an operating system it acquired when it bought Interactive Systems Corporation from Eastman Kodak Company.
This was a popular Unix variant for the PC platform and a major competitor to the market leader SCO UNIX. Sun's focus on Interactive Unix diminished in favor of Solaris on both SPARC and x86 systems; it was dropped as a product in 2001. Sun dropped the Solaris 2.x version-numbering scheme after the Solaris 2.6 release (1997); the following version was branded Solaris 7. This was the first 64-bit release, intended for the new UltraSPARC CPUs based on the SPARC V9 architecture. Within the next four years, the successors Solaris 8 and Solaris 9 were released in 2000 and 2002, respectively. Following several years of difficult competition and loss of server market share to competitors' Linux-based systems, Sun began to include Linux as part of its strategy in 2002. Sun supported both Red Hat Enterprise Linux and SUSE Linux Enterprise Server on its x64 systems; companies such as Canonical Ltd., Wind River Systems and MontaVista also supported their versions of Linux on Sun's SPARC-based systems. In 2004, after having cultivated a reputation as one of Microsoft's most vocal antagonists, Sun entered into a joint relationship with Microsoft, resolving various legal entanglements between the two companies and receiving US$1.95 billion in settlement payments. Sun supported Microsoft Windows on its x64 systems, and announced other collaborative agreements with Microsoft, including plans to support each other's virtualization environments. In 2005, the company released Solaris 10. The new version included a large number of enhancements to the operating system, as well as novel features previously unseen in the industry. Solaris 10 update releases continued through the next eight years, the last release from Sun Microsystems being Solaris 10 10/09. The following updates were released by Oracle under the new license agreement; the final release is Solaris 10 1/13. Previously, Sun offered a separate variant of Solaris called Trusted Solaris, which included augmented security features such as multilevel security and a least-privilege access model. Solaris 10 included many of the same capabilities as Trusted Solaris at the time of its initial release; Solaris 10 11/06 included Solaris Trusted Extensions, which gave it the remaining capabilities needed to make it the functional successor to Trusted Solaris. After the release of Solaris 10, its source code was opened under the CDDL free software license and developed in the open with the contributing OpenSolaris community, through the SXCE releases (which used SVR4 .pkg packaging) and the OpenSolaris releases (which used IPS packaging). Following the acquisition of Sun by Oracle, OpenSolaris continued to be developed in the open under illumos, with illumos distributions. Oracle Corporation continued to develop OpenSolaris into the next Solaris release, changing the license back to proprietary, and released it as Oracle Solaris 11 in November 2011. Java platform The Java platform was developed at Sun by James Gosling in the early 1990s, with the objective of allowing programs to function regardless of the device they were used on, sparking the slogan "Write once, run anywhere" (WORA). While this objective was not entirely achieved (prompting the riposte "Write once, debug everywhere"), Java is regarded as being largely hardware- and operating-system-independent. Java was initially promoted as a platform for client-side applets running inside web browsers. Early examples of Java applications were the HotJava web browser and the HotJava Views suite.
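As a minimal illustration of the write-once idea (a sketch, not Sun example code): a class compiled once with javac yields bytecode that runs unchanged on any conforming JVM, whether hosted on Solaris, Linux, Windows, or Mac OS X.

    // Hello.java — compile once with "javac Hello.java"; the resulting
    // Hello.class runs on any conforming JVM without recompilation.
    // Only the reported os.name differs from platform to platform.
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello from " + System.getProperty("os.name"));
        }
    }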
However, since then Java has been more successful on the server side of the Internet. The platform consists of three major parts: the Java programming language, the Java Virtual Machine (JVM), and several Java Application Programming Interfaces (APIs). The design of the Java platform is controlled by the vendor and user community through the Java Community Process (JCP). Java is an object-oriented programming language. Since its introduction in late 1995, it has become one of the world's most popular programming languages. Java programs are compiled to bytecode, which can be executed by any JVM, regardless of the environment. The Java APIs provide an extensive set of library routines. These APIs evolved into the Standard Edition (Java SE), which provides basic infrastructure and GUI functionality; the Enterprise Edition (Java EE), aimed at large software companies implementing enterprise-class application servers; and the Micro Edition (Java ME), used to build software for devices with limited resources, such as mobile devices. On November 13, 2006, Sun announced it would be licensing its Java implementation under the GNU General Public License; it released its Java compiler and JVM at that time. In February 2009 Sun entered a battle with Microsoft and Adobe Systems, which promoted rival platforms to build software applications for the Internet. JavaFX was a development platform for music, video and other applications that built on the Java programming language. Office suite In 1999, Sun acquired the German software company Star Division and with it the office suite StarOffice, which Sun later released as OpenOffice.org under both the GNU LGPL and the SISSL (Sun Industry Standards Source License). OpenOffice.org supported Microsoft Office file formats (though not perfectly), was available on many platforms (primarily Linux, Microsoft Windows, Mac OS X, and Solaris) and was used in the open source community. The principal differences between StarOffice and OpenOffice.org were that StarOffice was supported by Sun, was available either as a single-user retail box kit or as per-user blocks of licensing for the enterprise, and included a wider range of fonts and document templates and a commercial-quality spellchecker. StarOffice also contained commercially licensed functions and add-ons; in OpenOffice.org these were either replaced by open-source or free variants, or were not present at all. Both packages had native support for the OpenDocument format. Derivatives of OpenOffice.org continue to be developed, including LibreOffice, Collabora Online, Apache OpenOffice and NeoOffice. Virtualization and datacenter automation software In 2007, Sun announced the Sun xVM virtualization and datacenter automation product suite for commodity hardware. Sun also acquired VirtualBox in 2008. Earlier virtualization technologies from Sun, like Dynamic System Domains and Dynamic Reconfiguration, were specifically designed for high-end SPARC servers, and Logical Domains only supports the UltraSPARC T1/T2/T2 Plus server platforms. Sun marketed Sun Ops Center provisioning software for datacenter automation. On the client side, Sun offered virtual desktop solutions. Desktop environments and applications could be hosted in a datacenter, with users accessing these environments from a wide range of client devices, including Microsoft Windows PCs, Sun Ray virtual display clients, Apple Macintoshes, PDAs or any combination of supported devices. A variety of networks were supported, from LAN to WAN or the public Internet.
Virtual desktop products included Sun Ray Server Software, Sun Secure Global Desktop and Sun Virtual Desktop Infrastructure. Database management systems Sun acquired MySQL AB, the developer of the MySQL database, in 2008 for US$1 billion. CEO Jonathan Schwartz mentioned in his blog that optimizing the performance of MySQL was one of the priorities of the acquisition. In February 2008, Sun began to publish results of the MySQL performance-optimization work. Sun contributed to the PostgreSQL project. On the Java platform, Sun contributed to and supported Java DB. Other software Sun offered other software products for software development and infrastructure services. Many were developed in house; others came from acquisitions, including Tarantella, Waveset Technologies, SeeBeyond, and Vaau. Sun acquired many of the Netscape non-browser software products as part of a deal involving Netscape's merger with AOL. These software products were initially offered under the "iPlanet" brand; once the Sun-Netscape alliance ended, they were re-branded as "Sun ONE" (Sun Open Network Environment), and then the "Sun Java System". Sun's middleware product was branded as the Java Enterprise System (or JES), and marketed for web and application serving, communication, calendaring, directory, identity management and service-oriented architecture. Sun's Open ESB and other software suites were available free of charge on systems running Solaris, Red Hat Enterprise Linux, HP-UX, and Windows, with support available optionally. Sun developed data-center management software products, which included the Solaris Cluster high-availability software, a grid management package called Sun Grid Engine, and firewall software such as SunScreen. For network equipment providers and telecommunications customers, Sun developed the Sun Netra High-Availability Suite. Sun produced compilers and development tools under the Sun Studio brand, for building and developing Solaris and Linux applications. Sun entered the software-as-a-service (SaaS) market with zembly, a social cloud-based computing platform, and Project Kenai, an open-source project-hosting service. Storage Sun sold its own storage systems to complement its system offerings; it also made several storage-related acquisitions. On June 2, 2005, Sun announced it would purchase Storage Technology Corporation (StorageTek) for US$4.1 billion in cash, or $37.00 per share, a deal completed in August 2005. In 2006, Sun introduced the Sun StorageTek 5800 System, the first application-aware programmable storage solution. In 2008, Sun contributed the source code of the StorageTek 5800 System under the BSD license. Sun announced the Sun Open Storage platform in 2008, built with open source technologies. In late 2008 Sun announced the Sun Storage 7000 Unified Storage systems (codenamed Amber Road). Transparent placement of data on the systems' solid-state drives (SSDs) and conventional hard drives was managed by ZFS to take advantage of the speed of SSDs and the economy of conventional hard disks. Other storage products included the Sun Fire X4500 storage server and the SAM-QFS filesystem and storage management software. High-performance computing Sun marketed the Sun Constellation System for high-performance computing (HPC).
Even before the introduction of the Sun Constellation System in 2007, Sun's products were in use in many of the TOP500 systems and supercomputing centers: Lustre was used by seven of the top 10 supercomputers in 2008, as well as by other industries that need high-performance storage: six major oil companies (including BP, Shell, and ExxonMobil), chip design (including Synopsys and Sony), and the movie industry (including Harry Potter and Spider-Man). Sun Fire X4500 was used by high-energy physics supercomputers to run dCache. Sun Grid Engine was a popular workload scheduler for clusters and computer farms. Sun Visualization System allowed users of the TeraGrid to remotely access the 3D rendering capabilities of the Maverick system at the University of Texas at Austin. Sun Modular Datacenter (Project Blackbox) was two Sun MD S20 units used by the Stanford Linear Accelerator Center. The Sun HPC ClusterTools product was a set of Message Passing Interface (MPI) libraries and tools for running parallel jobs on Solaris HPC clusters. Beginning with version 7.0, Sun switched from its own implementation of MPI to Open MPI, and donated engineering resources to the Open MPI project. Sun was a participant in the OpenMP language committee. Sun Studio compilers and tools implemented the OpenMP specification for shared-memory parallelization. In 2006, Sun built the TSUBAME supercomputer, which was until June 2008 the fastest supercomputer in Asia. Sun built Ranger at the Texas Advanced Computing Center (TACC) in 2007. Ranger had a peak performance of over 500 TFLOPS and was the sixth-most-powerful supercomputer on the TOP500 list in November 2008. Sun announced an OpenSolaris distribution that integrated Sun's HPC products with others. Staff Notable Sun employees included John Gilmore, Whitfield Diffie, Radia Perlman, Ivan Sutherland, and Marc Tremblay. Sun was an early advocate of Unix-based networked computing, promoting TCP/IP and especially NFS, as reflected in the company's motto "The Network Is The Computer", coined by John Gage. James Gosling led the team which developed the Java programming language. Jon Bosak led the creation of the XML specification at W3C. In 2005, Sun Microsystems was one of the first Fortune 500 companies to institute a formal social media program. Sun staff published articles on the company's blog site. Staff were encouraged to use the site to blog on any aspect of their work or personal life, with few restrictions placed on staff other than commercially confidential material. Jonathan I. Schwartz was one of the first CEOs of large companies to blog regularly; his postings were frequently quoted and analyzed in the press. Acquisition by Oracle On September 3, 2009, the European Commission opened an in-depth investigation into the proposed takeover of Sun Microsystems by Oracle. On November 9, 2009, the Commission issued a statement of objections relating to the acquisition of Sun by Oracle. Finally, on January 21, 2010, the European Commission approved Oracle's acquisition of Sun. The Commission's investigation showed that another open database, PostgreSQL, was considered by many users of this type of software to be a credible alternative to MySQL and could to some extent replace the competitive strength that the latter represented in the database market. Sun was sold to Oracle Corporation in 2009 for $5.6 billion (net of Sun's cash and debt). Sun's staff were asked to share anecdotes about their experiences at Sun.
A website containing videos, stories, and photographs from 27 years at Sun was made available on September 2, 2009. In October, Sun announced a second round of layoffs of thousands of employees, blamed partially on delays in approval of the merger. The transaction was completed in early 2010. In January 2011, Oracle agreed to pay $46 million to settle charges that it had submitted false claims to US federal government agencies and paid "kickbacks" to systems integrators. In February 2011, Sun's former Menlo Park, California, campus was sold, and it was announced that it would become the headquarters of Facebook. The sprawling facility, built around an enclosed courtyard, had been nicknamed "Sun Quentin". On September 1, 2011, Sun India legally became part of Oracle. It had been delayed due to legal issues in Indian courts. See also Callan Data Systems Global Education Learning Community Hackathon Liberty Alliance List of computer system manufacturers Open Source University Meetup Sun Certified Professional References Further reading External links Post-merger web site (removed in February 2021). A weekly third-party summary of news about Sun and its products, published since 1998. 1982 establishments in California 2010 disestablishments in California American companies established in 1982 American companies disestablished in 2010 Cloud computing providers Companies based in Santa Clara, California Computer companies established in 1982 Computer companies disestablished in 2010 Defunct computer hardware companies Defunct companies based in the San Francisco Bay Area Defunct computer companies of the United States Free software companies Oracle acquisitions 2010 mergers and acquisitions Software companies based in the San Francisco Bay Area Software companies based in Tokyo Software companies established in 1982 Software companies disestablished in 2010 Software companies of the United States
Operating System (OS)
924
IBM System/360 Model 67 The IBM System/360 Model 67 (S/360-67) was an important IBM mainframe model in the late 1960s. Unlike the rest of the S/360 series, it included features to facilitate time-sharing applications, notably a Dynamic Address Translation unit, the "DAT box", to support virtual memory, 32-bit addressing, and the 2846 Channel Controller to allow sharing of channels between processors. The S/360-67 was otherwise compatible with the rest of the S/360 series. Origins The S/360-67 was intended to satisfy the needs of key time-sharing customers, notably MIT (where Project MAC had become a notorious IBM sales failure), the University of Michigan, General Motors, Bell Labs, Princeton University, the Carnegie Institute of Technology (later Carnegie Mellon University), and the Naval Postgraduate School. In the mid-1960s a number of organizations were interested in offering interactive computing services using time-sharing. At that time the work that computers could perform was limited by their lack of real memory storage capacity. When IBM introduced its System/360 family of computers in the mid-1960s, it did not provide a solution for this limitation, and within IBM there were conflicting views about the importance of time-sharing and the need to support it. A paper titled Program and Addressing Structure in a Time-Sharing Environment by Bruce Arden, Bernard Galler, Frank Westervelt (all associate directors at the University of Michigan's academic Computing Center), and Tom O'Brien, building upon some basic ideas developed at the Massachusetts Institute of Technology (MIT), was published in January 1966. The paper outlined a virtual memory architecture using dynamic address translation (DAT) that could be used to implement time-sharing. After a year of negotiations and design studies, IBM agreed to make a one-of-a-kind version of its S/360-65 mainframe computer for the University of Michigan. The S/360-65M would include dynamic address translation (DAT) features that would support virtual memory and allow support for time-sharing. Initially IBM decided not to supply a time-sharing operating system for the new machine. As other organizations heard about the project they were intrigued by the time-sharing idea and expressed interest in ordering the modified IBM S/360 series machines. With this demonstrated interest, IBM changed the computer's model number to S/360-67 and made it a supported product. When IBM realized there was a market for time-sharing, it agreed to develop a new time-sharing operating system, called IBM Time Sharing System (TSS/360), for delivery at roughly the same time as the first model S/360-67. The first S/360-67 was shipped in May 1966. The S/360-67 was withdrawn on March 15, 1977. Before the announcement of the Model 67, IBM had announced models 64 and 66, DAT versions of its 60 and 62 models, but they were almost immediately replaced by the 67, at the same time that the 60 and 62 were replaced by the 65. Announcement IBM announced the S/360-67 in its August 16, 1965 "blue letters" (a standard mechanism used by IBM to make product announcements).
IBM stated that: "Special bid restrictions have been removed from the System/360 Model 67" (i.e., it was now generally available) It included "multiprocessor configurations, with a high degree of system availability", with up to four processing units [while configurations with up to four processors were announced, only one- and two-processor configurations were actually built] It had "its own powerful operating system...[the] Time Sharing System monitor (TSS)" offering "virtually instantaneous access to and response from the computer" to "take advantage of the unique capabilities of a multiprocessor system" It offered "dynamic relocation of problem programs using the dynamic address translation facilities of the 2067 Processing Unit, permitting response, within seconds, to many simultaneous users" Virtual memory The S/360-67 design added a component for implementing virtual memory, the "DAT box" (Dynamic Address Translation box). DAT on the 360/67 was based on the architecture outlined in a 1966 JACM paper by Arden, Galler, Westervelt, and O'Brien, and included both segment and page tables. The Model 67's virtual memory support was very similar to the virtual memory support that eventually became standard on the entire System/370 line. The S/360-67 provided a 24- or 32-bit address space – unlike the strictly 24-bit address space of other S/360 and early S/370 systems, and the 31-bit address space of S/370-XA available on later S/370s. The S/360-67 virtual address space was divided into pages (of 4096 bytes) grouped into segments (of 1 million bytes); pages were dynamically mapped onto the processor's real memory. These S/360-67 features, plus reference and change bits as part of the storage key, enabled operating systems to implement demand paging: referencing a page that was not in memory caused a page fault, which in turn could be intercepted and processed by an operating system interrupt handler. The S/360-67's virtual memory system was capable of meeting three distinct goals: Large address space. It mapped physical memory onto a larger pool of virtual memory, which could be dynamically swapped in and out of real memory as needed from random-access storage (typically disk or drum storage). Isolated OS components. It made it possible to remove most of the operating system's memory footprint from the user's environment, thereby increasing the memory available for application use, and reducing the risk of applications intruding into or corrupting operating system data and programs. Multiple address spaces. By implementing multiple virtual address spaces, each for a different user, each user could potentially have a private virtual machine. The first goal removed (for decades, at least) a crushing limitation of earlier machines: running out of physical storage. The second enabled substantial improvements in security and reliability. The third enabled the implementation of true virtual machines. Contemporary documents make it clear that full hardware virtualization and virtual machines were not original design goals for the S/360-67.
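To make the address layout concrete, the following is a minimal illustrative sketch (not IBM code) that decomposes a 24-bit virtual address according to the geometry described above: 4096-byte pages give a 12-bit byte offset, 256 pages per segment give an 8-bit page index, and the remaining 4 bits select one of up to 16 segments.

    // Illustrative sketch: splitting a 24-bit S/360-67 virtual address
    // into segment index, page index, and byte offset.
    public class Dat67 {
        public static void main(String[] args) {
            int virtualAddress = 0x123456;                  // an arbitrary 24-bit address
            int offset  = virtualAddress & 0xFFF;           // low 12 bits: byte within page
            int page    = (virtualAddress >> 12) & 0xFF;    // next 8 bits: page within segment
            int segment = (virtualAddress >> 20) & 0xF;     // top 4 bits: segment index
            // The DAT hardware would index the segment table with `segment`,
            // index the selected page table with `page`, and append `offset`
            // to the real page address supplied by the page table entry.
            System.out.printf("segment %d, page %d, offset %d%n", segment, page, offset);
        }
    }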
Features The S/360-67 included the following extensions in addition to the standard and optional features available on all S/360 systems: Dynamic Address Translation (DAT) with support for 24- or 32-bit virtual addresses using segment and page tables (up to 16 segments, each containing up to 256 4,096-byte pages) Extended PSW Mode that enables additional interrupt masking and additional control registers High-Resolution Interval Timer with a resolution of approximately 13 microseconds Reference and change bits as part of the storage protection keys Extended Direct Control allowing the processors in a duplex configuration to present an external interrupt to the other processor Partitioning of the processors, processor storage, and I/O channels in a duplex configuration into two separate subsystems Floating Addressing to allow processor storage in a partitioned duplex configuration to be assigned consecutive real memory addresses An IBM 2846 Channel Controller that allows both processors in a duplex configuration to access all of the I/O channels and that allows I/O interrupts to be presented to either processor independent of which processor initiated the I/O operation Simplex configurations could include 7 I/O channels, while duplex configurations could include 14 I/O channels Three new supervisor-state instructions: Load Multiple Control (LMC), Store Multiple Control (SMC), and Load Real Address (LRA) Two new problem-state instructions: Branch and Store Register (BASR) and Branch and Store (BAS) Two new program interruptions: segment translation exception (16) and page translation exception (17) The S/360-67 operated with a basic internal cycle time of 200 nanoseconds and a basic 750-nanosecond magnetic core storage cycle, the same as the S/360-65. The 200 ns cycle time put the S/360-67 in the middle of the S/360 line, between the Model 30 at the low end and the Model 195 at the high end. From 1 to 8 bytes (8 data bits and 1 parity bit per byte) could be read or written to processor storage in a single cycle. A 60-bit parallel adder facilitated handling of long fractions in floating-point operations. An 8-bit serial adder enabled simultaneous execution of floating-point exponent arithmetic, and also handled decimal arithmetic and variable field length (VFL) instructions. New components Four new components were part of the S/360-67: the 2067 Processing Unit Models 1 and 2, the 2365 Processor Storage Model 12, the 2846 Channel Controller, and the 2167 Configuration Unit. These components, together with the 2365 Processor Storage Model 2, the 2860 Selector Channel, the 2870 Multiplexer Channel, and other System/360 control units and devices, were available for use with the S/360-67. Note that while Carnegie Tech had a 360/67 with an IBM 2361 LCS, that option was not listed in the price book and may not have worked in a duplex configuration. Basic configurations Three basic configurations were available for the IBM System/360 Model 67: Simplex—one IBM 2067-1 processor, two to four IBM 2365-2 Processor Storage components (512K to 1M bytes), up to seven data channels, and other peripherals. This system was called the IBM System/360 Model 67-1. Half-duplex—one IBM 2067-2 processor, two to four IBM 2365-12 Processor Storage components (512K to 1M bytes), one IBM 2167 Configuration Unit, one or two IBM 2846 Channel Controllers, up to fourteen data channels, and other peripherals.
New components
Four new components were part of the S/360-67: the 2067 Processing Unit Models 1 and 2, the 2365 Processor Storage Model 12, the 2846 Channel Controller, and the 2167 Configuration Unit. These components, together with the 2365 Processor Storage Model 2, 2860 Selector Channel, 2870 Multiplexer Channel, and other System/360 control units and devices, were available for use with the S/360-67. Note that while Carnegie Tech had a 360/67 with an IBM 2361 LCS, that option was not listed in the price book and may not have worked in a duplex configuration.
Basic configurations
Three basic configurations were available for the IBM System/360 Model 67:
Simplex—one IBM 2067-1 processor, two to four IBM 2365-2 Processor Storage components (512K to 1M bytes), up to seven data channels, and other peripherals. This system was called the IBM System/360 Model 67-1.
Half-duplex—one IBM 2067-2 processor, two to four IBM 2365-12 Processor Storage components (512K to 1M bytes), one IBM 2167 Configuration Unit, one or two IBM 2846 Channel Controllers, up to fourteen data channels, and other peripherals.
Duplex—two IBM 2067-2 processors, three to eight IBM 2365-12 Processor Storage components (768K to 2M bytes), one IBM 2167 Configuration Unit, one or two IBM 2846 Channel Controllers, up to fourteen data channels, and other peripherals.
A half-duplex system could be upgraded in the field to a duplex system by adding one IBM 2067-2 processor and a third IBM 2365-12 Processor Storage, unless the half-duplex system already had three or more. The half-duplex and duplex configurations were called the IBM System/360 Model 67-2.
Operating systems
When the S/360-67 was announced in August 1965, IBM also announced TSS/360, a time-sharing operating system project that was canceled in 1971 (having also been canceled in 1968, but reprieved in 1969). IBM subsequently modified TSS/360 and offered the TSS/370 PRPQ for three releases before cancelling it. IBM's failure to deliver TSS/360 as promised opened the door for others to develop operating systems that would use the unique features of the S/360-67:
MTS, the Michigan Terminal System, was a time-sharing operating system developed at the University of Michigan and first used on the Model 67 in January 1967. Virtual memory support was added to MTS in October 1967. Multi-processor support for a duplex S/360-67 was added in October 1968.
CP/CMS, the first virtual machine operating system, was developed at IBM's Cambridge Scientific Center (CSC) near MIT. CP/CMS was essentially an unsupported research system, built away from IBM's mainstream product organizations with the active involvement of outside researchers. Over time it evolved into a fully supported IBM operating system (VM/370 and today's z/VM).
VP/CSS, based upon CP/CMS, was developed by National CSS to provide commercial time-sharing services.
Legacy
The S/360-67 had an important legacy. After the failure of TSS/360, IBM was surprised by the blossoming of a time-sharing community on the S/360-67 platform (CP/CMS, MTS, MUSIC). A large number of commercial, academic, and service bureau sites installed the system. Despite only lukewarm support from IBM for time-sharing, and by sharing information and resources (including source code modifications), they built and supported a generation of time-sharing centers. The unique features of the S/360-67 were initially not carried into IBM's next product series, the System/370, although the 370/145 had an associative memory that appeared more useful for paging than for its ostensible purpose. This was largely fallout from a bitter and highly visible political battle within IBM over the merits of time-sharing versus batch processing. Initially, at least, time-sharing lost. However, IBM faced increasing customer demand for time-sharing and virtual memory capabilities. IBM also could not ignore the large number of S/360-67 time-sharing installations – including the new industry of time-sharing vendors, such as National CSS and Interactive Data Corporation (IDC), that were quickly achieving commercial success. In 1972, IBM added virtual memory features to the S/370 series, a move seen by many as a vindication of work done on the S/360-67 project. The survival and success of IBM's VM family, and of virtualization technology in general, also owe much to the S/360-67. In 2010, in the technical description of its latest mainframe, the z196, IBM stated that its software virtualization started with the System/360 Model 67.
References
E.W. Pugh, L.R. Johnson, and John H. Palmer, IBM's 360 and early 370 systems, MIT Press, Cambridge MA and London; includes extensive (819 pp.) treatment of IBM's offerings during this period
Melinda Varian, VM and the VM community: past, present, and future, SHARE 89 Sessions 9059-9061, 1997
External links
A. Padegs, "System/360 and Beyond", IBM Journal of Research & Development, vol. 25 no. 5, pp. 377–390, September 1981
IBM System/360 System Summary, thirteenth edition, January 1974, IBM publication GA22-6810-12, pages 6-13 to 6-15 describe the Model 67
IBM System/360 Model 67 Reference Data (Blue card)
Several photos of a dual-processor IBM 360/67 at the University of Michigan's academic Computing Center in the late 1960s or early 1970s are included in Dave Mills' article describing the Michigan Terminal System (MTS)
Pictures of an IBM S/360-67 at Newcastle (UK) University
TSS/360 Concepts and Facilities
Time-sharing in the IBM System/360 Model 67
System 360 Model 67 Computing platforms Time-sharing Computer-related introductions in 1968 VM (operating system)
Hobbit (computer)
Hobbit (Хоббит) is a Soviet/Russian 8-bit home computer, based on the Sinclair ZX Spectrum hardware architecture. It also featured a CP/M mode and a Forth or LOGO mode, with the Forth or LOGO operating environment residing in an on-board ROM chip.
Overview
Hobbit was invented by Dmitry Mikhailov (Russian: Дмитрий Михайлов) (all R&D) and Mikhail Osetinskii (management) (Михаил Осетинский) in Leningrad in the late 1980s. Its first circuit layout was designed on a home-made computer, itself built by Mikhailov in 1979 as an ASMP of three KR580 chips (the Soviet clones of the Intel 8080). The computer was manufactured by the joint venture InterCompex. Hobbit was marketed in the former Soviet Union as a low-cost personal computer for basic educational and office business needs, in addition to its obvious use as a home computer. Schools would buy it for a classroom, interconnecting several machines in a 56K baud network. It was possible to use either another Hobbit or a single IBM PC compatible computer as the master host on the network; in the latter case, the IBM PC needed a special Hobbit network adapter card made by InterCompex. Hobbit was also briefly marketed in the U.K., targeted mainly at existing ZX Spectrum fans wanting to get their hands on a better computer compatible with the familiar architecture. Though rarely available in the domestic market, export models featured an internal 3.5" drive, just like an Atari ST or an Amiga. Such models always had both the EGA and the TV output connectors operational, as well as the AY8910 sound chip. Domestic models often lacked the TV output converter, the internal speaker, or both. The AY8910 for the domestic models was sold separately as an external extension module, hanging off the same extension bus as the optional external disk drive. Another extension was the SME board (Screen and Memory Extension). This featured 32 KB of cache memory, some of which could be dedicated to a video text buffer in CGA mode (supported by drivers available only in the FORTH or CP/M environments; no known programs using the Sinclair-based BASIC mode used this feature). The SME worked at astonishing speed: a single machine instruction could output an entire display line, and the board was capable of rendering several dozen windows per second. SME capabilities were fully utilized only in the Forth environment.
Technical details
Z80A at 3.5 MHz
64K RAM
Disk drives: external 2 x 5.25" drives (up to 4 connectable) or internal 3.5" drive
Connections: joystick (2 x Sinclair, 1 x Kempston), Centronics, RS232, audio in/out (for cassette recorder), system bus extension
74-key keyboard (33 keys freely programmable)
Video output: composite video TV out, EGA monitor
Operating system: built-in disassembler, CP/M clone called "Beta", system language switchable between English and Russian
References
External links
General: Hobbit computer nostalgia page
Press links: The following two articles were published about Hobbit in the Your Sinclair magazine back in the early 1990s: Rage Hard! Sep/1990 and Rage Hard! Jan/1991. Hobbit was also briefly mentioned in an article in ISSUE 98 APRIL 1992 of CRASH (magazine). Then, later in 1992, Sinclair User featured the latest Hobbit tech, elaborating also on the FORTH and CP/M modes that the computer had.
These articles probably give the best press coverage of Hobbit ever: The Hobbit (Aug/1992) and The Hobbit—tested (Sep/1992).
Computer-related introductions in 1990 ZX Spectrum clones Soviet Union–United Kingdom relations Z80-based home computers Soviet computer systems
System on a chip
A system on a chip (SoC) is an integrated circuit (also known as a "chip") that integrates all or most components of a computer or other electronic system. These components almost always include a central processing unit (CPU), memory, input/output ports and secondary storage, often alongside other components such as radio modems and a graphics processing unit (GPU) – all on a single substrate or microchip. It may contain digital, analog, mixed-signal, and often radio frequency signal processing functions (otherwise it is considered only an application processor). Higher-performance SoCs are often paired with dedicated and physically separate memory and secondary storage chips (almost always LPDDR and eUFS or eMMC, respectively) that may be layered on top of the SoC in what's known as a package on package (PoP) configuration, or be placed close to the SoC. Additionally, SoCs may use separate wireless modems. SoCs are in contrast to the common traditional motherboard-based PC architecture, which separates components based on function and connects them through a central interfacing circuit board. Whereas a motherboard houses and connects detachable or replaceable components, SoCs integrate all of these components into a single integrated circuit. An SoC will typically integrate a CPU, graphics and memory interfaces, hard-disk and USB connectivity, random-access and read-only memories and secondary storage and/or their controllers on a single circuit die, whereas a motherboard would connect these modules as discrete components or expansion cards. An SoC integrates a microcontroller, microprocessor or perhaps several processor cores with peripherals like a GPU, Wi-Fi and cellular network radio modems, and/or one or more coprocessors. Similar to how a microcontroller integrates a microprocessor with peripheral circuits and memory, an SoC can be seen as integrating a microcontroller with even more advanced peripherals. More tightly integrated computer system designs improve performance and reduce power consumption and semiconductor die area compared with multi-chip designs of equivalent functionality. This comes at the cost of reduced replaceability of components. By definition, SoC designs are fully or nearly fully integrated across different component modules. For these reasons, there has been a general trend towards tighter integration of components in the computer hardware industry, in part due to the influence of SoCs and lessons learned from the mobile and embedded computing markets. SoCs can be viewed as part of a larger trend towards embedded computing and hardware acceleration. SoCs are very common in the mobile computing (such as in smartphones and tablet computers) and edge computing markets. They are also commonly used in embedded systems such as WiFi routers and the Internet of Things.
Types
In general, there are three distinguishable types of SoCs:
SoCs built around a microcontroller
SoCs built around a microprocessor, often found in mobile phones
Specialized application-specific integrated circuit SoCs designed for specific applications that do not fit into the above two categories
Applications
SoCs can be applied to any computing task. However, they are typically used in mobile computing such as tablets, smartphones, smartwatches and netbooks, as well as in embedded systems and in applications where previously microcontrollers would be used.
Embedded systems
Where previously only microcontrollers could be used, SoCs are rising to prominence in the embedded systems market. Tighter system integration offers better reliability and mean time between failures, and SoCs offer more advanced functionality and computing power than microcontrollers. Applications include AI acceleration, embedded machine vision, data collection, telemetry, vector processing and ambient intelligence. Often embedded SoCs target the internet of things, industrial internet of things and edge computing markets.
Mobile computing
Mobile computing based SoCs always bundle processors, memories, on-chip caches, wireless networking capabilities and often digital camera hardware and firmware. With increasing memory sizes, high-end SoCs will often have no built-in memory or flash storage; instead, the memory and flash memory are placed right next to, or above (package on package), the SoC. Some examples of mobile computing SoCs include:
Samsung Electronics: Exynos, typically based on ARM, used mainly by Samsung's Galaxy series of smartphones
Qualcomm: Snapdragon, used in many LG, Xiaomi, Google Pixel, HTC and Samsung Galaxy smartphones. As of 2018, Snapdragon SoCs are also used as the backbone of laptop computers running Windows 10, marketed as "Always Connected PCs".
Personal computers
In 1992, Acorn Computers produced the A3010, A3020 and A4000 range of personal computers with the ARM250 SoC. It combined the original Acorn ARM2 processor with a memory controller (MEMC), video controller (VIDC), and I/O controller (IOC). In previous Acorn ARM-powered computers, these were four discrete chips. The ARM7500 chip was their second-generation SoC, based on the ARM700, VIDC20 and IOMD controllers, and was widely licensed in embedded devices such as set-top boxes, as well as later Acorn personal computers. SoCs are being applied to mainstream personal computers as of 2018. They are particularly applied to laptops and tablet PCs. Tablet and laptop manufacturers have learned lessons from the embedded systems and smartphone markets about reduced power consumption, better performance and reliability from tighter integration of hardware and firmware modules, and LTE and other wireless network communications integrated on chip (integrated network interface controllers).
ARM-based: Qualcomm Snapdragon, ARM250, ARM7500(FE), Apple M1
x86-based: Intel Core CULV
Structure
An SoC consists of hardware functional units, including microprocessors that run software code, as well as a communications subsystem to connect, control, direct and interface between these functional modules.
Functional components
Processor cores
An SoC must have at least one processor core, but typically an SoC has more than one core. Processor cores can be a microcontroller, microprocessor (μP), digital signal processor (DSP) or application-specific instruction set processor (ASIP) core. ASIPs have instruction sets that are customized for an application domain and designed to be more efficient than general-purpose instructions for a specific type of workload. Multiprocessor SoCs have more than one processor core by definition. Whether single-core, multi-core or manycore, SoC processor cores typically use RISC instruction set architectures. RISC architectures are advantageous over CISC processors for SoCs because they require less digital logic, and therefore less power and area on board, and in the embedded and mobile computing markets, area and power are often highly constrained.
In particular, SoC processor cores often use the ARM architecture because it is a soft processor specified as an IP core and is more power efficient than x86.
Memory
SoCs must have semiconductor memory blocks to perform their computation, as do microcontrollers and other embedded systems. Depending on the application, SoC memory may form a memory hierarchy and cache hierarchy. In the mobile computing market, this is common, but in many low-power embedded microcontrollers, this is not necessary. Memory technologies for SoCs include read-only memory (ROM), random-access memory (RAM), Electrically Erasable Programmable ROM (EEPROM) and flash memory. As in other computer systems, RAM can be subdivided into relatively faster but more expensive static RAM (SRAM) and the slower but cheaper dynamic RAM (DRAM). When an SoC has a cache hierarchy, SRAM will usually be used to implement processor registers and cores' L1 caches whereas DRAM will be used for lower levels of the cache hierarchy including main memory. "Main memory" may be specific to a single processor (which can be multi-core) when the SoC has multiple processors, in which case it is distributed memory and must be sent via the on-chip interconnect to be accessed by a different processor. For further discussion of multi-processing memory issues, see cache coherence and memory latency.
Interfaces
SoCs include external interfaces, typically for communication protocols. These are often based upon industry standards such as USB, FireWire, Ethernet, USART, SPI, HDMI, I²C, etc. These interfaces will differ according to the intended application. Wireless networking protocols such as Wi-Fi, Bluetooth, 6LoWPAN and near-field communication may also be supported. When needed, SoCs include analog interfaces, including analog-to-digital and digital-to-analog converters, often for signal processing. These may be able to interface with different types of sensors or actuators, including smart transducers. They may interface with application-specific modules or shields. Or they may be internal to the SoC, such as when an analog sensor is built into the SoC and its readings must be converted to digital signals for mathematical processing.
Digital signal processors
Digital signal processor (DSP) cores are often included on SoCs. They perform signal processing operations in SoCs for sensors, actuators, data collection, data analysis and multimedia processing. DSP cores typically feature very long instruction word (VLIW) and single instruction, multiple data (SIMD) instruction set architectures, and are therefore highly amenable to exploiting instruction-level parallelism through parallel processing and superscalar execution. DSP cores most often feature application-specific instructions, and as such are typically application-specific instruction-set processors (ASIP). Such application-specific instructions correspond to dedicated hardware functional units that compute those instructions. Typical DSP instructions include multiply-accumulate, Fast Fourier transform, fused multiply-add, and convolutions.
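To see why multiply-accumulate is the signature DSP instruction, consider a finite impulse response (FIR) filter, a typical convolution workload of the kind just listed: each output sample is a sum of products, i.e. a chain of multiply-accumulate steps. The sketch below spells this out in plain Python purely as an illustration; a DSP core would execute each accumulator update as a single MAC instruction, often several at once via SIMD.

    def fir_filter(samples: list[float], taps: list[float]) -> list[float]:
        """Convolve an input stream with filter taps via multiply-accumulate steps."""
        out = []
        for n in range(len(samples)):
            acc = 0.0                              # the accumulator register
            for k, tap in enumerate(taps):
                if n - k >= 0:
                    acc += tap * samples[n - k]    # one multiply-accumulate (MAC)
            out.append(acc)
        return out

    # 3-tap moving average as a minimal example:
    print(fir_filter([1.0, 2.0, 3.0, 4.0], [1/3, 1/3, 1/3]))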
Other
As with other computer systems, SoCs require timing sources to generate clock signals, control execution of SoC functions and provide time context to signal processing applications of the SoC, if needed. Popular time sources are crystal oscillators and phase-locked loops. SoC peripherals include counter-timers, real-time timers and power-on reset generators. SoCs also include voltage regulators and power management circuits.
Intermodule communication
SoCs comprise many execution units. These units must often send data and instructions back and forth. Because of this, all but the most trivial SoCs require communications subsystems. Originally, as with other microcomputer technologies, data bus architectures were used, but recently designs based on sparse intercommunication networks known as networks-on-chip (NoC) have risen to prominence and are forecast to overtake bus architectures for SoC design in the near future.
Bus-based communication
Historically, a shared global computer bus typically connected the different components, also called "blocks", of the SoC. A very common bus for SoC communications is ARM's royalty-free Advanced Microcontroller Bus Architecture (AMBA) standard. Direct memory access controllers route data directly between external interfaces and SoC memory, bypassing the CPU or control unit, thereby increasing the data throughput of the SoC. This is similar to some device drivers of peripherals on component-based multi-chip module PC architectures. Computer buses are limited in scalability, supporting only up to tens of cores (multicore) on a single chip. Wire delay is not scalable due to continued miniaturization, system performance does not scale with the number of cores attached, the SoC's operating frequency must decrease with each additional core attached for power to be sustainable, and long wires consume large amounts of electrical power. These challenges are prohibitive to supporting manycore systems on chip.
Network on a chip
In the late 2010s, a trend emerged of SoCs implementing communications subsystems as a network-like topology instead of with bus-based protocols. A trend towards more processor cores on SoCs has caused on-chip communication efficiency to become one of the key factors in determining the overall system performance and cost. This has led to the emergence of interconnection networks with router-based packet switching known as "networks on chip" (NoCs) to overcome the bottlenecks of bus-based networks. Networks-on-chip have advantages including destination- and application-specific routing, greater power efficiency and reduced possibility of bus contention. Network-on-chip architectures take inspiration from communication protocols like TCP and the Internet protocol suite for on-chip communication, although they typically have fewer network layers. Optimal network-on-chip architectures are an ongoing area of much research interest. NoC architectures range from traditional distributed computing network topologies such as torus, hypercube, meshes and tree networks to genetic algorithm scheduling to randomized algorithms such as random walks with branching and randomized time to live (TTL). Many SoC researchers consider NoC architectures to be the future of SoC design because they have been shown to efficiently meet power and throughput needs of SoC designs. Current NoC architectures are two-dimensional. 2D IC design has limited floorplanning choices as the number of cores in SoCs increases, so as three-dimensional integrated circuits (3DICs) emerge, SoC designers are looking towards building three-dimensional on-chip networks known as 3DNoCs.
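As a toy illustration of deterministic routing on the mesh topologies mentioned above, the sketch below implements dimension-ordered ("XY") routing, a classic packet-routing policy for 2D mesh NoCs: route along the X dimension first, then along Y. This is a didactic model of the idea only; the text does not name a specific algorithm, and real NoC routers are hardware, not Python.

    def xy_route(src: tuple[int, int], dst: tuple[int, int]) -> list[tuple[int, int]]:
        """Dimension-ordered routing on a 2D mesh: X first, then Y.
        Deterministic and deadlock-free on a mesh, but with no path diversity."""
        x, y = src
        hops = [src]
        while x != dst[0]:                  # correct the X coordinate first
            x += 1 if dst[0] > x else -1
            hops.append((x, y))
        while y != dst[1]:                  # then correct the Y coordinate
            y += 1 if dst[1] > y else -1
            hops.append((x, y))
        return hops

    print(xy_route((0, 0), (2, 1)))   # -> [(0, 0), (1, 0), (2, 0), (2, 1)]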
Design flow
A system on a chip consists of both the hardware, described in the sections above, and the software controlling the microcontroller, microprocessor or digital signal processor cores, peripherals and interfaces. The design flow for an SoC aims to develop this hardware and software at the same time, also known as architectural co-design. The design flow must also take into account optimizations and constraints. Most SoCs are developed from pre-qualified hardware component IP core specifications for the hardware elements and execution units, collectively "blocks", described above, together with software device drivers that may control their operation. Of particular importance are the protocol stacks that drive industry-standard interfaces like USB. The hardware blocks are put together using computer-aided design tools, specifically electronic design automation tools; the software modules are integrated using a software integrated development environment. SoC components are also often designed in high-level programming languages such as C++, MATLAB or SystemC and converted to RTL designs through high-level synthesis (HLS) tools such as C to HDL or flow to HDL. HLS products called "algorithmic synthesis" allow designers to use C++ to model and synthesize system, circuit, software and verification levels all in one high-level language commonly known to computer engineers, in a manner independent of time scales, which are typically specified in HDL. Other components can remain software and be compiled and embedded onto soft-core processors included in the SoC as modules in HDL as IP cores. Once the architecture of the SoC has been defined, any new hardware elements are written in an abstract hardware description language termed register transfer level (RTL) which defines the circuit behavior, or synthesized into RTL from a high-level language through high-level synthesis. These elements are connected together in a hardware description language to create the full SoC design. The logic specified to connect these components and convert between possibly different interfaces provided by different vendors is called glue logic.
Design verification
Chips are verified for correctness before being sent to a semiconductor foundry. This process is called functional verification and it accounts for a significant portion of the time and energy expended in the chip design life cycle, often quoted as 70%. With the growing complexity of chips, hardware verification languages like SystemVerilog, SystemC, e, and OpenVera are being used. Bugs found in the verification stage are reported to the designer. Traditionally, engineers have employed simulation acceleration, emulation or prototyping on reprogrammable hardware to verify and debug hardware and software for SoC designs prior to the finalization of the design, known as tape-out. Field-programmable gate arrays (FPGAs) are favored for prototyping SoCs because FPGA prototypes are reprogrammable, allow debugging and are more flexible than application-specific integrated circuits (ASICs). With high capacity and fast compilation time, simulation acceleration and emulation are powerful technologies that provide wide visibility into systems. Both technologies, however, operate slowly, on the order of MHz, which may be significantly slower – up to 100 times slower – than the SoC's operating frequency. Acceleration and emulation boxes are also very large and expensive, at over US$1 million. FPGA prototypes, in contrast, use FPGAs directly to enable engineers to validate and test at, or close to, a system's full operating frequency with real-world stimuli. Tools such as Certus are used to insert probes in the FPGA RTL that make signals available for observation.
This is used to debug hardware, firmware and software interactions across multiple FPGAs with capabilities similar to a logic analyzer. In parallel, the hardware elements are grouped and passed through a process of logic synthesis, during which performance constraints, such as operational frequency and expected signal delays, are applied. This generates an output known as a netlist describing the design as a physical circuit and its interconnections. These netlists are combined with the glue logic connecting the components to produce the schematic description of the SoC as a circuit which can be printed onto a chip. This process is known as place and route and precedes tape-out in the event that the SoCs are produced as application-specific integrated circuits (ASIC).
Optimization goals
SoCs must optimize power use, area on die, communication, positioning for locality between modular units and other factors. Optimization is necessarily a design goal of SoCs. If optimization were not necessary, the engineers would use a multi-chip module architecture without accounting for the area utilization, power consumption or performance of the system to the same extent. Common optimization targets for SoC designs follow, with explanations of each. In general, optimizing any of these quantities may be a hard combinatorial optimization problem, and can indeed be NP-hard fairly easily. Therefore, sophisticated optimization algorithms are often required and it may be practical to use approximation algorithms or heuristics in some cases. Additionally, most SoC designs contain multiple variables to optimize simultaneously, so Pareto efficient solutions are sought after in SoC design. Oftentimes the goals of optimizing some of these quantities are directly at odds, further adding complexity to design optimization of SoCs and introducing trade-offs in system design. For broader coverage of trade-offs and requirements analysis, see requirements engineering.
Targets
Power consumption
SoCs are optimized to minimize the electrical power used to perform the SoC's functions. Most SoCs must use low power: SoC systems often require long battery life (as in smartphones), may need to spend months or years without a power source while maintaining autonomous function, and are often limited in power use because a high number of embedded SoCs are networked together in an area. Additionally, energy costs can be high and conserving energy will reduce the total cost of ownership of the SoC. Finally, waste heat from high energy consumption can damage other circuit components if too much heat is dissipated, giving another pragmatic reason to conserve energy. The amount of energy used in a circuit is the integral of power consumed with respect to time, and the average rate of power consumption is the product of current and voltage. Equivalently, by Ohm's law, power is current squared times resistance or voltage squared divided by resistance: P = I × V = I² × R = V² / R. SoCs are frequently embedded in portable devices such as smartphones, GPS navigation devices, digital watches (including smartwatches) and netbooks. Customers want long battery lives for mobile computing devices, another reason that power consumption must be minimized in SoCs. Multimedia applications are often executed on these devices, including video games, video streaming and image processing, all of which have grown in computational complexity in recent years with user demands and expectations for higher-quality multimedia. Computation is more demanding as expectations move towards 3D video at high resolution with multiple standards, so SoCs performing multimedia tasks must be computationally capable platforms while using little power, in order to run off a standard mobile battery.
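To make the power and energy relations above concrete, here is a small numeric sketch; the 2 V and 8 Ω figures are invented purely for illustration.

    # Average power from Ohm's law: P = I*V = I**2 * R = V**2 / R
    V = 2.0    # volts (illustrative value)
    R = 8.0    # ohms (illustrative value)
    P = V**2 / R           # -> 0.5 W; equivalently I = V/R = 0.25 A and P = I*V
    # Energy is the integral of power over time; for constant power, E = P * t
    t = 3600.0             # one hour, in seconds
    E = P * t              # -> 1800 J, i.e. 0.5 Wh
    print(P, E)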
Performance per watt
SoCs are optimized to maximize power efficiency in performance per watt: maximize the performance of the SoC given a budget of power usage. Many applications such as edge computing, distributed processing and ambient intelligence require a certain level of computational performance, but power is limited in most SoC environments. The ARM architecture has greater performance per watt than x86 in embedded systems, so it is preferred over x86 for most SoC applications requiring an embedded processor.
Waste heat
SoC designs are optimized to minimize waste heat output on the chip. As with other integrated circuits, heat generated due to high power density is the bottleneck to further miniaturization of components. The power densities of high-speed integrated circuits, particularly microprocessors and including SoCs, have become highly uneven. Too much waste heat can damage circuits and erode reliability of the circuit over time. High temperatures and thermal stress negatively impact reliability: they accelerate stress migration and electromigration, decrease mean time between failures, and degrade wire bonding, metastability and other aspects of the SoC's performance over time. In particular, most SoCs are in a small physical area or volume, and therefore the effects of waste heat are compounded because there is little room for it to diffuse out of the system. Because of high transistor counts on modern devices, oftentimes a layout of sufficient throughput and high transistor density is physically realizable from fabrication processes but would result in unacceptably high amounts of heat in the circuit's volume. These thermal effects force SoC and other chip designers to apply conservative design margins, creating less performant devices to mitigate the risk of catastrophic failure. Due to increased transistor densities as length scales get smaller, each process generation produces more heat output than the last. Compounding this problem, SoC architectures are usually heterogeneous, creating spatially inhomogeneous heat fluxes, which cannot be effectively mitigated by uniform passive cooling.
Throughput
SoCs are optimized to maximize computational and communications throughput.
Latency
SoCs are optimized to minimize latency for some or all of their functions. This can be accomplished by laying out elements with proper proximity and locality to each other to minimize the interconnection delays and maximize the speed at which data is communicated between modules, functional units and memories. In general, optimizing to minimize latency is an NP-complete problem equivalent to the Boolean satisfiability problem. For tasks running on processor cores, latency and throughput can be improved with task scheduling. Some tasks run in application-specific hardware units, however, and even task scheduling may not be sufficient to optimize all software-based tasks to meet timing and throughput constraints.
Methodologies
Systems on chip are modeled with standard hardware verification and validation techniques, but additional techniques are used to model and optimize SoC design alternatives to make the system optimal with respect to multiple-criteria decision analysis on the above optimization targets.
Task scheduling
Task scheduling is an important activity in any computer system with multiple processes or threads sharing a single processor core. It is important to reduce latency and increase throughput for embedded software running on an SoC's processor cores. Not every important computing activity in an SoC is performed in software running on on-chip processors, but scheduling can drastically improve performance of software-based tasks and other tasks involving shared resources. SoCs often schedule tasks according to network scheduling and randomized scheduling algorithms.
Pipelining
Hardware and software tasks are often pipelined in processor design. Pipelining is an important principle for speedup in computer architecture. Pipelines are frequently used in GPUs (graphics pipeline) and RISC processors (evolutions of the classic RISC pipeline), but are also applied to application-specific tasks such as digital signal processing and multimedia manipulations in the context of SoCs.
Probabilistic modeling
SoCs are often analyzed through probabilistic models such as queueing networks and Markov chains. For instance, Little's law allows SoC states and NoC buffers to be modeled as arrival processes and analyzed through Poisson random variables and Poisson processes.
Markov chains
SoCs are often modeled with Markov chains, both discrete time and continuous time variants. Markov chain modeling allows asymptotic analysis of the SoC's steady state distribution of power, heat, latency and other factors, so that design decisions can be optimized for the common case.
Fabrication
SoC chips are typically fabricated using metal–oxide–semiconductor (MOS) technology. The netlists described above are used as the basis for the physical design (place and route) flow, which converts the designers' intent into the design of the SoC. Throughout this conversion process, the design is analyzed with static timing modeling, simulation and other tools to ensure that it meets the specified operational parameters such as frequency, power consumption and dissipation, functional integrity (as described in the register transfer level code) and electrical integrity. When all known bugs have been rectified, the fixes have been re-verified, and all physical design checks are done, the physical design files describing each layer of the chip are sent to the foundry's mask shop, where a full set of glass lithographic masks will be etched. These are sent to a wafer fabrication plant to create the SoC dice before packaging and testing. SoCs can be fabricated by several technologies, including:
Full custom ASIC
Standard cell ASIC
Field-programmable gate array (FPGA)
ASICs consume less power and are faster than FPGAs but cannot be reprogrammed and are expensive to manufacture. FPGA designs are more suitable for lower-volume designs, but after enough units of production, ASICs reduce the total cost of ownership. SoC designs consume less power and have a lower cost and higher reliability than the multi-chip systems that they replace. With fewer packages in the system, assembly costs are reduced as well. However, like most very-large-scale integration (VLSI) designs, the total cost is higher for one large chip than for the same functionality distributed over several smaller chips, because of lower yields and higher non-recurring engineering costs. When it is not feasible to construct an SoC for a particular application, an alternative is a system in package (SiP) comprising a number of chips in a single package.
When produced in large volumes, SoC is more cost-effective than SiP because its packaging is simpler. Another reason SiP may be preferred is that waste heat may be too high in an SoC for a given purpose because its functional components are too close together; in an SiP, heat dissipates better from the different functional modules since they are physically further apart.
Benchmarks
SoC research and development often compares many options. Benchmarks, such as COSMIC, are developed to help such evaluations.
See also
List of system-on-a-chip suppliers
Post-silicon validation
ARM architecture
Single-board computer
System in package
Network on a chip
Programmable SoC
Application-specific instruction set processor (ASIP)
Platform-based design
Lab on a chip
Organ on a chip in biomedical technology
Multi-chip module
Notes
References
Further reading
External links
SOCC Annual IEEE International SoC Conference
Baya free SoC platform assembly and IP integration tool
Systems on Chip for Embedded Applications, Auburn University seminar in VLSI
Instant SoC SoC for FPGAs defined by C++
Computer engineering Electronic design Microtechnology Hardware acceleration Computer systems Application-specific integrated circuits
Linux kernel oops
In computing, an oops is a deviation from correct behavior of the Linux kernel, one that produces a certain error log. The better-known kernel panic condition results from many kinds of oops, but other instances of an oops event may allow continued operation with compromised reliability. The term does not stand for anything, other than that it is a simple mistake.
Functioning
When the kernel detects a problem, it kills any offending processes and prints an oops message, which Linux kernel engineers can use in debugging the condition that created the oops and fixing the underlying programming error. After a system has experienced an oops, some internal resources may no longer be operational. Thus, even if the system appears to work correctly, undesirable side effects may have resulted from the active task being killed. A kernel oops often leads to a kernel panic when the system attempts to use resources that have been lost. The official Linux kernel documentation regarding oops messages resides in the Documentation directory of the kernel sources. Some logger configurations may affect the ability to collect oops messages. The kerneloops software can collect and submit kernel oopses to a repository such as the www.kerneloops.org website, which provides statistics and public access to reported oopses. For a person not familiar with technical details of computers and operating systems, an oops message might look confusing. Unlike other operating systems such as Windows or macOS, Linux chooses to present details explaining the crash of the kernel rather than display a simplified, user-friendly message, such as the BSoD on Windows. A simplified crash screen has been proposed a few times; however, none is currently in development.
See also
kdump (Linux), the Linux kernel's crash dump mechanism, which internally uses kexec
System.map, which contains mappings between symbol names and their addresses in memory, used to interpret oopses
References
Further reading
Linux Device Drivers, 3rd edition, Chapter 4
Kernel Oops Howto (the madwifi project): useful information on configuration files and tools to help display oops messages, plus many other links
Computer errors Linux kernel Screens of death
Think Blue Linux
Think Blue Linux (sometimes ThinkBlue Linux) was a port of Linux to IBM S/390 (later, zSeries) mainframe computers, done by the Millenux subsidiary of the German company Thinking Objects Software GmbH. The distribution consisted primarily of a collection of Red Hat Linux 6.1 packages (or RPMs) on top of IBM's port of the Linux kernel. Distribution of the product was scheduled to cease in early 2006, as most of its packages were out of date and most modern Linux distributions by then supported IBM mainframes, making a special dedicated distribution unnecessary.
References
LinuxToday article
Heise article (German)
External links
linux.s390.org aka linux.zseries.org is the official ThinkBlue/64 website
Thinking Objects Software GmbH (German)
Millenux (subsidiary of Thinking Objects)
Linux for zSeries product page (German)
Discontinued Linux distributions Linux distributions
AIDA64
AIDA64 is a system information, diagnostics, and auditing application developed by FinalWire Ltd (a Hungarian company) that runs on the Windows, Android, iOS, Windows Phone, Tizen, Chrome OS and Sailfish OS operating systems. It displays detailed information on the components of a computer. Information can be saved to file in formats such as HTML, CSV, or XML.
History
ASMDEMO
AIDA started in 1995 with the freeware ASMDEMO, a 16-bit DOS hardware analyzer with basic capabilities. The first public release was ASMDEMO v870, provided with CPU and disk benchmarks.
AIDA
In 2000, AIDA 1.0 was released, provided with a hardware database of 12,000 entries and support for 32-bit MMX and SSE benchmarks. It was written by Tamás Miklós.
AIDA32
In 2001, AIDA32 1.0 was released, a 32-bit Windows system diagnostic tool with basic capabilities. In 2002, AIDA32 2.0 was released, adding XML reports and network audit with SQL database support. In 2003, AIDA32 3.61 came out, provided with a hardware database of 25,000 entries and monitor diagnostics, and localized to 23 languages. The last version, 3.94, was released in March 2004. AIDA32 was distributed as freeware, and as a portable executable file which does not need to be installed on the host computer. Development of AIDA32 was stopped on March 24, 2004.
Everest
In April 2004, Miklós was appointed Executive Vice President of Software Engineering Research & Development at Lavalys. The successor of AIDA32 was the commercial Lavalys product Everest. Lavalys distributed a freeware version of Everest up until version 2.20, which is still available via freeware distribution websites. The freeware Everest Home Edition was discontinued on December 1, 2005 in favour of a full commercial, paid version. The final version of Everest is 5.50, released in April 2010. Everest is now retired in favour of AIDA64.
Corsair Memory Dashboard
In 2004, the Corsair Xpert memory module was released, together with a freeware memory utility based on AIDA32 and developed exclusively for Corsair Memory, Inc.
AIDA64
The Everest line was discontinued after the acquisition of Lavalys by FinalWire, a privately held company located in Budapest, Hungary, formed from members who had worked on AIDA since its beginning. Tamás Miklós became the managing director of FinalWire. The principal product of FinalWire is AIDA64, taking the place of Everest. It is no longer freeware but commercial software, limited to a 30-day trial before the user has to choose and pay for a version. AIDA64 moved forward by adding a collection of 64-bit processor and memory benchmarks, an optimized ZLib data compression benchmark, an enhanced set of fractal computational floating-point benchmarks, a new CPU benchmark method to determine cryptographic hash value calculation performance, support for SSDs (SSD-specific SMART entries, etc.), and a hardware database extended to 115,000 elements. All AIDA64 benchmarks are ported to 64-bit and utilize MMX, 3DNow! and SSE instructions to exercise the full potential of modern multi-core Intel and AMD processors. Customers who bought Everest were eligible for a free upgrade to AIDA64 until October 20, 2010. AIDA64 is available in four editions: AIDA64 Extreme, targeted at home/personal use; and AIDA64 Engineer, AIDA64 Business and AIDA64 Network Audit, all targeted at professionals. On October 6, 2010, version 1.0 was released. Version 5.20.3400, released in March 2015, had a hardware database holding over 176,000 entries.
On March 5, 2015, FinalWire introduced an Android version of AIDA64. Its main features display information about the SoC, CPU, screen, battery, temperature, Wi-Fi and cellular network, Android properties and GPU, along with listings of other devices (PCI, sensors, etc.) and of installed apps, codecs and system directories. A version for the Android Wear platform is also available. On May 8, 2015, AIDA64 for iOS and Windows Phone were released to celebrate the 20th anniversary of the software (which began in 1995 with ASMDEMO). The Windows Phone application was planned to be converted to a universal app in the second half of 2015. On July 28, 2015, AIDA64 added support for new technologies (a new generation of Intel processors, AMD and Nvidia GPGPU, new SSDs, and the Windows 10 and Windows Server 2016 operating systems). A version of the mobile application was released for Tizen, completing the portfolio of supported operating systems. On June 20, 2016, the AIDA64 Facebook page showed a version of AIDA64 for Android running on an Asus Chromebook under Chrome OS. On June 24, 2016, the same page announced that AIDA64 was available for Sailfish OS.
See also
SiSoftware
References
External links
Download latest AIDA64 Extreme at aida64.com
AIDA32 v3.94.2 Download at Download.com
AIDA32 v3.93 Download at MajorGeeks.com
A cheap PC auditing tool—review and configuration tips
Timeline and history of development of AIDA software
Utilities for Windows
DbDOS
dbDOS is software developed by dBase for Windows computers with Intel processors. dbDOS allows Intel-based PCs to run DOS applications, such as dBASE III, dBASE IV (Versions 1, 2, 3), and dBASE V for DOS, in an emulated DOS environment. It is an environment configured specifically to allow the various versions of dBASE for DOS to run without any changes to the dBASE executables or the compiled dBASE programs created with them. The company dBase offers licenses for dBase CLASSIC (DOS).
Overview
In the late 1980s to early 1990s, dBase for DOS was one of the more popular database tools on the market. When Borland decided to stop development of the DOS version in favor of the Windows version, many companies were left providing support for themselves. Many decided to keep their applications running on the DOS platform for many years following Borland's switch to Windows. This strategy worked well enough until Microsoft decided to make some changes in the way the underlying operating system worked, and as more and more PCs started supporting 64-bit operating systems, the ability to keep dBASE for DOS running on maintainable hardware became difficult. In 2012, the newly formed dBase LLC opted to support this install base of dBASE applications by releasing a virtualization tool named dbDOS. Based on the popular DOSBox, dbDOS quickly became an easy way to enable virtually any DOS-based application on Microsoft's Windows XP, Vista, Windows 7, Windows Server 2003 and Windows Server 2008, in both 32- and 64-bit versions of the operating systems. With enhanced support for dBASE III, dBASE IV (Versions 1, 2, 3), and dBASE V for DOS, dbDOS was positioned as an easy way to keep mission-critical applications up and running for years to come. dbDOS 1.0 was released on May 23, 2012, following the restructuring of dBase LLC in April 2012.
System requirements
The dbDOS virtual machine requires an Intel-based personal computer running Microsoft Windows. The computer must have at least 1 gigabyte of RAM and at least 100 megabytes of free hard drive space available. Once the virtual machine software known as dbDOS is installed, the user is free to install additional application software, provided they have the originally licensed installation media available.
Features
Easy-to-use wizard-based interface supports importing and exporting of multiple environment configurations
No modifications to dBASE or any other applications are required
Ability to run any DOS program (beta)
Very small footprint: under 10 megabytes of storage needed for the running program
Revive existing applications instead of rewriting them
Original installation media and license from the manufacturer are required to install and run applications within dbDOS
dBase dbDOS 1.1 was released as a free upgrade on June 22, 2012. New features in this release include:
Ability to print ESC and printer-specific codes to the host OS's printers (either networked or connected)
dBase dbDOS 1.5 was released as a free upgrade on September 25, 2012. New features in this release include:
Increased overall performance; enhanced memory capabilities with JEMM386 enable programs to run much faster
Clipboard feature for pasting text into DOS from Windows
dBase dbDOS PRO 2 was released as a new professional version on August 23, 2013. New features in this release include:
Enhanced printing capability that closely resembles printouts from the 1980s and 1990s
New copy from Windows clipboard to the dbDOS VM
New copy/paste from a unique dbDOS buffer to Windows
Double the performance of the 1.x product
New Print Screen functionality
Support for Windows 8
dBase dbDOS PRO 3 was released on May 2, 2014. New features in this release include:
PDF output to print to paper, to PDF, or to both at the same time
A new "Enhanced" printing feature that provides a more accurate version of reports than ever before
A new "Direct" printing feature that handles more ESC codes and sequences, for use when RAW, Interpreted, and Enhanced cannot get it right (Expert Mode)
A print driver that allows better printing across new printers, requiring a one-line code change
Easy DOS Configuration Wizard that simplifies the setup and launching of a single dBASE program
Enhanced DOS SHELL functionality
Display at resolutions above 800x600 that is 100% sharper and easier to read
Improved backup system that makes it easier to find the configuration
dBase dbDOS PRO 4 was released on May 6, 2015. New features in this release include:
New live file system that allows making changes outside of the dbDOS VM and seeing the changes immediately inside the VM
A dbDOS PRO 4 engine that is 38% faster than the last release
4DOS integration, allowing more advanced command processing in the product and the running of DOS-based scripts
A new look that makes the product easier to understand and use right out of the box
Enhanced PDF print functionality
Enhanced display at resolutions above 800x600, 100% sharper and easier to read
dBase dbDOS PRO 4N was released on May 18, 2015. New features in this release include:
Multi-user support
New live file system that allows making changes outside of the dbDOS VM and seeing the changes immediately inside the VM
dbDOS PRO 4 is 38% faster than the last release
dbDOS PRO 4N is built on top of the widely used dbDOS PRO 4 product and includes all the features of that product
Architecture
dBase dbDOS has a unique architecture.
dbDOS CONFIG.EXE: the dbDOS configuration program adds significant features to manage Windows shortcuts for running dBASE for DOS applications. One of the main comments on the initial dbDOS 1.0 product was that end-users wanted to be able to manage the shortcuts: to delete them and to share them. The new interface allows all of those features.
Windows shortcut: an icon that is used to represent a specific executable program configured to run inside the dbDOS VM. It will usually have a name under the icon, on the desktop or on the start menu, which describes what program will be executed when selected.
dbDOS Virtual Machine (dbDOS VM): "VM" stands for virtual machine; the dbDOS VM is a DOS emulation that allows products developed for DOS to run inside the virtual machine. The VM supports the Windows operating systems (XP, 2003, Vista, 2008, 7, 8, 8.1) in either 32- or 64-bit editions.
Applications (.COM, .EXE, .BAT): one of the main goals of using dbDOS CONFIG.EXE is to define a configuration that calls a .BAT, .COM, or .EXE file to be executed inside the dbDOS VM.
Command prompt: dbDOS CONFIG.EXE also allows configurations that take the user to the command prompt, which the user can then work with just like a DOS prompt.
MS print spooler: output from the dbDOS VM is sent directly to the print spooler for printout.
Print output: output is printed on the hardware available to the dbDOS VM.
Version history
See also
Comparison of platform virtualization software
Desktop virtualization
Platform virtualization
Virtual disk image
DOS
Paradox (database)
FoxPro
Lotus 1-2-3
References
External links
dBase dbDOS official website
dBase official website
Virtualization software MacOS software DOS emulators
CD-ROM
A CD-ROM (compact disc read-only memory) is a pre-pressed optical compact disc that contains data. Computers can read—but not write or erase—CD-ROMs, i.e. it is a type of read-only memory. Some CDs, called enhanced CDs, hold both computer data and audio, with the latter capable of being played on a CD player, while data (such as software or digital video) is only usable on a computer (such as ISO 9660 format PC CD-ROMs). During the 1990s, CD-ROMs were popularly used to distribute software and data for computers and fifth-generation video game consoles.
History
The earliest theoretical work on optical disc storage was done by independent researchers in the United States, including David Paul Gregg (1958) and James Russell (1965–1975). In particular, Gregg's patents were used as the basis of the LaserDisc specification that was co-developed between MCA and Philips after MCA purchased Gregg's patents, as well as the company he founded, Gauss Electrophysics. The LaserDisc was the immediate precursor to the CD, the primary difference being that the LaserDisc encoded information through an analog process whereas the CD used digital encoding. Key work to digitize the optical disc was performed during 1979–1980 by Toshi Doi and Kees Schouhamer Immink, who worked on a task force for Sony and Philips. The result was the Compact Disc Digital Audio (CD-DA) format, defined in 1980. The CD-ROM was later designed as an extension of the CD-DA, adapting the format to hold any form of digital data, with an initial storage capacity of 553 MB. Sony and Philips created the technical standard that defines the format of a CD-ROM in 1983, in what came to be called the Yellow Book. The CD-ROM was announced in 1984 and introduced by Denon and Sony at the first Japanese COMDEX computer show in 1985. In November 1985, several computer industry participants, including Microsoft, Philips, Sony, Apple and Digital Equipment Corporation, met to create a specification to define a file system format for CD-ROMs. The resulting specification, called the High Sierra format, was published in May 1986. It was eventually standardized, with a few changes, as the ISO 9660 standard in 1988. One of the first CD-ROM products to be made available to the public was the Grolier Academic Encyclopedia, presented at the Microsoft CD-ROM Conference in March 1986. CD-ROMs began being used in home video game consoles starting with the PC Engine CD-ROM² (TurboGrafx-CD) in 1988, while CD-ROM drives had also become available for home computers by the end of the 1980s. In 1990, Data East demonstrated an arcade system board that supported CD-ROMs, similar to 1980s laserdisc video games but with digital data, allowing more flexibility than older laserdisc games. By early 1990, about 300,000 CD-ROM drives had been sold in Japan, while 125,000 CD-ROM discs were being produced monthly in the United States. Some computers marketed in the 1990s were called "multimedia" computers because they incorporated a CD-ROM drive, which allowed for the delivery of several hundred megabytes of video, picture, and audio data.
CD-ROM discs
Media
CD-ROMs are identical in appearance to audio CDs, and data are stored and retrieved in a very similar manner (only differing from audio CDs in the standards used to store the data). Discs are made from a 1.2 mm thick disc of polycarbonate plastic, with a thin layer of aluminium to make a reflective surface.
The most common size of CD-ROM is 120 mm in diameter, though the smaller Mini CD standard with an 80 mm diameter, as well as shaped compact discs in numerous non-standard sizes and molds (e.g., business card-sized media), also exist. Data is stored on the disc as a series of microscopic indentations called "pits", with the non-indented spaces between them called "lands". A laser is shone onto the reflective surface of the disc to read the pattern of pits and lands. Because the depth of the pits is approximately one-quarter to one-sixth of the wavelength of the laser light used to read the disc, the reflected beam's phase is shifted in relation to the incoming beam, causing destructive interference and reducing the reflected beam's intensity. This is converted into binary data.
Standard
Several formats are used for data stored on compact discs, known as the Rainbow Books. The Yellow Book, created in 1983, defines the specifications for CD-ROMs, standardized in 1988 as the ISO/IEC 10149 standard and in 1989 as the ECMA-130 standard. The CD-ROM standard builds on top of the original Red Book CD-DA standard for CD audio. Other standards, such as the White Book for Video CDs, further define formats based on the CD-ROM specifications. The Yellow Book itself is not freely available, but the standards with the corresponding content can be downloaded for free from ISO or ECMA. There are several standards that define how to structure data files on a CD-ROM. ISO 9660 defines the standard file system for a CD-ROM. ISO 13490 is an improvement on this standard which adds support for non-sequential write-once and re-writeable discs such as CD-R and CD-RW, as well as multiple sessions. The ISO 13346 standard was designed to address most of the shortcomings of ISO 9660, and a subset of it evolved into the UDF format, which was adopted for DVDs. A bootable CD specification, called El Torito, was issued in January 1995, to make a CD emulate a hard disk or floppy disk.
Manufacture
Pre-pressed CD-ROMs are mass-produced by a process of stamping where a glass master disc is created and used to make "stampers", which are in turn used to manufacture multiple copies of the final disc with the pits already present. Recordable (CD-R) and rewritable (CD-RW) discs are manufactured by a different method, whereby the data are recorded on them by a laser changing the properties of a dye or phase transition material in a process that is often referred to as "burning".
CD-ROM format
Data stored on CD-ROMs follows the standard CD data encoding techniques described in the Red Book specification (originally defined for audio CD only). This includes cross-interleaved Reed–Solomon coding (CIRC), eight-to-fourteen modulation (EFM), and the use of pits and lands for coding the bits into the physical surface of the CD. The structures used to group data on a CD-ROM are also derived from the Red Book. Like an audio CD (CD-DA), a CD-ROM sector is built from 98 frames of 33 bytes each (24 bytes of user data, 8 bytes of error correction, and 1 byte of subcode per frame), giving 98 × 24 = 2,352 bytes of user data per sector. Unlike audio CDs, the data stored in these sectors corresponds to any type of digital data, not audio samples encoded according to the audio CD specification. To structure, address and protect this data, the CD-ROM standard further defines two sector modes, Mode 1 and Mode 2, which describe two different layouts for the data inside a sector.
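A quick arithmetic check of the frame and sector accounting above, as a sketch:

    FRAMES_PER_SECTOR = 98
    DATA_BYTES_PER_FRAME = 24    # plus 8 error-correction bytes and 1 subcode byte
    FRAME_BYTES = 24 + 8 + 1     # 33 bytes per frame at the frame level

    print(FRAMES_PER_SECTOR * DATA_BYTES_PER_FRAME)   # -> 2352 user bytes per sector
    print(FRAMES_PER_SECTOR * FRAME_BYTES)            # -> 3234 frame-level bytes per sector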
A track (a group of sectors) inside a CD-ROM only contains sectors in the same mode, but if multiple tracks are present in a CD-ROM, each track can have its sectors in a different mode from the rest of the tracks. They can also coexist with audio CD tracks, which is the case of mixed mode CDs. Sector structure Both Mode 1 and Mode 2 sectors use the first 16 bytes for sync and header information, but differ in the use of the remaining 2,336 bytes due to the use of error correction bytes. Unlike an audio CD, a CD-ROM cannot rely on error concealment by interpolation; a higher reliability of the retrieved data is required. To achieve improved error correction and detection, Mode 1, used mostly for digital data, adds a 32-bit cyclic redundancy check (CRC) code for error detection, and a third layer of Reed–Solomon error correction using a Reed–Solomon Product-like Code (RSPC). Mode 1 therefore contains 288 bytes per sector for error detection and correction, leaving 2,048 bytes per sector available for data. Mode 2, which is more appropriate for image or video data (where perfect reliability is less critical), contains no additional error detection or correction bytes, and therefore has 2,336 available data bytes per sector. Note that both modes, like audio CDs, still benefit from the lower layers of error correction at the frame level. Before being stored on a disc with the techniques described above, each CD-ROM sector is scrambled to prevent some problematic patterns from showing up. These scrambled sectors then follow the same encoding process described in the Red Book in order to be finally stored on a CD. The following compares the structure of sectors in CD-DA and CD-ROMs (bytes per 2,352-byte sector):
CD-DA: 2,352 user data
CD-ROM Mode 1: 12 sync, 4 header, 2,048 user data, 4 EDC, 8 reserved, 276 ECC
CD-ROM Mode 2: 12 sync, 4 header, 2,336 user data
The net byte rate of a Mode 1 CD-ROM, derived from the CD-DA audio standard, is 44,100 Hz × 16 bits/sample × 2 channels × (2,048 / 2,352) / 8 = 153,600 bytes/s = 150 KB/s (150 × 2¹⁰ bytes/s). This value, 150 KB/s, is defined as "1× speed". Therefore, for Mode 1 CD-ROMs, a 1× CD-ROM drive reads 153,600 / 2,048 = 75 consecutive sectors per second. The playing time of a standard CD is 74 minutes, or 4,440 seconds, contained in 333,000 blocks or sectors. Therefore, the net capacity of a Mode 1 CD-ROM is 650 MB (650 × 2²⁰ bytes). For 80 minute CDs, the capacity is 703 MB. CD-ROM XA extension CD-ROM XA is an extension of the Yellow Book standard for CD-ROMs that combines compressed audio, video and computer data, allowing all to be accessed simultaneously. It was intended as a bridge between CD-ROM and CD-i (Green Book); first announced in September 1988, it was published by Sony and Philips, and backed by Microsoft, in 1991. "XA" stands for eXtended Architecture. CD-ROM XA defines two new sector layouts, called Mode 2 Form 1 and Mode 2 Form 2 (which are different from the original Mode 2). XA Mode 2 Form 1 is similar to the Mode 1 structure described above, and can interleave with XA Mode 2 Form 2 sectors; it is used for data. XA Mode 2 Form 2 has 2,324 bytes of user data, and is similar to the standard Mode 2 but with error detection bytes added (though no error correction). It can interleave with XA Mode 2 Form 1 sectors, and it is used for audio/video data. Video CDs, Super Video CDs, Photo CDs, Enhanced Music CDs and CD-i use these sector modes.
The table below compares the structure of sectors in the CD-ROM XA modes (bytes per 2,352-byte sector):
XA Mode 2 Form 1: 12 sync, 4 header, 8 subheader, 2,048 user data, 4 EDC, 276 ECC
XA Mode 2 Form 2: 12 sync, 4 header, 8 subheader, 2,324 user data, 4 EDC
Disc images When a disc image of a CD-ROM is created, this can be done in either "raw" mode (extracting 2,352 bytes per sector, independent of the internal structure), or by obtaining only the sector's useful data (2,048/2,336/2,352/2,324 bytes depending on the CD-ROM mode). The file size of a disc image created in raw mode is always a multiple of 2,352 bytes (the size of a block). Disc image formats that store raw CD-ROM sectors include CCD/IMG, CUE/BIN, and MDS/MDF. The size of a disc image created from the data in the sectors will depend on the type of sectors it is using. For example, if a CD-ROM Mode 1 image is created by extracting only each sector's data, its size will be a multiple of 2,048; this is usually the case for ISO disc images. On a 74-minute CD-R, it is possible to fit larger disc images using raw mode, up to 333,000 × 2,352 = 783,216,000 bytes (~747 MB). This is the upper limit for raw images created on a 74 min or ≈650 MB Red Book CD. The 14.8% increase arises because a raw image also counts the header and error correction bytes that a Mode 1 data image discards. Capacity CD-ROM capacities are normally expressed with binary prefixes, subtracting the space used for error correction data. The capacity of a CD-ROM depends on how far the data track extends toward the disc's outer rim. A standard 120 mm, 700 MB CD-ROM can actually hold about 703 MB of data with error correction (or 847 MB total). In comparison, a single-layer DVD-ROM can hold 4.7 GB (4.7 × 10⁹ bytes) of error-protected data, more than six CD-ROMs' worth. CD-ROM drives CD-ROM discs are read using CD-ROM drives. A CD-ROM drive may be connected to the computer via an IDE (ATA), SCSI, SATA, FireWire, or USB interface, or a proprietary interface such as the Panasonic CD interface or the LMSI/Philips, Sony and Mitsumi standards. Virtually all modern CD-ROM drives can also play audio CDs (as well as Video CDs and other data standards) when used with the right software. Laser and optics CD-ROM drives employ a near-infrared 780 nm laser diode. The laser beam is directed onto the disc via an opto-electronic tracking module, which then detects whether the beam has been reflected or scattered. Transfer rates Original speed CD-ROM drives are rated with a speed factor relative to music CDs. If a CD-ROM is read at the same rotational speed as an audio CD, the data transfer rate is 150 KB/s, commonly called "1×" (with constant linear velocity, abbreviated "CLV"). At this data rate, the track moves along under the laser spot at about 1.2 m/s. To maintain this linear velocity as the optical head moves to different positions, the angular velocity is varied from about 500 rpm at the inner edge to 200 rpm at the outer edge. The 1× speed rating for CD-ROM (150 KB/s) is different from the 1× speed rating for DVDs (1.32 MB/s). Speed advancements By increasing the speed at which the disc is spun, data can be transferred at greater rates. For example, a CD-ROM drive that can read at 8× speed spins the disc at 1600 to 4000 rpm, giving a linear velocity of 9.6 m/s and a transfer rate of 1,200 KB/s. Above 12× speed most drives read at constant angular velocity (CAV, constant rpm) so that the motor is not made to change from one speed to another as the head seeks from place to place on the disc. In CAV mode the "×" number denotes the transfer rate at the outer edge of the disc, where it is a maximum.
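The figures quoted in the last few sections (the 150 KB/s net rate, 75 sectors per second, the ~650 MB capacity, and the 500 to 200 rpm CLV range) all follow from a handful of constants and can be re-derived in a few lines. The sketch below is purely illustrative; the 25 mm and 58 mm program-area radii are standard CD geometry assumed here, not stated in this article.

```python
import math

# Re-deriving the Mode 1 CD-ROM figures quoted in the text above.
SAMPLE_RATE = 44_100                  # Hz, Red Book audio
BYTES_PER_SAMPLE_PAIR = 4             # 16 bits x 2 channels / 8 bits per byte
SECTOR_RAW, MODE1_DATA = 2_352, 2_048

raw_rate = SAMPLE_RATE * BYTES_PER_SAMPLE_PAIR       # 176,400 bytes/s
net_rate = raw_rate * MODE1_DATA // SECTOR_RAW       # 153,600 bytes/s = 150 KB/s
sectors_per_sec = raw_rate // SECTOR_RAW             # 75 sectors/s at 1x
capacity = 74 * 60 * sectors_per_sec * MODE1_DATA    # 333,000 sectors of data

print(f"net rate: {net_rate} B/s, sectors/s: {sectors_per_sec}")
print(f"74-minute capacity: {capacity / 2**20:.0f} MB (binary)")   # ~650

# CLV: the track passes the laser at ~1.2 m/s at 1x, so the spindle speed
# depends on the track radius (program area assumed to span 25-58 mm).
def rpm(speed_factor: float, radius_m: float) -> float:
    return speed_factor * 1.2 / (2 * math.pi * radius_m) * 60

print(f"1x: {rpm(1, 0.025):.0f} rpm (inner) to {rpm(1, 0.058):.0f} rpm (outer)")
print(f"8x: {rpm(8, 0.025):.0f} rpm (inner) to {rpm(8, 0.058):.0f} rpm (outer)")
```

Running it reproduces the article's numbers: roughly 460 to 200 rpm at 1×, and roughly 3,700 to 1,600 rpm at 8×.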
20× was thought to be the maximum speed due to mechanical constraints until Samsung Electronics introduced the SCR-3230, a 32× CD-ROM drive which uses a ball-bearing system to balance the spinning disc in the drive to reduce vibration and noise. As of 2004, the fastest transfer rate commonly available is about 52×, or 10,400 rpm and 7.62 MB/s. Higher spin speeds are limited by the strength of the polycarbonate plastic of which the discs are made. At 52×, the linear velocity of the outermost part of the disc is around 65 m/s. However, improvements can still be obtained using multiple laser pickups, as demonstrated by the Kenwood TrueX 72×, which uses seven laser beams and a rotation speed of approximately 10×. The first 12× drive was released in late 1996. Above 12× speed, there are problems with vibration and heat. CAV drives give speeds up to 30× at the outer edge of the disc with the same rotational speed as a standard (constant linear velocity, CLV) 12× drive, or 32× with a slight increase. However, due to the nature of CAV (the linear speed at the inner edge is still only 12×, increasing smoothly in between), the actual throughput increase is less than 30/12: roughly 20× on average for a completely full disc, and even less for a partially filled one. Physical limitations Problems with vibration, owing to limits on achievable symmetry and strength in mass-produced media, mean that CD-ROM drive speeds have not massively increased since the late 1990s. Over 10 years later, commonly available drives vary between 24× (slimline and portable units, 10× spin speed) and 52× (typically CD- and read-only units, 21× spin speed), all using CAV to achieve their claimed "max" speeds, with 32× through 48× most common. Even so, these speeds can cause poor reading (drive error correction having become very sophisticated in response) and even shattering of poorly made or physically damaged media, with small cracks rapidly growing into catastrophic breakages when centripetally stressed at 10,000–13,000 rpm (i.e. 40–52× CAV). High rotational speeds also produce undesirable noise from disc vibration, rushing air and the spindle motor itself. Most 21st-century drives allow forced low-speed modes (by use of small utility programs) for the sake of safety, accurate reading or silence, and will automatically fall back if numerous sequential read errors and retries are encountered. Workarounds Other methods of improving read speed were trialled, such as using multiple optical beams, increasing throughput up to 72× with a 10× spin speed, but along with other technologies like 90–99 minute recordable media, GigaRec and double-density compact disc (Purple Book standard) recorders, their utility was nullified by the introduction of consumer DVD-ROM drives capable of consistent 36× equivalent CD-ROM speeds (4× DVD) or higher. Additionally, with a 700 MB CD-ROM fully readable in under 2½ minutes at 52× CAV, increases in actual data transfer rate have a decreasing influence on overall effective drive speed once other factors such as loading/unloading, media recognition, spin-up/down and random seek times are taken into account, making for much diminished returns on development investment. A similar stratification effect has since been seen in DVD development, where maximum speed has stabilised at 16× CAV (with exceptional cases between 18× and 22×) and capacity at 4.3 and 8.5 GB (single and dual layer), with higher speed and capacity needs instead being catered to by Blu-ray drives.
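The physical limits just described can be sanity-checked with the same geometry used earlier. The back-of-the-envelope sketch below assumes the standard 1.2 m/s track speed at 1× and program-area radii of about 25 mm and 58 mm (typical CD values, not stated in this article):

```python
import math

# Sanity-checking the 52x CAV figures quoted above.
V_1X = 1.2            # m/s, track speed at 1x
R_OUTER = 0.058       # m, assumed outermost track radius
R_INNER = 0.025       # m, assumed innermost track radius

# A 52x CAV drive spins at a fixed rate chosen so the OUTER edge hits 52x.
rim_velocity = 52 * V_1X                                    # 62.4 m/s
spindle_rpm = rim_velocity / (2 * math.pi * R_OUTER) * 60   # ~10,300 rpm

# At that fixed rpm, the inner edge of the disc is read much more slowly:
inner_speed = 52 * R_INNER / R_OUTER                        # ~22x

print(f"rim velocity at 52x: {rim_velocity:.1f} m/s")       # close to the ~65 m/s quoted
print(f"spindle speed: {spindle_rpm:,.0f} rpm")             # within the 10,000-13,000 range
print(f"effective speed at inner edge: {inner_speed:.0f}x")
```

The same inner/outer radius ratio explains why a CAV drive rated 30× at the outer edge reads only about 12× at the inner edge, as noted above.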
Speed ratings CD-Recordable drives are often sold with three different speed ratings: one speed for write-once operations, one for re-write operations, and one for read-only operations. The speeds are typically listed in that order; i.e. a 12×/10×/32× CD drive can, CPU and media permitting, write to CD-R discs at 12× speed (1.76 MB/s), write to CD-RW discs at 10× speed (1.46 MB/s), and read from CDs at 32× speed (4.69 MB/s). Speed table Each speed rating is a simple multiple of the 150 KB/s base rate: 1× = 150 KB/s, 4× = 600 KB/s, 8× = 1,200 KB/s, 12× = 1,800 KB/s, 24× = 3,600 KB/s, 32× = 4,800 KB/s, 48× = 7,200 KB/s, and 52× = 7,800 KB/s. Copyright issues Software distributors, and in particular distributors of computer games, often make use of various copy protection schemes to prevent software running from any media besides the original CD-ROMs. This differs somewhat from audio CD protection in that it is usually implemented in both the media and the software itself. The CD-ROM itself may contain "weak" sectors to make copying the disc more difficult, and additional data that may be difficult or impossible to copy to a CD-R or disc image, but which the software checks for each time it is run to ensure that an original disc, and not an unauthorized copy, is present in the computer's CD-ROM drive. Manufacturers of CD writers (CD-R or CD-RW) are encouraged by the music industry to ensure that every drive they produce has a unique identifier, which will be encoded by the drive on every disc that it records: the RID, or Recorder Identification Code. This is a counterpart to the Source Identification Code (SID), an eight-character code beginning with "IFPI" that is usually stamped on discs produced by CD recording plants. See also ATA Packet Interface (ATAPI) Optical recording (history of) CD/DVD authoring Compact Disc Digital Audio Computer hardware DVD-Audio DVD-ROM MultiLevel Recording Optical disc drive Phase-change Dual Thor-CD DVP Media, patent holder for self-loading and self configuring CD-ROM technology List of optical disc manufacturers Notes References Compact disc Rotating disc computer storage media Ecma standards Optical computer storage Optical computer storage media Rainbow Books Video game distribution
Operating System (OS)
932
List of Linux adopters Linux adopters are companies, organizations and individuals who have moved from other operating systems to Linux (i.e. "desktop Linux"). Desktop Linux has not displaced Microsoft Windows to a large degree, except on servers. However, Microsoft has adopted the Linux kernel in recent versions of Windows 10 (through the Windows Subsystem for Linux), in addition to mainly using its own kernel. Linux is the most popular operating system running on Microsoft Azure, i.e. for Microsoft's customers. Microsoft also has an operating system, Azure Sphere, based solely on the Linux kernel. Government As local governments come under pressure from institutions such as the World Trade Organization and the International Intellectual Property Alliance, some have turned to Linux and other free software as an affordable, legal alternative to both pirated software and expensive proprietary computer products from Microsoft, Apple and other commercial companies. The spread of Linux affords some leverage for these countries when companies from the developed world bid for government contracts (since a low-cost option exists), while furnishing an alternative path to development for countries like India and Pakistan that have many citizens skilled in computer applications but cannot afford technological investment at "First World" prices. Cost is not the only factor being considered, though: many governmental institutions (in the public and military sectors) in North America and the European Union make the transition to Linux due to its stability and the openness of its source code, which in turn improves information security. Africa The South African Social Security Agency (SASSA) deployed multi-station Linux desktops to address budget and infrastructure constraints in 50 rural sites. First National Bank switched more than 12,000 desktop computers to Linux by 2007. Asia East The People's Republic of China exclusively uses Linux as the operating system for its Loongson processor family, with the aim of technology independence. Kylin is used by the People's Liberation Army in the People's Republic of China; the first version used FreeBSD, but since release 3.0 it has employed Linux. In 2005, the state-owned Industrial and Commercial Bank of China (ICBC) began installing Linux in all of its 20,000 retail branches as the basis for its web server and a new terminal platform. North Korea uses a Linux distribution developed by the Korea Computer Center, called Red Star OS, on its computers. Prior to its release in 2008, Red Hat Linux or Windows XP were used. West In 2003, the Turkish government decided to create its own Linux distribution, Pardus, developed by UEKAE (National Research Institute of Electronics and Cryptology). The first version, Pardus 1.0, was officially announced on 27 December 2005. North In late 2010, Vladimir Putin signed a plan to move the Russian Federation government towards free software, including Linux, in the second quarter of 2012. South The Government of India's CDAC developed an Indian Linux distribution, BOSS GNU/Linux (Bharat Operating System Solutions). It is customized to suit India's digital environment and supports most of the Indian languages. The Government of Kerala, India, announced its official support for free/open-source software in its State IT Policy of 2001, which was formulated after the first-ever free software conference in India, "Freedom First!", held in July 2001 in Trivandrum, the capital of Kerala, where Richard Stallman inaugurated the Free Software Foundation of India.
Since then, Kerala's IT Policy has been significantly influenced by FOSS, with several major initiatives such as the IT@School Project, possibly the largest single-purpose deployment of Linux in the world, which led to the formation of the International Centre for Free and Open Source Software (ICFOSS) in 2009. In March 2014, with the end of support for Windows XP, the Government of Tamil Nadu, India, advised all its departments to install BOSS Linux (Bharat Operating System Solutions). The Government of Pakistan established a Technology Resource Mobilization Unit in 2002 to enable groups of professionals to exchange views and coordinate activities in their sectors and to educate users about free software alternatives. Linux is an option for poor countries which have little revenue for public investment; Pakistan is using open-source software in public schools and colleges, and hopes to run all government services on Linux eventually. South-East In 2010, the Philippines fielded an Ubuntu-powered national voting system. By July 2010, Malaysia had switched 703 of the state's 724 agencies to free and open-source software, with a Linux-based operating system used. The Chief Secretary to the Government cited the "general acceptance of its promise of better quality, higher reliability, more flexibility and lower cost". Americas North Cuba Students from the University of Information Science in Cuba launched their own distribution of Linux, called Nova, to promote the replacement of Microsoft Windows on civilian and government computers, a project that is now supported by the Cuban Government. By early 2011 the Universidad de Ciencias Informáticas announced that it would migrate more than 8,000 PCs to this new operating system. U.S. In July 2001, the White House started switching whitehouse.gov to an operating system based on Red Hat Linux and using the Apache HTTP Server. The installation was completed in February 2009. In October 2009, the White House servers adopted Drupal, an open-source content management system software distribution. The United States Department of Defense uses Linux: "the U.S. Army is the single largest installed base for Red Hat Linux", and the US Navy nuclear submarine fleet runs on Linux, including its sonar systems. In June 2012, the US Navy signed a US$27,883,883 contract with Raytheon to install Linux ground control software for its fleet of vertical take-off and landing (VTOL) Northrop Grumman MQ-8B Fire Scout drones. The contract involves Naval Air Station Patuxent River, Maryland, which has already spent US$5,175,075 in preparation for the Linux systems. In April 2006, the US Federal Aviation Administration announced that it had completed a migration to Red Hat Enterprise Linux in one third of the scheduled time and about US$15 million under budget. The switch saved a further US$15 million in datacenter operating costs. The US National Nuclear Security Administration operates the world's tenth fastest supercomputer, the IBM Roadrunner, which uses Red Hat Enterprise Linux along with Fedora as its operating systems. The city government of Largo, Florida uses Linux and has won international recognition for its implementation, indicating that it provides "extensive savings over more traditional alternatives in city-wide applications." South Brazil uses PC Conectado, a program utilizing Linux. In 2004, Venezuela's government approved Decree 3390, giving preference to free software in public administration.
One result of this policy is the development of Canaima, a Debian-based Linux distribution. Europe Austria Austria's city of Vienna chose to start migrating its desktop PCs to Debian-based Wienux. However, the idea was largely abandoned, because the necessary software was incompatible with Linux. Czech Republic Czech Post migrated 4,000 servers and 12,000 clients to Novell Linux in 2005. France In 2007, France's national police force (the National Gendarmerie) started moving its 90,000 desktops from Windows XP to an Ubuntu-based OS, GendBuntu, over concerns about the additional training costs of moving to Windows Vista, and following the success of OpenOffice.org roll-outs. The force saved about €50 million on software licensing between 2004 and 2008. The migration was largely completed in 2014. France's Ministry of Agriculture uses Mandriva Linux. The French Parliament switched to using Ubuntu on desktop PCs in 2007. However, in 2012, it was decided to let each Member of Parliament choose between Windows and Linux. Germany The city government of Munich, Germany, chose in 2003 to start migrating its 14,000 desktops to Debian-based LiMux. Five years later (November 2008), more than 80 percent of workstations used OpenOffice and 100 percent used Firefox/Thunderbird, but Linux itself had reached an adoption rate of only 20 percent (June 2010). The effort was later reorganized, focusing on smaller deployments and winning over staff to the value of the program. By the end of 2011 the program had exceeded its goal and changed over 9,000 desktops to Linux. The city of Munich reported at the end of 2012 that the migration to Linux had been highly successful and had already saved the city over €11 million (US$14 million). Deputy Mayor Josef Schmid later said that the city was putting together an independent expert group to look at moving back to Microsoft due to issues with LiMux. The primary issue has been compatibility: users in the rest of Germany who use other (Microsoft) software have had trouble with the files generated by Munich's open-source applications. The second is price, with Schmid saying that the city now has the impression that "Linux is very expensive" due to custom programming. The independent group will advise on the best course of action, and if it recommends using Microsoft software, Schmid says a switch back is not impossible. The city council said it had already saved more than US$10 million, and that there was no major issue with the switch to Linux. Some observers, such as Silviu Stahie of Softpedia, have indicated that the attempted rejection of Linux was influenced by Microsoft and its supporters, and that this is predominantly a political issue and not a technical one. Microsoft's German headquarters committed to move to Munich during this dispute, and in February 2017 the city council considered the move from the Linux-based OS to Windows 10, shortly after Microsoft Germany moved its headquarters to Munich. The Federal Employment Office of Germany (Bundesagentur für Arbeit) has migrated 13,000 public workstations from Windows NT to openSUSE. Iceland In March 2012, Iceland announced that it wishes to migrate to open-source software in public institutions. Schools have already migrated from Windows to Ubuntu Linux.
North Macedonia The Republic of North Macedonia's Ministry of Education and Science deployed more than 180,000 Ubuntu-based classroom desktops, and has encouraged every student in the Republic of North Macedonia to use Ubuntu computer workstations. The Netherlands The Dutch Police Internet Research and Investigation Network (iRN) has only used free and open-source software based on open standards, publicly developed with the source code available on the Internet for audit, since 2003. It uses 2,200 Ubuntu workstations. Russia In 2014, Russia announced plans to move its Ministry of Health to Linux as a counter to sanctions over the annexation of Crimea by the Russian Federation and a means of hurting US corporate interests, such as Microsoft. In 2018, Russia began adopting Astra Linux, an operating system certified to handle data classified as of "special importance", on its military computer systems. Spain Spain was noted as the furthest along the road to Linux adoption in 2003, for example with the Linux distribution LinEx. The regional Autonomous Government of Andalucía in Spain developed its own Linux distribution, called Guadalinex, in 2004. The city government of Barcelona in Spain announced in 2018 that it would migrate all desktop software from proprietary to free/open-source alternatives, and would gradually migrate from proprietary operating systems to Linux. Switzerland Switzerland's Canton of Solothurn decided in 2001 to migrate its computers to Linux, but in 2010 the canton made a U-turn by deciding to use Windows 7 for desktop clients. United Kingdom Hackney London Borough Council used Linux laptops for its 4,000 employees to allow working from home during the COVID-19 pandemic. Education Linux is often used in technical disciplines at universities and research centres. This is due to several factors, including that Linux is available free of charge and includes a large body of free/open-source software. To some extent, the technical competence of computer science and software engineering academics is also a contributor, as are stability, maintainability, and upgradability. IBM ran an advertising campaign entitled "Linux is Education" featuring a young boy who was supposed to be "Linux". Examples of large-scale adoption of Linux in education include the following: The OLPC XO-1 (previously called the MIT $100 laptop and The Children's Machine) is an inexpensive laptop running Linux, which will be distributed to millions of children as part of the One Laptop Per Child project, especially in developing countries. Europe Germany Germany has announced that 560,000 students in 33 universities will migrate to Linux. In 2012, the Leibniz-Rechenzentrum (Leibniz Supercomputing Centre, LRZ) of the Bavarian Academy of Sciences and Humanities unveiled the SuperMUC, the world's fourth most powerful supercomputer. The computer is x86-based and features 155,000 processor cores with a maximum speed of 3 petaflops of processing power and 324 terabytes of RAM. Its operating system is SUSE Linux Enterprise Server. Italy Schools in Bolzano, Italy, with a student population of 16,000, switched to a custom distribution of Linux (FUSS Soledad GNU/Linux) in September 2005. North Macedonia The Republic of North Macedonia deployed 5,000 Linux desktops running Ubuntu across all 468 public schools and 182 computer labs (December 2005). Later in 2007, another 180,000 Ubuntu thin client computers were deployed. U.K.
In 2013, Westcliff High School for Girls in the United Kingdom successfully moved from Windows to openSUSE. Orwell High School, a school with about 1,000 students in Felixstowe, England, has switched to Linux. The school had just received Specialist School for Technology status through a government initiative. Switzerland All primary and secondary public schools in the Swiss Canton of Geneva switched to using Ubuntu for the PCs used by teachers and students in 2013–14. The switch was completed by all 170 primary public schools, covering over 2,000 computers. The migration of the canton's 20 secondary schools was planned for the school year 2014–15. Americas Brazil has 35 million students in over 50,000 schools using 523,400 computer stations, all running Linux. 22,000 students in the US state of Indiana had access to Linux workstations at their high schools in 2006. In 2009, Venezuela's Ministry of Education began a project called Canaima-educativo, to provide all students in public schools with "Canaimita" laptop computers with the Canaima Debian-based Linux distribution pre-installed, as well as with open-source educational content. Asia China The Chinese government is buying 1.5 million Linux Loongson PCs as part of its plans to support its domestic industry. In addition, the province of Jiangsu will install as many as 150,000 Linux PCs, using Loongson processors, in rural schools starting in 2009. Indonesia By December 2013, about 500 Indonesian schools were running openSUSE. Georgia In 2004, Georgia began running all its school computers and LTSP thin clients on Linux, mainly using Kubuntu, Ubuntu and stripped Fedora-based distros. India The Indian government's tablet computer initiative for student use employs Linux as the operating system, as part of its drive to produce a tablet PC for under 1,500 rupees (US$35). The Indian state of Tamil Nadu plans to distribute 100,000 Linux laptops to its students. Government officials of Kerala, India, announced they will use only free software, running on the Linux platform, for computer education, starting with the 2,650 government and government-aided high schools. The Indian state of Tamil Nadu has issued a directive to local government departments asking them to switch over to open-source software, in the wake of Microsoft's decision to end support for Windows XP in April 2014. Philippines The Philippines has deployed 13,000 desktops running on Fedora; the first 10,000 were delivered in December 2007 by Advanced Solutions Inc. Another 10,000 desktops running Edubuntu and Kubuntu are planned. Russia In October 2007, Russia announced that all its school computers will run on Linux, in order to avoid the cost of licensing currently unlicensed software. Home Sony's PlayStation 3 came with a hard disk (20 GB, 60 GB or 80 GB) and was specially designed to allow easy installation of Linux on the system. However, Linux was prevented from accessing certain functions of the PlayStation, such as 3D graphics. Sony also released a Linux kit for its PlayStation 2 console (see Linux for PlayStation 2). PlayStation hardware running Linux has occasionally been used in small-scale distributed computing experiments, due to the ease of installation and the relatively low price of a PS3 compared to other hardware choices offering similar performance. On April 1, 2010, Sony disabled the ability to install Linux, "due to security concerns", starting with firmware version 3.21.
In 2008, many netbook models were shipped with Linux installed, usually with a lightweight distribution, such as Xandros or Linpus, to suit their limited hardware resources. Through 2007 and 2008, Linux distributions with an emphasis on ease of use, such as Ubuntu, became increasingly popular as home desktop operating systems, with some OEMs, such as Dell, offering models with Ubuntu or other Linux distributions on desktop systems. In 2011, Google introduced its Chromebooks, web thin clients based on Linux and supplying just a web browser, file manager and media player. They also have the ability to remote desktop into other computers via the free Chrome Remote Desktop extension. In 2012 the first Chromebox, a desktop equivalent of the Chromebook, was introduced. By 2013 Chromebooks had captured 20–25% of the US market for sub-$300 laptops. Android, unveiled by Google in 2007, is the smartphone and tablet operating system which, as of late 2013, runs on 80% of smartphones and 60% of tablets worldwide; it is pre-installed on devices by brand hardware manufacturers. In 2013, Valve publicly released ports of Steam and the Source engine to Linux, allowing many popular titles by Valve such as Team Fortress 2 and Half-Life 2 to be played on Linux. Later that same year, Valve announced their upcoming Steam Machine consoles, which would by default run SteamOS, an operating system based on the Linux kernel. Valve has also created a compatibility layer called Proton, which makes it possible to run many Windows games on Linux. In March 2014, Ubuntu claimed 22,000,000 users. Businesses and non-profits Linux is used extensively on servers in businesses, and has been for a long time. Linux is also used in some corporate environments as the desktop platform for their employees, with commercially available solutions including Red Hat Enterprise Linux, SUSE Linux Enterprise Desktop, and Ubuntu. Free I.T. Athens, founded in 2005 in Athens, Georgia, United States, is a non-profit organization dedicated to rescuing computers from landfills, recycling them or refurbishing them using Linux exclusively. Burlington Coat Factory has used Linux exclusively since 1999. Ernie Ball, known for its famous Super Slinky guitar strings, has used Linux as its desktop operating system since 2000. Novell is undergoing a migration from Windows to Linux. Of its 5,500 employees, 50% were successfully migrated as of April 2006. This was expected to rise to 80% by November. Wotif, the Australian hotel booking website, migrated from Windows to Linux servers to keep up with the growth of its business. Union Bank of California announced in January 2007 that it would standardize its IT infrastructure on Red Hat Enterprise Linux in order to lower costs. Peugeot, the European car maker, announced plans to deploy up to 20,000 copies of Novell's Linux desktop, SUSE Linux Enterprise Desktop, and 2,500 copies of SUSE Linux Enterprise Server, in 2007. Mindbridge, a software company, announced in September 2007 that it had migrated a large number of Windows servers onto a smaller number of Linux servers and a few BSD servers. It claims to have saved "bunches of money". Virgin America, the low-cost U.S. airline, uses Linux to power its in-flight entertainment system, RED. Amazon.com, the US-based mail-order retailer, uses Linux "in nearly every corner of its business". Google uses a version of Ubuntu internally nicknamed Goobuntu.
In August 2017, Google announced that it would be replacing Goobuntu with gLinux, an in-house distro based on the Debian Testing branch. IBM does extensive development work for Linux and also uses it on desktops and servers internally. The company also created a TV advertising campaign: IBM supports Linux 100%. The Wikimedia Foundation moved to running its Wikipedia servers on Ubuntu in late 2008, after having previously used a combination of Red Hat and Fedora. DreamWorks Animation adopted Linux in 2001, and uses more than 1,000 Linux desktops and more than 3,000 Linux servers. The Chicago Mercantile Exchange employs an all-Linux computing infrastructure and has used it to process over a quadrillion dollars' worth of financial transactions. The Chi-X pan-European equity exchange runs its MarketPrizm trading platform software on Linux. The London Stock Exchange uses the Linux-based MillenniumIT Millennium Exchange software for its trading platform and predicts that moving to Linux from Windows will give it annual cost savings of at least £10 million ($14.7 million) from 2011 to 2012. The New York Stock Exchange uses Linux to run its trading applications. Mobexpert Group, the leading furniture manufacturer and retailer in Romania, extensively uses Linux, LibreOffice and other free software in its data communications and processing systems, including some desktops. American electronic music composer Kim Cascone migrated from Apple Mac to Ubuntu for his music studio, performance use and administration in 2009. Laughing Boy Records, under the direction of owner Geoff Beasley, switched from doing audio recording on Windows to Linux in 2004 as a result of Windows spyware problems. Nav Canada's new Internet Flight Planning System, rolled out in 2011, is written in Python and runs on Red Hat Linux. Electrolux's Frigidaire Infinity i-kitchen is a "smart appliance" refrigerator that uses a Linux operating system, running on an embedded 400 MHz Freescale i.MX25 processor with 128 MB of RAM and a 480×800 touch panel. DukeJets LLC (USA) and Duke Jets Ltd. (Canada), air charter brokerage companies, switched from Windows to Ubuntu Linux in 2012. Banco do Brasil, the biggest bank in Brazil, has moved nearly all desktops to Linux, except some corporate ones and a few that are needed to operate specific hardware. It began migration of its servers to Linux in 2002; branch servers and ATMs all run Linux. The distribution of choice is openSUSE 11.2. KLM, the Royal Aviation Company of the Netherlands, uses Linux on the OSS-based version of its KLM WebFarm. Ocado, the online supermarket, uses Linux in its data centres. Kazi Farms Group, a large poultry and food products company in Bangladesh, migrated 1,000 computers to Linux. An associated TV channel, Deepto TV, as well as an associated daily newspaper, the Dhaka Tribune, also migrated to Linux. Zando Computer, an IT consulting company located in Bucharest, Romania, uses Linux for its business needs (server and desktop). The company recommends to its clients, and actively deploys, Linux, LibreOffice (OpenDocument format solutions) and other categories of free software. At the 2015 Consumer Electronics Show, Nvidia CEO Jen-Hsun Huang made his extensive presentations using Ubuntu Linux. The Statistical Office of the Republic of Serbia's ICT report for 2017 showed that 19.8% of the companies in Serbia use Linux as their main operating system (up from 14.5% in 2016).
Linux is widely used in large Serbian enterprises (companies that fulfil two out of three conditions: 250+ employees, revenue of 35+ million euros, total assets of 17.5+ million euros), where Linux adoption has reached 40.9%. Scientific institutions NASA decided to switch the International Space Station laptops running Windows XP to Debian 6. Both CERN and Fermilab use Scientific Linux in all their work; this includes running the Large Hadron Collider, the Dark Energy Camera, and the 20,000 internal servers of CERN. The WLCG is composed of 576 sites with more than 390,000 processors and 150 petabytes of storage, and uses Linux on all its nodes. Canada's largest supercomputer, the IBM iDataPlex cluster computer at the University of Toronto, uses Linux as its operating system. The Internet Archive uses hundreds of x86 servers to catalogue the Internet, all of them running Linux. The ASV Roboat autonomous robotic sailboat runs on Linux. Tianhe-I, the world's fastest supercomputer as of October 2010, located at the National Centre for Supercomputing in Tianjin, China, runs Linux. The University of Portsmouth in the United Kingdom has deployed a "cost effective" high-performance computer that will be used to analyse data from telescopes around the world, run simulations and test the current theories about the universe. Its operating system is Scientific Linux. Dr David Bacon of the University of Portsmouth said: "Our Institute of Cosmology is in a great position to use this high performance computer to make real breakthroughs in understanding the universe, both by analysing the very latest astronomical observations, and by calculating the consequences of mind-boggling new theories...By selecting Dell's industry-standard hardware and open-source software we're able to free up budget that would have normally been spent on costly licences and reinvest it." In September 2011, ten autonomous unmanned air vehicles were flown in flocking flight by the École Polytechnique Fédérale de Lausanne's Laboratory of Intelligence Systems in Lausanne, Switzerland. The UAVs sense each other and control their own flight in relation to one another; each has an independent processor running Linux to accomplish this. Celebrities British actor Stephen Fry stated in August 2012 that he uses Linux: "Do I use Linux on any of my devices? Yes – I use Ubuntu these days; it seems the friendliest." In 2008, Jamie Hyneman, co-host of the American television series Mythbusters, advocated Linux-based operating systems as a solution to software bloat. Science fiction writer Cory Doctorow uses Ubuntu. Actor Wil Wheaton sometimes uses Linux distributions, but not as his primary operating system. See also Comparison of open source and closed source References Linux
Operating System (OS)
933
Word processor A word processor (WP) is a device or computer program that provides for input, editing, formatting, and output of text, often with some additional features. Early word processors were stand-alone devices dedicated to the function, but current word processors are word processor programs running on general-purpose computers. The functions of a word processor program fall somewhere between those of a simple text editor and a fully featured desktop publishing program. However, the distinctions between these three have changed over time, and they had become unclear by the 2010s. Background Word processors did not develop out of computer technology. Rather, they evolved from mechanical machines and only later merged with the computer field. The history of word processing is the story of the gradual automation of the physical aspects of writing and editing, and then of the refinement of the technology to make it available to corporations and individuals. The term word processing appeared in American offices in the early 1970s, centered on the idea of streamlining the work of typists, but the meaning soon shifted toward the automation of the whole editing cycle. At first, the designers of word processing systems combined existing technologies with emerging ones to develop stand-alone equipment, creating a new business distinct from the emerging world of the personal computer. The concept of word processing arose from the more general data processing, which since the 1950s had been the application of computers to business administration. Through history, there have been three types of word processors: mechanical, electronic and software. Mechanical word processing The first word processing device (a "Machine for Transcribing Letters" that appears to have been similar to a typewriter) was patented by Henry Mill; the machine was capable of "writing so clearly and accurately you could not distinguish it from a printing press". More than a century later, another patent appeared in the name of William Austin Burt for the typographer. In the late 19th century, Christopher Latham Sholes created the first recognizable typewriter, although it was large and was described as a "literary piano". The only "word processing" these mechanical systems could perform was to change where letters appeared on the page, to fill in spaces that were previously left on the page, or to skip over lines. It was not until decades later that the introduction of electricity and electronics into typewriters began to help the writer with the mechanical part. The term "word processing" (translated from the German word Textverarbeitung) itself was created in the 1950s by Ulrich Steinhilper, a German IBM typewriter sales executive. However, it did not make its appearance in 1960s office management or computing literature, though many of the ideas, products, and technologies to which it would later be applied were already well known. But by 1971 the term was recognized by the New York Times as a business "buzz word". Word processing paralleled the more general "data processing", or the application of computers to business administration. Thus by 1972 discussion of word processing was common in publications devoted to business office management and technology, and by the mid-1970s the term would have been familiar to any office manager who consulted business periodicals. Electromechanical and electronic word processing By the late 1960s, IBM had developed the IBM MT/ST (Magnetic Tape/Selectric Typewriter).
This was a model of the IBM Selectric typewriter from earlier in the decade, but built into its own desk and integrated with magnetic tape recording and playback facilities, along with controls and a bank of electrical relays. The MT/ST automated word wrap, but it had no screen. This device allowed a user to rewrite text that had been written on another tape, and it also allowed limited collaboration in the sense that a user could send the tape to another person to let them edit the document or make a copy. It was a revolution for the word processing industry. In 1969, the tapes were replaced by magnetic cards. These memory cards were inserted into an extra device that accompanied the MT/ST, which could read and record users' work. In the early 1970s, word processing began to slowly shift from glorified typewriters augmented with electronic features to become fully computer-based (although only with single-purpose hardware) with the development of several innovations. Just before the arrival of the personal computer (PC), IBM developed the floppy disk. In the early 1970s, the first word-processing systems appeared which allowed display and editing of documents on CRT screens. During this era, these early stand-alone word processing systems were designed, built, and marketed by several pioneering companies. Linolex Systems was founded in 1970 by James Lincoln and Robert Oleksiak. Linolex based its technology on microprocessors, floppy drives and software. It was a computer-based system for application in the word processing businesses and it sold systems through its own sales force. With a base of installed systems in over 500 sites, Linolex Systems sold 3 million units in 1975 — a year before the Apple computer was released. At that time, the Lexitron Corporation also produced a series of dedicated word-processing microcomputers. Lexitron was the first to use a full-sized video display screen (CRT) in its models by 1978. Lexitron also used 5¼-inch floppy diskettes, which became the standard in the personal computer field. The program disk was inserted in one drive, and the system booted up. The data diskette was then put in the second drive. The operating system and the word processing program were combined in one file. Another of the early word processing adopters was Vydec, which in 1973 created the first modern text processor, the "Vydec Word Processing System". It had multiple built-in functions, such as the ability to share content by diskette and print it. The Vydec Word Processing System sold for $12,000 at the time (about $60,000 adjusted for inflation). The Redactron Corporation (organized by Evelyn Berezin in 1969) designed and manufactured editing systems, including correcting/editing typewriters, cassette and card units, and eventually a word processor called the Data Secretary. The Burroughs Corporation acquired Redactron in 1976. A CRT-based system by Wang Laboratories became one of the most popular systems of the 1970s and early 1980s. The Wang system displayed text on a CRT screen, and incorporated virtually every fundamental characteristic of word processors as they are known today. While early computerized word processor systems were often expensive and hard to use (like the computer mainframes of the 1960s), the Wang system was a true office machine, affordable to organizations such as medium-sized law firms, and easily mastered and operated by secretarial staff. The phrase "word processor" rapidly came to refer to CRT-based machines similar to Wang's.
Numerous machines of this kind emerged, typically marketed by traditional office-equipment companies such as IBM, Lanier (re-badged AES Data machines), CPT, and NBI. All were specialized, dedicated, proprietary systems, with prices in the $10,000 range. Cheap general-purpose personal computers were still the domain of hobbyists. Japanese word processor devices In Japan, even though typewriters for the Japanese writing system had been widely used by businesses and governments, they were limited to specialists who required special skills, owing to the wide variety of characters, until computer-based devices came onto the market. In 1977, Sharp showcased a prototype of a computer-based dedicated word processing device for the Japanese writing system at the Business Show in Tokyo. Toshiba released the first Japanese word processor, the JW-10, in February 1979. The price was 6,300,000 JPY, equivalent to US$45,000. It has been selected as one of the IEEE milestones. The Japanese writing system uses a large number of kanji (logographic Chinese characters), which require 2 bytes to store, so having one key per symbol is infeasible. Japanese word processing became possible with the development of the Japanese input method (a sequence of keypresses, with visual feedback, which selects a character), now widely used in personal computers. Oki launched the OKI WORD EDITOR-200 in March 1979 with this kana-based keyboard input system. In 1980 several electronics and office equipment brands entered this rapidly growing market with more compact and affordable devices. While the average unit price in 1980 was 2,000,000 JPY (US$14,300), it had dropped to 164,000 JPY (US$1,200) by 1985. Even after personal computers became widely available, Japanese word processors remained popular, as they tended to be more portable (an "office computer" was initially too large to carry around), and they became necessities in business and academia, and even for private individuals, in the second half of the 1980s. The phrase "word processor" has been abbreviated as "Wa-pro" or "wapuro" in Japanese. Word processing software The final step in word processing came with the advent of the personal computer in the late 1970s and 1980s and with the subsequent creation of word processing software. Word processing software that would create much more complex and capable output was developed, and prices began to fall, making it more accessible to the public. By the late 1970s, computerized word processors were still primarily used by employees composing documents for large and midsized businesses (e.g., law firms and newspapers). Within a few years, the falling prices of PCs made word processing available for the first time to all writers in the convenience of their homes. The first word processing program for personal computers (microcomputers) was Electric Pencil, from Michael Shrayer Software, which went on sale in December 1976. In 1978 WordStar appeared and, because of its many new features, soon dominated the market. However, WordStar was written for the early CP/M (Control Program–Micro) operating system, and by the time it was rewritten for the newer MS-DOS (Microsoft Disk Operating System), it was obsolete. WordPerfect and its competitor Microsoft Word replaced it as the main word processing programs during the MS-DOS era, although there were less successful programs such as XyWrite. Most early word processing software required users to memorize semi-mnemonic key combinations rather than pressing keys such as "copy" or "bold".
Moreover, CP/M lacked cursor keys; for example, WordStar used the E-S-D-X-centered "diamond" for cursor navigation. However, the price differences between dedicated word processors and general-purpose PCs, and the value added to the latter by software such as the "killer app" spreadsheet applications, e.g. VisiCalc and Lotus 1-2-3, were so compelling that personal computers and word processing software became serious competition for the dedicated machines and soon dominated the market. Then in the late 1980s came innovations such as the advent of laser printers, a "typographic" approach to word processing (WYSIWYG - What You See Is What You Get) using bitmap displays with multiple fonts (pioneered by the Xerox Alto computer and the Bravo word processing program), and graphical user interface techniques such as "copy and paste" (another Xerox PARC innovation, with the Gypsy word processor). These were popularized by Microsoft Word on the IBM PC in 1983, and MacWrite on the Apple Macintosh in 1984; they were probably the first true WYSIWYG word processors to become known to many people. Of particular interest also is the standardization of TrueType fonts used in both Macintosh and Windows PCs. While the publishers of the operating systems provide TrueType typefaces, they are largely gathered from traditional typefaces converted by smaller font publishing houses to replicate standard fonts. Demand arose for new and interesting fonts, which can be found free of copyright restrictions or commissioned from font designers. The growing popularity of the Windows operating system in the 1990s later took Microsoft Word along with it. Originally called "Microsoft Multi-Tool Word", this program quickly became a synonym for "word processor". See also List of word processors References Broad-concept articles
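The kana-based Japanese input method described in the devices section above can be illustrated with a toy sketch: the user types a phonetic kana reading, the system displays numbered conversion candidates with visual feedback, and a keypress selects one. The tiny dictionary and interaction below are invented for illustration; real input methods use large dictionaries and context-sensitive ranking.

```python
# Toy illustration of a kana-to-kanji input method (IME).
# The candidate dictionary here is purely illustrative.

CANDIDATES = {
    "かんじ": ["漢字", "感じ", "幹事"],   # one reading, several possible words
    "とうきょう": ["東京"],
}

def convert(reading):
    """Return conversion candidates for a phonetic (kana) reading."""
    return CANDIDATES.get(reading, [reading])  # fall back to the kana itself

def choose(reading, selection):
    """Simulate the keypress that picks one displayed candidate."""
    options = convert(reading)
    # Visual feedback: show numbered candidates, as early devices did.
    for i, word in enumerate(options, start=1):
        print(f"  {i}: {word}")
    return options[selection - 1]

print(choose("かんじ", 1))   # the user types the reading, presses "1" -> 漢字
```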
Operating System (OS)
934
Human Engineered Software Human Engineered Software (HES, also known as HesWare) was an American home computer software and hardware developer/publisher from 1980 to 1984, concentrating on the Commodore 64 and the Atari 8-bit family. History The company was located in Brisbane, California. It was acquired by Prabhat Jain in 1984 and funded by Microsoft's Bill Gates and Steve Ballmer, Dave Marquardt of TVI, and Prabhat Jain of Video-7/Paradise. Published titles included numerous games as well as educational and productivity programs. Among them were Project Space Station, Mr. TNT, Turtle Graphics by David Malmberg, several Jeff Minter (Llamasoft) games such as Attack of the Mutant Camels and Gridrunner, as well as Hes Games, HesMon, Graphics BASIC, 64Forth (a cartridge-based Forth implementation), and the HesModem and HesModem II. At one point, HES was the largest single-source supplier of software for the Commodore 64. The company was started by Jay Balakrishnan and Cy Shuster in 1980. The company was founded in Balakrishnan's apartment in Los Angeles, where he took down the door to his bedroom, put it across two file cabinets, and used that as a desk for his development (winding the cables around the doorknob). With research into the PET ROM, Balakrishnan wrote the first 8K 6502 assembler, HESbal (HES Basic Assembler Language), in BASIC, and an accompanying text editor, HESedit. Having HESbal allowed numerous creative follow-on products, such as HEScom, software and a user port cable that allowed VIC-20 programs to be saved to a PET disk drive (since the first VIC-20 didn't have a disk drive). Shuster soldered the HEScom cables in his garage and wrote HESlister, a print utility for BASIC programs, which he ported from a TRS-80 Model I to the PET, to the VIC, and later to the IBM PC. HESware published OMNIWRITER, a word processor for the Commodore 64. Game writers Lawrence Holland and Ron Gilbert, later to be famous for their work at LucasArts, started their careers at HES. By early 1984 InfoWorld reported that HES was tied with Broderbund as the world's tenth-largest microcomputer-software company and largest entertainment-software company, with $13 million in 1983 sales. In October 1984, HES filed for bankruptcy. It was reported that Avant Garde Publishing Corp would buy HES in a straight cash deal, but it was later reported that the offer was blocked in bankruptcy court and HES shut down. References Commodore 64 Defunct video game companies of the United States Entertainment companies based in California Software companies based in the San Francisco Bay Area Companies based in San Mateo County, California Software companies established in 1980 Software companies disestablished in 1984 Video game companies established in 1980 Video game companies disestablished in 1984 Companies disestablished in 1984 1980 establishments in California 1984 disestablishments in California Defunct companies based in the San Francisco Bay Area American companies disestablished in 1984 American companies established in 1980
Operating System (OS)
935
Advanced System Optimizer Advanced System Optimizer (formerly Advanced Vista Optimizer) is a software utility for Microsoft Windows developed by Systweak (a company founded in 1999 by Shrishail Rana). It is used to improve computer performance and speed. Advanced System Optimizer has been reviewed by PCWorld, CNET, G2, and Yahoo. Features Advanced System Optimizer has utilities for optimization, speedup, cleanup, memory management, and related tasks. Its utilities include system cleaners, system and memory optimizers, junk file cleaners, privacy protectors, startup managers, security tools and other maintenance tools. It can also repair missing or broken DLLs and includes a file eraser. A "what's recommended" link is used to find problems on the PC, give information on how to speed up the computer, and show the settings of various program features, together with a scheduler. The "Single Click Care" option scans and optimizes all areas of the computer. The program features an "Optimization" tab, which is used for memory optimization and to free up the computer's memory. The startup manager is used to manage programs that load at the computer's startup. The registry cleaner covers 12 categories of registry errors, which it can detect and delete. The 2008 version had over 25 tools. The program can be scheduled to run optimization without the need for user intervention. Reception In a review syndicated to The Washington Post, PC World praised the quality of the suite's design, stating the tools perform as advertised. The reviewer did however note the product's price as one drawback. PC Advisor also praised the package's functionality, but warned readers they would have to decide for themselves whether it is worth the price considering the availability of free alternatives. Alternatives Users have several other choices of tools for optimization and privacy protection on their computers. Some of the alternatives are: SafeSoft PC Cleaner CCleaner References External links Computer system optimization software Shareware Utilities for Windows
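As a generic illustration of the mechanism a Windows startup manager works with (not code from Advanced System Optimizer), the sketch below uses Python's built-in winreg module to list the programs registered to launch at login under the standard Run registry keys:

```python
# Generic illustration of what a startup manager inspects on Windows:
# values under the per-user and machine-wide "Run" registry keys.
# Runs only on Windows; this is not Advanced System Optimizer's code.
import winreg

RUN_KEYS = [
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def list_startup_entries():
    for root, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(root, path)
        except OSError:
            continue                      # key absent or not readable
        with key:
            i = 0
            while True:
                try:
                    name, command, _type = winreg.EnumValue(key, i)
                except OSError:           # no more values under this key
                    break
                print(f"{name}: {command}")
                i += 1

if __name__ == "__main__":
    list_startup_entries()
```

A real startup manager would additionally cover startup folders, services and scheduled tasks, and would let the user disable or delete the entries it finds.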
Open implementation

In computing, open implementation platforms are systems whose implementation is made accessible. Open implementation allows the developers of a program to alter pieces of the underlying software to fit their specific needs. With this technique it is far easier to write general tools, though it makes the programs themselves more complex to design and use. There are also open language implementations, which make aspects of the language implementation accessible to application programmers. (A toy sketch of the idea appears at the end of this entry.)

Open implementation is not to be confused with open source, which allows users to change the implementation source code, rather than adjusting behavior through existing application programming interfaces.

See also

Aspect-oriented programming for a successor concept in research
Metaobject protocol for the primary implementation means
Software architecture for organization of software in general

External links

Links pertaining to open implementation

Free software culture and documents
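As an illustration of the concept above, here is a minimal sketch in Python (a hypothetical API, not taken from any cited system): a mapping class that exposes, alongside its ordinary base-level interface, a small meta-level interface through which clients can retune the underlying implementation strategy without editing its source.

# Hypothetical "open implementation" sketch: the mapping exposes a
# meta-level knob (its storage strategy) alongside its base-level API.
class OpenDict:
    def __init__(self, strategy="hash"):
        self._strategy = strategy
        self._impl = {} if strategy == "hash" else []  # list of (k, v) pairs

    # --- base-level interface: what ordinary clients use ---
    def put(self, key, value):
        if self._strategy == "hash":
            self._impl[key] = value
        else:
            self._impl = [(k, v) for k, v in self._impl if k != key]
            self._impl.append((key, value))

    def get(self, key):
        if self._strategy == "hash":
            return self._impl[key]
        for k, v in self._impl:
            if k == key:
                return v
        raise KeyError(key)

    def items(self):
        return list(self._impl.items()) if self._strategy == "hash" else list(self._impl)

    # --- meta-level interface: lets clients retune the implementation ---
    def set_strategy(self, strategy):
        pairs = self.items()
        self._strategy = strategy
        self._impl = dict(pairs) if strategy == "hash" else list(pairs)

# A client expecting only a handful of keys can switch representation
# through the meta-interface instead of editing OpenDict's source:
d = OpenDict()
d.put("a", 1)
d.set_strategy("assoc")  # an association list may beat hashing for tiny sizes
assert d.get("a") == 1

The design point is exactly the one the article draws against open source: the client never touches OpenDict's source code; it tunes the implementation through a published meta-interface.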
Sprite (operating system)

Sprite is an experimental Unix-like distributed operating system developed at the University of California, Berkeley by John Ousterhout's research group between 1984 and 1992. Its notable features include support for a single system image on computer clusters and the introduction of the log-structured file system. The Tcl scripting language also originated in this project.

Early work

Early work on Sprite was based on the idea of making the operating system more "network aware", and thereby at the same time making the network invisible to the user. The primary area of work was the building of a new network file system which made heavy use of local client-side caching in order to improve performance. After opening the file and some initial reads, the network is only used on demand, and most user actions occur against the cache. Similar utilities allow remote devices to be mapped into the local computer's space, allowing for network printing and similar duties.

Many of the key Unix files are based on the network, including things like the password file. All machines in a network share the root directory as well. Other common Unix utilities, such as finger, were rewritten to make them network aware as well, listing all of the people logged on across the network. This makes a Sprite network appear as if it were a single large time-sharing system, or a single-system image.

Another key addition to Sprite is process migration, which allows programs to be moved between machines at any time. The system maintains a list of machines and their state, and automatically moves processes to idle machines to improve local performance. Processes can also be "evicted" from machines to improve their performance, causing the original starter to move them to another machine on the network, or to take control of them locally again. Long tasks (like compiling the Sprite system) can thereby appear very fast.

Further development

Work on the "early" Sprite outlined above ended around 1987, though refinements continued during the next year. Starting in 1990, Sprite was used as the basis for development of the first log-structured file system (LFS), development of which continued until about 1992. LFS dramatically increases the performance of file writes at the expense of read performance. Under Sprite, this tradeoff is particularly useful because most read access is cached anyway; that is, Sprite systems typically perform fewer reads than a normal Unix system. (A toy model of the log-structured idea appears at the end of this entry.) LFS-like systems also allow for much easier crash recovery, which became a major focus of the project during this period. Additional experimentation on striped file systems, both striped across different machines as well as across clusters of drives, continued until about 1994.

Discontinuation

Sprite was not a microkernel system, and it suffered the same sorts of problems as other Unixes in terms of development complexity, becoming increasingly difficult to develop as more functionality was added. By the 1990s development was flagging, and the small team supporting the project was simply not able to keep up with the rapid changes in Unix taking place during this time. The project was slowly shut down by 1994.

See also

Amoeba (operating system)
Plan 9 from Bell Labs

References

External links

Sprite home page
Booting a Sprite harddisk image on an emulated DECstation 5000/200

Unix variants
Software using the MIT license
Discontinued operating systems
Distributed operating systems
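The log-structured idea referenced above can be sketched in a few lines of Python: writes never update data in place; they append to a log, and an in-memory index records where the newest copy of each file block lives. This is a toy model for illustration only, not Sprite's actual on-disk format.

# Toy log-structured store: all writes append; an index maps
# (file_id, block_no) -> log offset of the newest copy. Stale copies
# remain in the log until a cleaner reclaims them.
class LogFS:
    def __init__(self):
        self.log = bytearray()   # the append-only "disk"
        self.index = {}          # (file_id, block_no) -> (offset, length)

    def write_block(self, file_id, block_no, data: bytes):
        offset = len(self.log)
        self.log += data         # always a sequential write: the LFS win
        self.index[(file_id, block_no)] = (offset, len(data))

    def read_block(self, file_id, block_no) -> bytes:
        offset, length = self.index[(file_id, block_no)]
        return bytes(self.log[offset:offset + length])

fs = LogFS()
fs.write_block(1, 0, b"v1")
fs.write_block(1, 0, b"v2")      # an overwrite appends; old "v1" is now garbage
assert fs.read_block(1, 0) == b"v2"

The sketch also shows why the Sprite tradeoff works: reads must chase the index (slower in general), but if most reads are absorbed by a client cache, the purely sequential writes dominate performance.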
RSTS/E

RSTS is a multi-user time-sharing operating system, initially developed by Evans Griffiths & Hart of Boston and acquired by Digital Equipment Corporation (DEC, now part of Hewlett Packard) for the PDP-11 series of 16-bit minicomputers. The first version of RSTS (RSTS-11, Version 1) was implemented in 1970 by DEC software engineers who had developed the TSS-8 time-sharing operating system for the PDP-8. The last version of RSTS (RSTS/E, Version 10.1) was released in September 1992. RSTS-11 and RSTS/E are usually referred to simply as "RSTS", and this article will generally use the shorter form.

Acronyms and abbreviations

BTSS (Basic Time Sharing System – never marketed) – the first name for RSTS.
CCL (Concise Command Language) – equivalent to a command to run a program kept in the Command Line Interpreter.
CIL (Core Image Library) – similar to a shared library (.so) on Linux or a .DLL on Microsoft Windows.
CILUS (Core Image Library Update and Save) – a program to manipulate a CIL file.
CLI (Command Line Interpreter) – see Command-line interface.
CUSPs (Commonly Used System Programs) – system management applications, like Task Manager or Registry Editor on Microsoft Windows. On RSTS-11, CUSPs were written in BASIC-Plus just like user programs.
DCL (Digital Command Language) – see DIGITAL Command Language.
DTR (DATATRIEVE) – a programming language.
FIP (File Information Processing) – the resident area for issuing file requests.
FIRQB (File Information Request Queue Block) – a data structure containing information about file requests.
KBM (Keyboard Monitor) – analogous to a Command Line Interpreter.
LAT (Local Area Transport) – Digital's predecessor to TCP/IP.
MFD (Master File Directory) – the root directory of the file system.
PBS (Print Batch Services)
PIP (Peripheral Interchange Program)
PPN (Project Programmer Number) – analogous to GID and UID in Unix.
RDC (Remote Diagnostics Console) – a replacement front panel for a PDP-11 which used a serial connection to the console terminal, or a modem, instead of lights and toggle switches to control the CPU.
RSTS-11 (Resource Sharing Time Sharing System) – the first commercial product name for RSTS.
RSTS/E (Resource Sharing Timesharing System Extended) – the current implementation of RSTS.
RTS (Run Time System) – a read-only segment of code provided by the supplier, mapped into the high end of a 32K, 16-bit word address space, which a user program would use to interface with the operating system. Only one copy of an RTS would be loaded into RAM, but it would be mapped into the address space of any user program that required it. In essence, this was shared, re-entrant code, reducing RAM requirements by sharing the code among all programs that required it.
RTSS (Resource Time Sharing System – never marketed) – the second name for RSTS.
SATT (Storage Allocation Truth Table) – a series of 512-byte blocks on every disk that indicated whether each block or cluster on the whole disk was allocated. Bitwise, a 1 indicated a cluster in use; a 0 indicated one not in use.
SIL (Save Image Library) – the new name for a CIL file after DEC started selling PDP-11 systems with all-semiconductor memory and no magnetic-core memory, such as the PDP-11T55.
SILUS (Save Image Library Update and Save) – the new name for CILUS after CIL files were renamed SIL files.
UFD (User File Directory) – a user's home directory, cataloged under the root directory of the file system.
XRB (Transfer Request Block) – a data structure containing information about the other types of system requests that do not use FIRQBs to convey their information.

Development

1970s

The kernel of RSTS was programmed in the assembly language MACRO-11, compiled and installed to a disk using the CILUS program, running on a DOS-11 operating system. RSTS booted into an extended version of the BASIC programming language which DEC called "BASIC-PLUS". All of the system software CUSPs for the operating system, including the programs for resource accounting, login, logout, and managing the system, were written in BASIC-PLUS.

From 1970 to 1973, RSTS ran in only 56K bytes of magnetic-core memory (64 kilobytes including the memory-mapped I/O space). This allowed a system to have up to 16 terminals, with a maximum of 17 jobs. The maximum program size was 16K bytes. By the end of 1973, DEC estimated there were 150 licensed systems running RSTS.

In 1973, memory management support was included in RSTS (now RSTS/E) for the newer DEC PDP-11/40 and PDP-11/45 minicomputers (the PDP-11/20 was only supported under RSTS-11). The introduction of memory management in the newer PDP-11 computers not only meant these machines were able to address four times the amount of memory (18-bit addressing, 256K bytes); it also paved the way for the developers to separate user-mode processes from the core of the kernel.

In 1975, memory management support was again updated, for the newer 22-bit addressable PDP-11/70. RSTS systems could now be expanded to use as much as two megabytes of memory, running up to 63 jobs. The RTS and CCL concepts were introduced, although they had to be compiled in during "SYSGEN". Multi-terminal service was introduced, which allowed a single job the ability to control multiple terminals (128 total). Large-message send/receive and interprocess communication became very sophisticated and efficient. By August there were 1,200 licensed systems.

In 1977, the installation process for RSTS was no longer dependent on DOS-11. The RSTS kernel could now be compiled under the RT-11 RTS, formatted as a kernel file with RT-11 SILUS, and copied to the system or other disks while the computer was time-sharing. The BASIC-PLUS RTS (as well as RT-11, RSX-11, TECO and third-party RTSs) all ran as user-mode processes, independent of the RSTS kernel. A systems manager could now decide during the bootstrap phase which RTS to run as the system's default KBM. By now, there were some 3,100 licensed systems.

In 1978, the final memory management update was included, for all machines that could support 22-bit addressing. RSTS could now use the maximum amount of memory available to a PDP-11 (4 megabytes). Support was also included for SUPERVISORY mode, which made RSTS the first DEC operating system with this capability. DECnet was also supported, as were remote diagnostics by field service technicians at the RDC in Colorado Springs, Colorado (a DEC subscription service). By the end of the decade, there were over 5,000 licensed systems.

1980s

In 1981, support for separate instruction and data space for users with Unibus machines (PDP-11/44, PDP-11/45, PDP-11/55 and PDP-11/70) provided an extension to the memory constraints of an individual program. Compiling programs to use separate instruction and data space would soon give a program up to 64 kB for instructions and up to 64 kB for buffering data. The DCL RTS was included, as well as support for the newer revision of DECnet III.
By 1983, with an estimated 15,000 DEC machines running RSTS/E, V8.0-06 included support for the smallest 18-bit PDP-11 sold by DEC (the MicroPDP-11). A pre-generated kernel and CUSPs were included in this distribution to make installation on the MicroPDP-11 easier. DEC sold the pre-generated version on the MicroPDP-11 as MicroRSTS at a reduced price; users needed to purchase the full version if they had a need to generate their own kernel. The file system was upgraded and given the designation RSTS Directory Structure 1 (RDS1); all previous versions of the RSTS file system were given the designation RDS0. The newer file system was designed to support more than 1700 user accounts. "It is now thought that there are well over 10,000 licensed users and at least an equal number of unlicensed users!"

From 1985 to 1989, RSTS became a mature product in the Version 9 revisions. DCL was installed as the primary RTS, and the file system was again upgraded (now RDS1.2) to support new user account features. Passwords were now encrypted using a modified DES algorithm, instead of being limited to six (6) characters stored in DEC Radix-50 format. Before Version 9, there was one non-user system account in project (group) zero (its designation is [0,1]), and all accounts in project number 1 were privileged (not unlike the root account on Unix systems). After Version 9 was released, additional accounts could be created in project zero, and multiple privileges could be individually set for any account.

Support for the LAT protocol was included, as well as the ability to run the newest version of DECnet IV. These network enhancements gave any user connected to a terminal through a DECserver the ability to communicate with a RSTS machine just as easily as with a VAX running VMS. The DCL command structure shared between DEC operating systems also contributed to the familiar look and feel:

This is not just another pseudo command file processor; it is based on VMS features. The DCL command file processor is fully supported and integrated in RSTS through extensive changes to DCL and the monitor. DCL executes command files as part of your job; therefore, no pseudo keyboard or forcing of commands to your keyboard is necessary (as with ATPK).

1990s

In 1994, DEC sold most of its PDP-11 software business to Mentec. Digital continued to support its own PDP-11 customers for a short period afterwards with the assistance of Mentec staff.

In 1997, Digital and Mentec granted anyone wishing to use RSTS 9.6 or earlier for non-commercial, hobby purposes a no-cost license. The license is only valid on the SIMH PDP-11 emulator, and it also covers some other Digital operating systems. Copies of the license are included in the authorized software kit available for download on the official website of the SIMH emulator.

Documentation

The standard complement of documentation manuals that accompanies a RSTS distribution consists of at least 11 large three-ring binders (collectively known as "the orange wall"), one small three-ring binder containing the RSTS/E Quick Reference Guide, and a paperback copy of Introduction to BASIC AA-0155B-TK.
The 11 three-ring binders contain:

Volume 1: General Information and Installation – Documentation Directory; Release Notes; Maintenance Notebook; System Installation and Update Guide
Volume 2: System Management – System Manager's Guide
Volume 3: System Usage – System User's Guide; Guide to Writing Command Procedures
Volume 4: Utilities – Utilities Reference Manual; Introduction to the EDT Editor; SORT/MERGE User's Guide; RUNOFF User's Guide
Volume 4A: Utilities – EDT Editor Manual
Volume 4B: Utilities – Task Builder Reference Manual; Programmer's Utilities Manual; RT11 Utilities Manual; TECO User's Guide
Volume 5: BASIC-PLUS – BASIC-PLUS Language Manual
Volume 6: System Programming – Programming Manual
Volume 7: MACRO Programming – System Directives Manual; ODT Reference Manual
Volume 7A: MACRO Programming – MACRO-11 Language Manual; RMS-11 MACRO Programmer's Guide
Volume 8: RMS – RMS-11: An Introduction; RMS11 User's Guide; RMS-11 Utilities

Operation

Communication

RSTS uses a serial communication connection to interact with the operator. The connection might be a local computer terminal with a 20 mA current-loop interface, an RS-232 interface (either a local serial port or a remote connection via modem), or an Ethernet connection utilizing DECnet or LAT. As many as 128 terminals (using multi-terminal service) could connect to a RSTS system, running under a maximum of 63 jobs (depending on the processor being used, the amount of memory and disk space, and the system load). Most RSTS systems had nowhere near that many terminals. Users could also submit jobs to be run in batch mode. There was also a batch program called "ATPK" that allowed users to run a series of commands on an imaginary terminal (pseudo-terminal) in semi-interactive mode, similar to batch commands in MS-DOS.

Login [Project, Programmer]

Users connected to the system by typing the LOGIN command (or HELLO) at a logged-out terminal and pressing return. Actually, typing any command at a logged-out terminal simply started the LOGIN program, which then interpreted the command. If it was one of the commands that a user who is not yet logged in ("logged out") is allowed to use, then the associated program for that command was CHAINed to; otherwise the message "Please say HELLO" was printed on the terminal. Prior to Version 9, a user could also initiate a one-line login; however, this left the user's password on the screen for anyone else in the room to view (examples follow):

Bye
HELLO 1,2;SECRET
Ready

or

I 1,2;SECRET
Ready

or

LOGIN 1,2;SECRET
Ready

One could determine the status of a terminal from the command responses printed by the command interpreter. A logged-in user communicating with the BASIC-PLUS KBM was given the prompt "Ready", and a user who is logged out is given the prompt "Bye".

A user would log in by supplying their PPN number and password. User numbers consisted of a project number (the equivalent of a group number in Unix), a comma, and a programmer number. Both numbers were in the range of 0 to 254, with special exceptions. When specifying an account, the project and programmer number were enclosed in brackets. A typical user number could be [10,5] (project 10, programmer 5), [2,146], [254,31], or [200,220]. When a user was running a system program while logged out (because the system manager had enabled it), their PPN number was [0,0], which would appear in the SYSTAT CUSP as **,**; thus [0,0] is not a valid account number.
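The PPN rules just described (the bracket syntax, the 0 to 254 ranges, and the pre-Version 9 privilege convention) are simple enough to capture in a short Python sketch; this is purely illustrative, not any RSTS utility.

import re

def parse_ppn(text: str):
    # Parse "[proj,prog]" into a (project, programmer) pair per the
    # rules above: both numbers must lie in 0-254 (special exceptions aside).
    m = re.fullmatch(r"\[(\d+),(\d+)\]", text.strip())
    if not m:
        raise ValueError("expected [project,programmer]")
    proj, prog = int(m.group(1)), int(m.group(2))
    if not (0 <= proj <= 254 and 0 <= prog <= 254):
        raise ValueError("project and programmer must be 0-254")
    return proj, prog

def is_privileged(ppn, pre_v9=True):
    # Before Version 9, every account in project 1 was privileged;
    # afterwards privileges were granted per account by the manager,
    # so the PPN alone no longer decides (returns None in that case).
    proj, _ = ppn
    return proj == 1 if pre_v9 else None

assert parse_ppn("[10,5]") == (10, 5)
assert is_privileged(parse_ppn("[1,2]"))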
System and user accounts

In every project, the programmer number 0 was usually reserved as a group account, as it could be referenced by the special symbol #. If one's user number were [20,103], a reference to a file name beginning with "#" would refer to a file stored in the account of user number [20,0]. This feature was useful in educational environments: programmer number 0 could be issued to the instructor of a class, the individual students given accounts with the same project number, and the instructor could store in his account files marked as shared only for that project number (that is, for the students in that class only, and no others).

Two special classes of project numbers existed. Project number 0 is generally reserved for system software, and prior to Version 9 there was only one project 0 account (named [0,1]). Programmers in project number 1 were privileged accounts, equivalent to the single account "root" on Unix systems, except that the account numbers [1,0] through [1,254] were all privileged accounts. After Version 9 was released, any account could be granted specific privileges by the systems manager.

The account [0,1] is used to store the operating system file itself, all run-time library systems, and certain system files relating to booting the system (author's comments appear on the right in bold):

DIR [0,1]
 Name .Ext     Size   Prot    Date       SY:[0,1]
BADB  .SYS        0P  < 63> 06-Jun-98    List of bad blocks
SATT  .SYS        3CP < 63> 06-Jun-98    Bitmap of allocated disk storage
INIT  .SYS      419P  < 40> 06-Jun-98    Operating system loader program
ERR   .ERR       16CP < 40> 06-Jun-98    System error messages
RSTS  .SIL      307CP < 60> 06-Jun-98    Operating system itself
BASIC .RTS       73CP < 60> 06-Jun-98    BASIC-PLUS run time system
RT11  .RTS       20C  < 60> 06-Jun-98    RT-11 run time system
SWAP  .SYS     1024CP < 63> 06-Jun-98    System swap file
CRASH .SYS       35CP < 63> 06-Jun-98    System crash dump
RSX   .RTS       16C  < 60> 23-Sep-79    RSX-11 run-time system
TECO  .RTS       39C  < 60> 24-Sep-79    TECO text editor

Total of 1952 blocks in 11 files in SY:[0,1]

(Editor's note: this directory listing predates Version 9.)

The DIR command is an installed CCL equivalent to a RUN command for the DIRECT program. [0,1] is the account number (and directory name) of the operating system storage account; it would be referred to as "project number 0, programmer number 1". The numbers shown after each file represent its size in disk blocks, a block being 512 bytes or 1/2 kilobyte (K). "C" indicates the file is contiguous (stored as one piece without being separated, similar to files on a Microsoft Windows system after a drive has been defragmented), while "P" indicates it is specially protected (it cannot be deleted, even by a privileged user, unless the P bit is cleared by a separate command). The numbers in angle brackets (like "< 40>") represent the protections for the file, always displayed in decimal. Protections indicate whether the file may be seen by any other user or by other users with the same project number, whether the file is read-only or may be altered by another user, and whether the file may be executed by an ordinary user, giving them additional privileges. These protection codes are very similar to the r, w and x protections in Unix and similar operating systems such as BSD and Linux. Code 60 is equivalent to a private file, code 63 is a private non-deletable file, and 40 is a public file. Library files are kept in account [1,1], which is usually referenced by the logical name LB:.
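The decimal protection codes in the listing above decompose into bits much as Unix rwx bits do. The bit assignments in the following Python sketch follow the commonly documented RSTS scheme (read/write protection against owner, project, and world, plus an execute bit); treat the exact values as an assumption rather than a citation, though they are consistent with the 40, 60, 63 and 124 values shown in this article.

# Assumed RSTS/E protection bits (verify against the RSTS documentation):
#   1 read-protect owner     2 write-protect owner
#   4 read-protect project   8 write-protect project
#  16 read-protect world    32 write-protect world
#  64 executable (runnable program)
def decode_protection(code: int) -> str:
    flags = []
    for bit, name in [(1, "owner:no-read"), (2, "owner:no-write"),
                      (4, "project:no-read"), (8, "project:no-write"),
                      (16, "world:no-read"), (32, "world:no-write"),
                      (64, "executable")]:
        if code & bit:
            flags.append(name)
    return ", ".join(flags) or "unprotected"

print(decode_protection(60))   # private: project and world locked out
print(decode_protection(40))   # public: everyone may read, only owner writes
print(decode_protection(63))   # owner write-protected too, hence non-deletable
print(decode_protection(124))  # 60 + 64: a private executable (cf. the <124> files later)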
The account [1,2] is the system startup account (much like a Unix system starting up under root) and contains the system CUSPs, which could be referenced by prefixing the CUSP name with a dollar sign ($). "!" is used for account [1,3], "%" for [1,4], and "&" for [1,5]. The account [1,1] also had the special privilege of being the only account from which a logged-in user was permitted to execute the POKE system call to put values into any memory in the system. Thus the account number [1,1] is the closest equivalent to "root" on Unix-based systems.

Run-time environments

One of the features of RSTS is the mechanism for executing programs and the environments used to run them. The various environments allowed for programming in BASIC-PLUS, the enhanced BASIC-Plus-2, and more traditional programming languages such as COBOL and FORTRAN. These environments were separate from each other, such that one could start a program from one environment, have the system switch to a different environment while running a different program, and then return the user to the original environment they started with. These environments were referred to as RTSs, and the term for the command line interface that most of these RTSs provided was the KBM. Prior to Version 9, the systems manager needed to define which RTS the system would start under, and it had to be one that could execute compiled programs.

A systems manager could also install special CCL commands, which take precedence over all KBM commands (with the exception of DCL). A CCL is analogous to a shortcut to a program on a Windows system or a symbolic link on Unix-based systems. CCLs are installed as memory-resident commands either during startup, or dynamically while the system is running, by a systems manager (i.e., they are not permanent like a disk file).

When logged in, a user can "SWITCH" to any of these environments, type language statements in the BASIC-PLUS programming language, issue RUN commands to specific programs, or issue a special command called a CCL to execute a program with command options. Most RSTS systems managers generated the kernel to include the "Control-T" one-line status option, which could tell users what program they were running, under what RTS, how much memory the program was taking, how much it could expand to, and how much memory the RTS was using.

BASIC-PLUS

Programs written in BASIC-PLUS ran under the BASIC RTS, which allowed them up to 32K bytes of memory (out of 64K total). The language was interpreted, each different keyword being internally converted to a unique byte code and the variables and data being indexed and stored separately within the memory space. The internal byte-code format was known as PCODE; when the interactive SAVE command was issued, the BASIC-PLUS RTS simply saved the working memory area to a disk file with a ".BAC" extension. Although this format was undocumented, two Electronic Engineering undergraduates from Southampton University in the UK (Nick de Smith and David Garrod) developed a decompiler that could reverse-engineer BAC files into their original BASIC-PLUS source, complete with original line numbers and variable names (both subsequently worked for DEC). The rest of the memory was used by the BASIC RTS itself. If one wrote programs in a language that permitted true binary executables, such as BASIC-Plus-2, FORTRAN-IV, or MACRO assembler, then the amount of memory available would be 56K (8K being allocated to the RTS).
The standard BASIC-PLUS prompt is the "Ready" response; pressing Control-T displays a one-line status (example):

DCL (Digital Command Language)

Starting with Version 9, DCL became the primary startup RTS, even though it does not have the ability to execute binary programs. This became possible with the advent of the disappearing RSX RTS (see below). DCL was incorporated into all of the recent versions of DEC's operating systems (RSX-11, RT-11, VMS, and later OpenVMS) for compatibility. The standard DCL prompt is the dollar sign "$" (example):

$ write 0 "Hello World, it is "+F$TIME()
Hello World, it is 01-Jan-08 10:20 PM
$ inquire p1 "Press Control-T for 1 line status:"
Press Control-T for 1 line status:
1 KB0 DCL+DCL KB(0R) 4(8)K+24K 0.1(+0.1) -8
$ set verify/debug/watch
$ show memory
(show memory)
(SYSTAT/C)
Memory allocation table:
 Start    End    Length    Permanent   Temporary
    0K -   85K  (  86K)    MONITOR
   86K - 1737K  (1652K)                (User)
 1738K - 1747K  (  10K)    (User)      DAPRES LIB
 1748K - 1751K  (   4K)    (User)      RMSRES LIB
 1752K - 2043K  ( 292K)    ** XBUF **
 2044K - *** END ***
$

RSX (Realtime System eXecutive)

Programs written for the RSX RTS, such as COBOL, MACRO assembler, or later releases of BASIC-Plus-2, could utilize the maximum amount of memory available to a binary program (56K, due to the requirement that an RTS use the top 8K for itself). RSTS Version 7 and later allowed the RSX RTS to be included in the kernel, making it completely "disappear" from the user address space, thus allowing 64K bytes of memory for user programs. Programs got around the limitations on available memory by using libraries (when permissible), by complicated overlay strategies, or by calling other programs ("chaining") and passing them commands in a shared memory area called "core common", among other practices. When RSX is the default KBM, the standard RSX prompt (both logged in and logged out) is ">" (or MCR, "Monitor Console Routine") (example):

>run
Please type HELLO
>HELLO 1,2;SECRET
>run
?What?
>help
Valid keyboard commands are:
ASSIGN   DISMOUNT  HELP      RUN      UNSAVE
BYE      EXIT      MOUNT     SHUTUP
DEASSIGN HELLO     REASSIGN  SWITCH
>run CSPCOM
CSP>HWORLD=HWORLD
CSP>^Z
>RUN TKB
TKB>HWORLD=HWORLD,LB:CSPCOM.OLB/LB
TKB>//
>run HWORLD.TSK
Hello World
Press Control-T for 1 line status:
1 KB0 HWORLD+...RSX KB(0R) 7(32)K+0K 0.8(+0.2) +0
>DIR HWORLD.*/na/ex/si/pr
SY:[1,2]
HWORLD.BAS     1   < 60>
HWORLD.BAC     7C  <124>
HWORLD.OBJ     2   < 60>
HWORLD.TSK    25C  <124>
Total of 35 blocks in 4 files in SY:[1,2]
>

RT-11

The RT-11 RTS emulated the single-job version of the RT-11 distribution. Like the RSX emulation, RT-11 occupied the top 8K of memory, leaving the bottom 56K for CUSPs and programs written in FORTRAN-IV or MACRO assembler. When RT-11 is the default KBM, the standard RT-11 prompt (both logged in and logged out) is "." (example):

.VERSION
Please type HELLO
.HELLO 1,2;SECRET
.VERSION
RT-11SJ V3-03; RSTS/E V8.0
.R PIP
*HWORLD.MAC=KB:
        .MCALL .TTYIN,.PRINT,.EXIT
HWORLD: .ASCII /Hello World/<15><12>
        .ASCIZ /Press Control-T for 1 line status:/
        .EVEN
Start:  .PRINT #HWORLD
        .TTYIN
        .EXIT
        .END START
^Z
*^Z
.R MACRO
*HWORLD=HWORLD
*^Z
.R LINK
*HWORLD=HWORLD
*^Z
.R HWORLD.SAV
Hello World
Press Control-T for 1 line status:
1 KB0 HWORLD+RT11 KB(0R) 2(28)K+4K 0.6(+0.2) +0
.DIR HWORLD.*/na/ex/si/pr
SY:[1,2]
HWORLD.BAS     1   < 60>
HWORLD.BAC     7C  <124>
HWORLD.TSK    25C  <124>
HWORLD.MAC     1   < 60>
HWORLD.OBJ     1   < 60>
HWORLD.SAV     2C  <124>
Total of 37 blocks in 6 files in SY:[1,2]
.
TECO (Text Editor and COrrector)

The TECO editor was itself implemented as an RTS, to maximize the amount of memory available for the editing buffer, and also because it was first implemented in RSTS V5B, before the release of the general-purpose run-time systems (RSX and RT-11). TECO was the only RTS distributed with RSTS that did not contain a built-in KBM; the user would start TECO (like any other program) by running a TECO program (TECO.TEC). TECO and the related QEDIT were the direct ancestors of the first Unix-based text editor, ed.

Most RSTS systems used CCLs to create a file (MAKE filespec), edit a file (TECO filespec), or run a TECO program (MUNG filespec, data). The following program is an example of how TECO could be used to calculate pi (here set to 20 digits):

Ready

run TECO
*GZ0J\UNQN"E 20UN ' BUH BUV HK QN< J BUQ QN*10/3UI QI< \ +2*10+(QQ*QI)UA B L K QI*2-1UJ QA/QJUQ QA-(QQ*QJ)-2\ 10@I// -1%I > QQ/10UT QH+QT+48UW QW-58"E 48UW %V ' QV"N QV^T ' QWUV QQ-(QT*10)UH > QV^T @^A/ /HKEX$$
31415926535897932384

Ready

RSTS Easter eggs

System start-up (INIT.SYS)

If a user typed an unrecognized command at system boot to the "Option:" prompt of INIT.SYS, the startup utility, the message "Type 'HELP' for help" was displayed. If the user subsequently typed 'HELP' (including the quotes) at the prompt, the response was "How amusing, anyway..." followed by the actual help message.

PDP-11 console lights

One of the nicer features a system manager could compile into the kernel was a rotating display pattern that gave the illusion of two snakes chasing each other around the console lights. The normal kernel gave the illusion of one snake moving from right to left in the data lights across the bottom. If the system manager also compiled in the "lights" object module, the user would see an additional snake moving from left to right in the address lights across the top. This was accomplished by using supervisory mode in the versions prior to 9.0. RSX also had a similar display pattern, which appeared as if two snakes were playing chicken and would run into each other at the center of the console.

TECO Easter egg

The command 'make' allowed a user to make a text file and automatically enter the TECO text editor. If a user typed 'make love', the system created a file called 'love' and typed back, 'Not War?'

Open files list

Kevin Herbert, later working for DEC, added an undocumented feature in the 1990s that allowed a user to enter ^F to see a list of the open files of the user's process, complete with blocks in use and file sizes.

Stardate

Beginning with version 9.0, an undocumented feature allowed the system manager to change the display of the system date. RSTS thereby became the first operating system to display the system date as a set of numbers representing a stardate, as commonly known from the TV series Star Trek.

Add-ons by other companies

System Industries bought the only source license for RSTS to implement an enhancement called SIMACS (SImultaneous Machine ACceSs), which allowed their special disk controller to set a semaphore flag for disk access, allowing multiple WRITEs to the same files on a RSTS system where the disk is shared by multiple PDP-11 RSTS systems. This feature was implemented in System Industries controllers attached to many DEC computers and was designed by Dr. Albert Chu while he worked at System Industries. The main innovation was the use of a semaphore, a flag to indicate which processor, by cooperative sharing, holds exclusive write access.
This required many changes to the way the RSTS operating system accessed disks. The FIPS (File Information Processing System), which handled I/O access, was single-threaded in RSTS. To allow a disk access to stall while another machine had active access to a block, the FIPS had to be able to time out a request, go on to the next request, and "come back" to the stalled one in round-robin fashion. The code to allow this was written by Philip Hunt while working at System Industries in Milpitas, California; he eventually worked for Digital Equipment in the New England area in the late 1980s and early 1990s. SIMACS was not limited to the PDP-11 product line; VAXen could also use it.

RSTS emulations

ROSS/V

In 1981, Evans Griffiths & Hart marketed the ROSS/V product. ROSS/V allowed all user-mode processes of RSTS (CUSPs, RTSs and user programs) to run unmodified under VMS on the VAX-11 machines. The code for this emulation handled all of the kernel processes that would normally be handled by a RSTS kernel running on a PDP-11. (The original BASIC-PLUS language that has carried through all versions of RSTS had itself been developed under subcontract by Evans Griffiths & Hart, Inc. for a fixed price of $10,500.)

Other PDP-11 emulators

RSTS and its applications can run under any PDP-11 emulator. For more information, see PDP-11.

RSTS mascot

Spike and Albert

Versions

RSTS was originally called BTSS (Basic Time Sharing System). Before shipment actually began, the name was changed from BTSS to RTSS because a product called BTSS was already being marketed by Honeywell. A simple typing mistake then changed the name from RTSS to RSTS. The addition of new memory management support, and the ability to install more memory, in the PDP-11/40 and PDP-11/45 led to another name change: RSTS-11 now became RSTS/E.

Clones in the USSR

DOS-KP ("ДОС-КП")

Applications

Computer bureaus sometimes deployed User-11 for RSTS/E-based data management.

See also

Asynchronous System Trap
BASIC-Plus-2
Concise Command Language
DATATRIEVE
DECnet
Front panel
Kevin Mitnick
Local Area Transport
Octal Debugging Technique
QIO
Record Management Services
Runtime system
SYSTAT (command)
Time-sharing
Time-sharing system evolution

References

External links

Elvira at The Royal Institute of Technology in Stockholm, Sweden
RSTS Hobbyist Site
SimH web page
Wofford Witch

DEC operating systems
PDP-11
Discontinued operating systems
Time-sharing operating systems
Assembly language software
1970 software
ISO 13490

ISO/IEC 13490 (also known as ECMA-168) is the successor to ISO 9660 (level 3), intended to describe the file system of a CD-ROM or CD-R.

ISO 13490 has several improvements over its predecessor. It fully addresses the filename, POSIX attribute, and multibyte character issues that were not handled by ISO 9660. It is also a more efficient format, permits incremental recording, and permits both the ISO 9660 format and the ISO/IEC 13490 format to coexist on the same media. It also specifies how to use multisession properly. It is derived from the proposal of the Frankfurt Group (formed in 1990 by many CD-ROM and CD-WO hardware and media manufacturers, CD-ROM data publishers, users of CD-ROMs, and major computer companies) and fully supports Orange Book media.

Multiple session overview

ISO 13490 defines a rule telling operating systems how to read a multiple-session ISO 9660 volume from a CD-R. Instead of looking for the volume descriptor at offset 32,768 (sector number 16 on a CD) from the start of the disc (which would be the default behavior in ISO 9660), programs accessing the disc should start reading from the 16th sector in the first track of the latest session. Sector numbers form a contiguous sequence starting at the first session, and continue over added sessions and their gaps. (A small sketch of this rule appears at the end of this entry.)

Hence, if a CD mastering program wants to add a single file to a CD-R that has an ISO 9660 volume, it has to append a session containing at least an updated copy of the entire directory tree, plus the new file. The duplicated directory entries can still reference the data files in the previous session(s). In a similar way, file data can be updated or even removed. Removal is, however, only virtual: the removed content no longer appears in the directory shown to the user, but it is still physically present on the disc. It can therefore be recovered, and it takes up space (such that the CD can become full even while appearing to still have unused space).

Support

Though multisession support was originally intended to apply only to Mode 2 Form 1 formatted discs, some CD writing software supported multisession writing to Mode 1 format discs. Since only some of the early disc drives supported multisession Mode 1 discs, in many cases the second and following sessions would become unreachable in some drives.

Some older CD writing software, such as Nero Burning ROM, would not import previous-session data from an inserted disc. It could thus only write a subsequent session to a disc on the same computer that had written all the previous sessions, and then only if the previous session data was saved before the writing software was closed down.

See also

Universal Disk Format (UDF), based on ISO/IEC 13346 (also known as ECMA-167)
Write Once Read Many (WORM)

References

External links

ECMA-168

Disk file systems
13490
Ecma standards
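In concrete terms, the multisession lookup rule above simply relocates the ISO 9660 anchor: instead of a fixed 32,768-byte offset from the disc start, the reader seeks 16 sectors past the first track of the last session. A minimal Python sketch, assuming 2,048-byte user-data sectors (Mode 1 / Mode 2 Form 1):

SECTOR_SIZE = 2048        # user bytes per Mode 1 / Mode 2 Form 1 sector
VD_SECTOR_OFFSET = 16     # volume descriptors begin 16 sectors into the track

def volume_descriptor_offset(session_start_lba: int) -> int:
    # Byte offset of the first volume descriptor for a session whose
    # first track begins at session_start_lba. A value of 0 (single
    # session) reproduces the classic ISO 9660 offset of 32768.
    return (session_start_lba + VD_SECTOR_OFFSET) * SECTOR_SIZE

assert volume_descriptor_offset(0) == 32768   # plain ISO 9660 disc
print(volume_descriptor_offset(166_000))      # anchor of a later session (example LBA)

The session start LBA itself is not on the disc's file system; a real reader would obtain it from the drive via a read-TOC or read-session command, which is why sector numbering can remain contiguous across session gaps.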
ISO/IEC 8859-7

ISO/IEC 8859-7:2003, Information technology — 8-bit single-byte coded graphic character sets — Part 7: Latin/Greek alphabet, is part of the ISO/IEC 8859 series of ASCII-based standard character encodings, first published in 1987. It is informally referred to as Latin/Greek. It was designed to cover the modern Greek language.

The original 1987 version of the standard had the same character assignments as the Greek national standard ELOT 928, published in 1986. The table in this article shows the updated 2003 version, which adds three characters (0xA4: euro sign U+20AC, 0xA5: drachma sign U+20AF, 0xAA: Greek ypogegrammeni U+037A).

Microsoft has assigned code page 28597 (a.k.a. Windows-28597) to ISO-8859-7 in Windows. IBM has assigned code page 813 to ISO 8859-7. (IBM CCSID 813 is the original encoding; CCSID 4909 adds the euro sign; CCSID 9005 further adds the drachma sign and the ypogegrammeni.)

ISO-8859-7 is the IANA preferred charset name for this standard (formally for the 1987 version, but in practice there is no problem using it for the current version, as the changes are pure additions to previously unassigned codes) when supplemented with the C0 and C1 control codes from ISO/IEC 6429.

Unicode is preferred for Greek in modern applications, especially as the UTF-8 encoding on the Internet. Unicode provides many more glyphs for complete coverage; see Greek alphabet in Unicode and Ancient Greek Musical Notation for tables. (A short round-trip example appears at the end of this entry.)

Codepage layout

See also

Windows-1253
ISO 5428
ELOT 927

References

External links

ISO/IEC 8859-7:1999 – 8-bit single-byte coded graphic character sets, Part 7: Latin/Greek alphabet (draft dated June 10, 1999; superseded by ISO/IEC 8859-7:2003, published October 10, 2003)
Standard ECMA-118: 8-Bit Single-Byte Coded Graphic Character Sets – Latin/Greek Alphabet (December 1986)
ISO-IR 126 Right-hand Part of Latin/Greek Alphabet (November 30, 1986; superseded by ISO-IR 227)
ISO-IR 227 Right-hand Part of Latin/Greek Alphabet (July 28, 2003)

ISO/IEC 8859
Computer-related introductions in 1987
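Python's bundled iso8859_7 codec tracks the Unicode consortium's mapping for this standard, which follows the 2003 table (including the euro and drachma additions), so round-tripping modern Greek text is a one-liner. The behavior below is what current CPython exhibits, but verify on your interpreter:

text = "Γεια σου, Κόσμε"                  # "Hello, World" in Greek
raw = text.encode("iso8859_7")            # one byte per character
assert raw.decode("iso8859_7") == text

# 2003 additions: these would fail under a strict 1987-era table
assert "€".encode("iso8859_7") == b"\xa4"   # euro sign, added 2003
assert "₯".encode("iso8859_7") == b"\xa5"   # drachma sign, added 2003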
System of systems

A system of systems is a collection of task-oriented or dedicated systems that pool their resources and capabilities together to create a new, more complex system which offers more functionality and performance than simply the sum of the constituent systems. Systems of systems currently form a critical research discipline for which frames of reference, thought processes, quantitative analysis, tools, and design methods are incomplete. The methodology for defining, abstracting, modeling, and analyzing system-of-systems problems is typically referred to as system of systems engineering.

Overview

Commonly proposed descriptions (not necessarily definitions) of systems of systems are outlined below, in order of their appearance in the literature:

Linking systems into joint systems of systems allows for the interoperability and synergism of Command, Control, Computers, Communications and Information (C4I) and Intelligence, Surveillance and Reconnaissance (ISR) systems: a description from the field of information superiority in the modern military.
Systems of systems are large-scale concurrent and distributed systems whose components are complex systems themselves: a description from the field of communicating structures and information systems in private enterprise.
System-of-systems education involves the integration of systems into systems of systems that ultimately contribute to the evolution of the social infrastructure: a description from the field of educating engineers on the importance of systems and their integration.
System-of-systems integration is a method to pursue development, integration, interoperability and optimization of systems to enhance performance in future battlefield scenarios: a description from the field of information-intensive systems integration in the military.
Modern systems that comprise system-of-systems problems are not monolithic; rather, they have five common characteristics: operational independence of the individual systems, managerial independence of the systems, geographical distribution, emergent behavior and evolutionary development: a description from the field of evolutionary acquisition of complex adaptive systems in the military.
Enterprise systems-of-systems engineering is focused on coupling traditional systems engineering activities with the enterprise activities of strategic planning and investment analysis: a description from the field of information-intensive systems in private enterprise.
System-of-systems problems are a collection of trans-domain networks of heterogeneous systems that are likely to exhibit operational and managerial independence, geographical distribution, and emergent and evolutionary behaviors that would not be apparent if the systems and their interactions were modeled separately: a description spanning the National Transportation System, the integrated military and space exploration.

Taken together, all these descriptions suggest that a complete system-of-systems engineering framework is needed to improve decision support for system-of-systems problems. Specifically, an effective framework is needed to help decision makers determine whether related infrastructure, policy and/or technology considerations, as an interrelated whole, are good, bad or neutral over time. The need to solve system-of-systems problems is urgent not only because of the growing complexity of today's challenges, but also because such problems require large monetary and resource investments with multi-generational consequences.
System-of-systems topics

The system-of-systems approach

While the individual systems constituting a system of systems can be very different and operate independently, their interactions typically expose and deliver important emergent properties. These emergent patterns have an evolving nature that stakeholders must recognize, analyze and understand. The system-of-systems approach does not advocate particular tools, methods or practices; instead, it promotes a new way of thinking for solving grand challenges where the interactions of technology, policy, and economics are the primary drivers. System-of-systems study is related to the general study of design, complexity and systems engineering, but it also brings the additional challenge of design to the fore. Systems of systems typically exhibit the behaviors of complex systems, but not all complex problems fall in the realm of systems of systems.

Inherent to system-of-systems problems are several combinations of traits, not all of which are exhibited by every such problem:

Operational independence of elements
Managerial independence of elements
Evolutionary development
Emergent behavior
Geographical distribution of elements
Interdisciplinary study
Heterogeneity of systems
Networks of systems

The first five traits are known as Maier's criteria for identifying system-of-systems challenges. The remaining three traits have been proposed from the study of the mathematical implications of modeling and analyzing system-of-systems challenges by Dr. Daniel DeLaurentis and his co-researchers at Purdue University.

Research

Current research into effective approaches to system-of-systems problems includes:

Establishment of an effective frame of reference
Crafting of a unifying lexicon
Development of effective methodologies to visualize and communicate complex systems
Distributed resource management
Study of architecture design
Interoperability
Data distribution policies: policy definition, design guidance and verification
Formal modelling languages with an integrated tools platform
Study of various modeling, simulation and analysis techniques: network theory, agent-based modeling, general systems theory, probabilistic robust design (including uncertainty modeling/management), object-oriented simulation and programming, and multi-objective optimization (a toy agent-based sketch appears at the end of the next section)
Study of various numerical and visual tools for capturing the interaction of system requirements, concepts and technologies

Applications

Systems of systems, while still being investigated predominantly in the defense sector, are also seeing application in such fields as national air and auto transportation and space exploration. Other fields where the approach can be applied include health care, design of the Internet, software integration, and energy management and power systems. Social-ecological interpretations of resilience, where different levels of our world (e.g., the Earth system, the political system) are interpreted as interconnected or nested systems, take a systems-of-systems approach. An application in business can be found in supply chain resilience.

Educational institutions and industry

Collaboration among a wide array of organizations is helping to drive the definition of the system-of-systems problem class and the methodology for modeling and analyzing system-of-systems problems. There are ongoing projects throughout many commercial entities, research institutions, academic programs, and government agencies.
Major stakeholders in the development of this concept include:

Universities working on system-of-systems problems, including Purdue University, the Georgia Institute of Technology, Old Dominion University, George Mason University, the University of New Mexico, the Massachusetts Institute of Technology, the Naval Postgraduate School and Carnegie Mellon University.
Corporations active in this research, such as The MITRE Corporation, Airbus, BAE Systems, Northrop Grumman, Boeing, Raytheon, Thales Group, CAE, Saber Astronautics and Lockheed Martin.
Government agencies that perform and support research in systems-of-systems research and applications, such as DARPA, the U.S. Federal Aviation Administration, NASA and the Department of Defense (DoD).

For example, the DoD recently established the National Centers for System of Systems Engineering to develop a formal methodology for system-of-systems engineering for applications in defense-related projects. In another example, according to the Exploration Systems Architecture Study, NASA established the Exploration Systems Mission Directorate (ESMD) organization to lead the development of a new exploration "system of systems" to accomplish the goals outlined by President G. W. Bush in the 2004 Vision for Space Exploration.

A number of research projects and support actions sponsored by the European Commission are currently in progress. These target Strategic Objective IST-2011.3.3 in the FP7 ICT Work Programme (New paradigms for embedded systems, monitoring and control towards complex systems engineering), which has a specific focus on the "design, development and engineering of System-of-Systems". These projects include:

T-AREA-SoS (Trans-Atlantic Research and Education Agenda on Systems of Systems), which aims "to increase European competitiveness in, and improve the societal impact of, the development and management of large complex systems in a range of sectors through the creation of a commonly agreed EU-US Systems of Systems (SoS) research agenda".
COMPASS (Comprehensive Modelling for Advanced Systems of Systems), aiming to provide a semantic foundation and an open tools framework to allow complex SoSs to be successfully and cost-effectively engineered, using methods and tools that promote the construction and early analysis of models.
DANSE (Designing for Adaptability and evolutioN in System of systems Engineering), which aims to develop "a new methodology to support evolving, adaptive and iterative System of Systems life-cycle models based on a formal semantics for SoS inter-operations and supported by novel tools for analysis, simulation, and optimisation".
ROAD2SOS (Roadmaps for System-of-System Engineering), aiming to develop "strategic research and engineering roadmaps in Systems of Systems Engineering and related case studies".
DYMASOS (DYnamic MAnagement of physically-coupled Systems Of Systems), aiming to develop theoretical approaches and engineering tools for the dynamic management of SoS, based on industrial use cases.
AMADEOS (Architecture for Multi-criticality Agile Dependable Evolutionary Open System-of-Systems), aiming to bring time awareness and evolution into the design of systems of systems with possible emergent behavior, and to establish a sound conceptual model, a generic architectural framework and a design methodology.
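Among the modeling techniques singled out in the research list above, agent-based simulation is the easiest to miniaturize. The Python toy below (illustrative only, not any cited project's model) shows two of Maier's traits at once: each constituent system is operationally independent, following only a local acceptance rule, yet an emergent network-level capacity appears from their interactions.

import random

# Each constituent system is operationally independent: it decides,
# from local load alone, whether to accept work or hand it to a peer.
class ConstituentSystem:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.load = name, capacity, 0

    def offer(self, units, peers, hops=3):
        if self.load + units <= self.capacity:
            self.load += units
            return True
        if hops and peers:                      # bounded hand-off chain
            return random.choice(peers).offer(units, peers, hops - 1)
        return False

random.seed(1)
systems = [ConstituentSystem(f"S{i}", capacity=10) for i in range(5)]
peers = systems[1:]
served = sum(systems[0].offer(3, peers) for _ in range(20))
# Emergent property: total served work exceeds what S0 alone could carry.
print(f"requests served: {served}; S0 alone could serve only {10 // 3}")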
See also

Inheritance
Software Library
Object-oriented programming
Model-based systems engineering
Enterprise systems engineering
Complex adaptive system
Systems architecture
Process architecture
Software architecture
Enterprise architecture
Ultra-Large-Scale Systems
Department of Defense Architecture Framework
New Cybernetics

References

Further reading

Yaneer Bar-Yam et al. (2004). "The Characteristics and Emerging Behaviors of System-of-Systems". In: NECSI: Complex Physical, Biological and Social Systems Project, January 7, 2004.
Kenneth E. Boulding (1954). "General Systems Theory – The Skeleton of Science". Management Science, Vol. 2, No. 3, ABI/INFORM Global, pp. 197–208.
Crossley, W.A. "System-of-Systems": an introduction of Purdue University Schools of Engineering's Signature Area.
Mittal, S., Martin, J.L.R. (2013). Netcentric System of Systems Engineering with DEVS Unified Process. CRC Press, Boca Raton, FL.
DeLaurentis, D. "Understanding Transportation as a System of Systems Design Problem". 43rd AIAA Aerospace Sciences Meeting, Reno, Nevada, January 10–13, 2005. AIAA-2005-0123.
J. Lewe, D. Mavris. "Foundation for Study of Future Transportation Systems Through Agent-Based Simulation". In: Proceedings of the 24th International Congress of the Aeronautical Sciences (ICAS), Yokohama, Japan, August 2004. Session 8.1.
Held, J.M. The Modelling of Systems of Systems. PhD thesis, University of Sydney, 2008.
D. Luzeaux & J.R. Ruault. Systems of Systems. ISTE Ltd and John Wiley & Sons Inc, 2010.
D. Luzeaux, J.R. Ruault & J.L. Wippler. Complex Systems and Systems of Systems Engineering. ISTE Ltd and John Wiley & Sons Inc, 2011.
Popper, S., Bankes, S., Callaway, R., and DeLaurentis, D. (2004). System-of-Systems Symposium: Report on a Summer Conversation, July 21–22, 2004, Potomac Institute for Policy Studies, Arlington, VA.

External links

System of Systems – video, IBM
IEEE International Conference on System of Systems Engineering (SoSE)
System of Systems Engineering Center of Excellence
System of Systems, Systems Engineering Guide (USD AT&L, August 2008)
International Journal of System of Systems Engineering (IJSSE)

Systems engineering
Systems theory
Windows System Assessment Tool

The Windows System Assessment Tool (WinSAT) is a module of Microsoft Windows Vista, Windows 7, Windows 8, Windows 10 and Windows 11 that is available in the Control Panel under Performance Information and Tools (except in Windows 8.1, Windows 10 and Windows 11). It measures various performance characteristics and capabilities of the hardware it is running on and reports them as a Windows Experience Index (WEI) score. The WEI includes five subscores: processor, memory, 2D graphics, 3D graphics, and disk; the base score is equal to the lowest of the subscores, not their average. WinSAT reports WEI scores on a scale from 1.0 to 5.9 for Windows Vista, 7.9 for Windows 7, and 9.9 for Windows 8, Windows 10 and Windows 11.

The WEI enables users to match their computer's hardware performance with the performance requirements of software. For example, the Aero graphical user interface will not automatically be enabled unless the system has a WEI score of 3 or higher. The WEI can also be used to show which part of a system would be expected to provide the greatest increase in performance when upgraded. For example, a computer whose lowest subscore is memory would benefit more from a RAM upgrade than from a faster hard drive (or any other component).

Detailed raw performance information, like actual disk bandwidth, can be obtained by invoking winsat from the command line; this also allows specific tests to be re-run on their own. Obtaining the WEI score from the command line is done by invoking winsat formal, which also updates the value stored in %systemroot%\Performance\WinSAT\DataStore. (The XML files stored there can easily be edited to report fake performance values; a sketch of reading them back programmatically appears at the end of this entry.) The WEI is also available to applications through an API, so they can configure themselves as a function of hardware performance, taking advantage of its capabilities without becoming unacceptably slow.

The Windows Experience Index score is not displayed in Windows 8.1 and onwards because the graphical user interface for WinSAT was removed in these versions of Windows, although the command-line winsat tool still exists and operates correctly, and a final score is shown when launching the command "shell:games". According to an article in PC Pro, Microsoft removed the WinSAT GUI in order to promote the idea that all kinds of hardware run Windows 8 equally well.

History

At the 2003 Game Developers Conference, Dean Lester, Microsoft's General Manager of Windows Graphics and Gaming, stated in an interview with GameSpot that Microsoft intended to focus on improvements to the PC gaming experience as part of a new gaming initiative for the next version of Windows, Windows Vista, then codenamed "Longhorn". Lester stated that as part of this initiative the operating system would include a games folder that would centralize settings pertinent to gamers and, among other features, display driver streamlining, parental controls for games, and the ability to start a Windows game directly from optical media during installation, in a manner similar to games designed for a video game console. Microsoft would also require a new method of displaying system requirements on retail packaging for Windows games, with a rating system that would categorize games based on a numerical scale.
In 2004, Lester expanded further on Microsoft's intentions, stating that the company would work with hardware manufacturers to create PCs for Windows Vista that used a "level system" to designate the performance and capabilities of a system's hardware, and that Xbox 360 peripherals would be fully compatible with the operating system. The Windows Experience Index feature in Windows Vista relies on measurements taken with WinSAT to provide an accurate assessment of a system's capabilities; these capabilities are presented in the form of a rating, where a higher rating indicates better performance. Preliminary design elements created for Microsoft by Robert Stein in 2004 suggest that WinSAT was intended to rate a user's hardware during the out-of-box experience, a design decision that was retained for the operating system's release to manufacturing. During the Windows Hardware Engineering Conference of 2005, Microsoft formally unveiled the existence of WinSAT and presented it as a technology not only for games, but one that would allow Windows Vista to make decisions, such as whether to enable desktop composition, based on a machine's hardware capabilities. WinSAT remained a key focus throughout the development of the operating system before its release to manufacturing.

Tests

WinSAT in Windows Vista and Windows 7 performs the following tests:

Direct3D 9 Aero Assessment
Direct3D 9 Batch Assessment
Direct3D 9 Alpha Blend Assessment
Direct3D 9 Texture Load Assessment
Direct3D 9 ALU Assessment
Direct3D 10 Batch Assessment
Direct3D 10 Alpha Blend Assessment
Direct3D 10 Texture Load Assessment
Direct3D 10 ALU Assessment
Direct3D 10 Geometry Assessment
Direct3D 10 Constant Buffer Assessment
Windows Media Decoding Performance
Windows Media Encoding Performance
CPU Performance
Memory Performance
Disk Performance (includes devices such as solid-state drives)

While running, the tests show only a progress bar and a "working" background animation. Aero Glass is deactivated on Windows Vista and Windows 7 during testing so the tool can properly assess the graphics card and CPU.

In Windows 8, WinSAT runs under the maintenance scheduler every week; the default schedule is 1 a.m. on Sundays. The maintenance scheduler collates various OS tasks into a single schedule so the computer is not randomly interrupted by the individual tasks: it wakes the computer from sleep, runs all the scheduled tasks, and then puts the computer back to sleep. During this weekly task, WinSAT runs long enough to detect whether there have been any hardware changes. If so, the tests are run again; if not, WinSAT simply ends, as the existing scores must still be valid.

WinSAT cannot perform the above tests when a laptop is running on battery power.

References

External links

WinSAT API
How To Get Windows Experience Index (WEI) Score In Windows 8.1
Freeware – get Windows Experience Index in Windows 8.1 and Windows 10
WinSAT Microsoft Store app for Windows 10

System Assessment Tool
Benchmarks (computing)
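Returning to the DataStore mentioned earlier: because winsat formal leaves its results as XML files under %systemroot%\Performance\WinSAT\DataStore, the subscores can be read back programmatically. In the Python sketch below, the glob pattern and the element names (WinSPR and its children such as SystemScore and MemoryScore) match commonly observed "Formal.Assessment" files, but treat both as assumptions rather than a documented schema.

import glob, os
import xml.etree.ElementTree as ET

def latest_wei_scores():
    store = os.path.expandvars(r"%systemroot%\Performance\WinSAT\DataStore")
    # Assumed naming convention: timestamped "... Formal.Assessment ... .xml"
    formal = sorted(glob.glob(os.path.join(store, "*Formal.Assessment*.xml")))
    if not formal:
        raise FileNotFoundError("no assessment found; run 'winsat formal' first")
    root = ET.parse(formal[-1]).getroot()   # timestamps sort, so [-1] is newest
    spr = root.find(".//WinSPR")            # assumed element holding the scores
    if spr is None:
        raise ValueError("unexpected assessment layout")
    return {child.tag: child.text for child in spr}

# e.g. {'SystemScore': '4.9', 'MemoryScore': '5.9', 'CpuScore': '6.1', ...}
print(latest_wei_scores())

This also illustrates the caveat noted above: since these files are plain XML and the GUI (where present) trusted them, editing them by hand is all it takes to report fake scores.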
Criticism of Linux

The criticism of Linux focuses on issues concerning use of operating systems which use the Linux kernel. While the Linux-based Android operating system dominates the smartphone market in many countries, and Linux is used on the New York Stock Exchange and most supercomputers, it is used in few desktop and laptop computers. Much of the criticism of Linux is related to the lack of desktop and laptop adoption, although as of 2015 there was also growing unease about the kernel project's approach to security, and the adoption of systemd has been controversial.

Linux kernel criticisms

Kernel development politics

Some security professionals say that the rise in prominence of operating system-level virtualization using Linux has raised the profile of attacks against the kernel, and that Linus Torvalds is reluctant to add mitigations against kernel-level attacks in official releases. Linux 4.12, released in 2017, enabled KASLR by default, but its effectiveness is debated.

Con Kolivas, a former kernel developer, tried to optimize the kernel scheduler for interactive desktop use. He eventually dropped support for his patches, citing a lack of appreciation for his work. In the 2007 interview Why I quit: kernel developer Con Kolivas he stated:

Kernel performance

At LinuxCon 2009, Linux creator Linus Torvalds said that the Linux kernel has become "bloated and huge":

At LinuxCon 2014, Linus Torvalds said he thinks the bloat situation is better because modern PCs are a lot faster:

Kernel code quality

In an interview with the German newspaper Zeit Online in November 2011, Linus Torvalds stated that Linux has become "too complex" and he was concerned that developers would not be able to find their way through the software anymore. He complained that even subsystems have become very complex, and he told the publication that he is "afraid of the day" when there will be an error that "cannot be evaluated anymore."

Andrew Morton, one of the Linux kernel's lead developers, explains that many bugs identified in Linux are never fixed:

Theo de Raadt, founder of OpenBSD, compares the OpenBSD development process to Linux's:

Desktop use

Critics of Linux on the desktop have frequently argued that a lack of top-selling video games on the platform holds adoption back. For instance, as of September 2015, the Steam gaming service had 1,500 games available on Linux, compared to 2,323 games for Mac and 6,500 for Windows. As of October 2021, Proton, a Steam-backed development effort descended from Wine, provides compatibility with a large number of Windows-only games, and potentially better performance than Linux-native ports in some cases. ProtonDB is a community-maintained effort to gauge how well different versions of Proton work with a given game.

As a desktop operating system, Linux has been criticized on a number of fronts, including:
A confusing number of choices of distributions and desktop environments.
Poor open source support for some hardware, in particular drivers for 3D graphics chips, where manufacturers were unwilling to provide full specifications. As a result, many video cards have both open and closed source drivers, usually with different levels of support.
Limited availability of widely used commercial applications (such as Adobe Photoshop and Microsoft Word). This is a result of the software developers not supporting Linux rather than any fault of Linux itself. Sometimes this can be solved by running the Windows versions of these programs through Wine (a generic illustration follows this list), a virtual machine, or dual-booting. Even so, this creates a chicken-or-egg situation where developers make programs for Windows due to its market share, and consumers use Windows due to the availability of the programs.
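As a minimal, generic illustration of the Wine workaround mentioned in the list above — the installer name here is hypothetical, while wine notepad runs one of the simple Windows applets that Wine itself ships:

  wine setup.exe
  wine notepad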
Distribution fragmentation

Another common complaint levelled against Linux is the abundance of distributions available. As of November 2021, DistroWatch lists 275 distributions. While Linux advocates have defended the number as an example of freedom of choice, other critics cite the large number as a cause of confusion and lack of standardization in Linux operating systems. Alexander Wolfe wrote in InformationWeek:

Caitlyn Martin of LinuxDevCenter has been critical of the number of Linux distributions:

Hardware support

In recent decades (since the established dominance of Microsoft Windows) hardware developers have often been reluctant to provide full technical documentation for their products, to allow drivers to be written. This meant that a Linux user had to carefully hand-pick the hardware that made up the system to ensure functionality and compatibility. These problems have largely been addressed:
At one time, Linux systems required removable media, such as floppy discs and CD-ROMs, to be manually mounted before they could be accessed. Mounting media is now automatic in nearly all distributions, with the development of udev.
Some companies, such as EmperorLinux, have addressed the problems of laptop hardware compatibility by mating modified Linux distributions with specially selected hardware to ensure compatibility from delivery.

Directory structure

The traditional directory structure, which is a heritage of Linux's Unix roots in the 1970s, has been criticized as inappropriate for desktop end users. Some Linux distributions, like GoboLinux and moonOS, have proposed alternative hierarchies that were argued to be easier for end users, though these achieved little acceptance.

Criticism by Microsoft

In 2004, Microsoft initiated its Get the Facts marketing campaign, which specifically criticized Linux server usage. In particular, it claimed that the vulnerabilities of Windows are fewer in number than those of Linux distributions, that Windows is more reliable and secure than Linux, that the total cost of ownership of Linux is higher (due to complexity, acquisition costs, and support costs), that use of Linux places a burden of liability on businesses, and that "Linux vendors provide little, if any indemnification coverage." In addition, the corporation published various studies in an attempt to prove this – the factuality of which has been heavily disputed by different authors who claim that Microsoft's comparisons are flawed. Many Linux distributors now offer indemnification to customers.

Internal Microsoft reports from the Halloween documents leak have presented conflicting views. In particular, documents from 1998 and 1999 conceded that "Linux ... is trusted in mission critical applications, and – due to its open source code – has a long term credibility which exceeds many other competitive OSs", "An advanced Win32 GUI user would have a short learning cycle to become productive [under Linux]", "Long term, my simple experiments do indicate that Linux has a chance at the desktop market ...", and "Overall respondents felt the most compelling reason to support OSS was that it 'Offers a low total cost of ownership (TCO)'."
Responses to criticism

The Linux community has had mixed responses to these and other criticisms. As mentioned above, while some criticism has led to new features and better user-friendliness, the Linux community as a whole has a reputation for being resistant to criticism. Writing for PC World, Keir Thomas noted that, "Most of the time the world of Linux tends to be anti-critical. If anybody in the community dares be critical, they get stomped upon." In a 2015 interview, Linus Torvalds also mentioned the tendency of Linux desktop environment projects to blame their users instead of themselves when criticized.

See also
Criticism of desktop Linux
Criticism of Microsoft Windows
The Unix-Haters Handbook

References

Linux
MBC-550

The Sanyo MBC-550 is a small, inexpensive personal computer in "pizza-box" style, featuring an Intel 8088 microprocessor and running a version of MS-DOS. Sold by Sanyo, it was the least expensive early IBM PC compatible.

The MBC-550 has much better video display capabilities than the CGA card (8 colors at 640x200 resolution, versus CGA's 4 colors at 320x200 or 2 colors at 640x200), but it is not completely compatible with the IBM PC. The computer lacks a standard BIOS, having only a minimal bootloader in ROM that accesses hardware directly to load a RAM-based BIOS. The diskette format used (FM rather than MFM) is not completely compatible with the IBM PC, although special software on an original PC or PC/XT (but not PC/AT) can read and write the diskettes, and software expecting a standard 18.2 Hz clock interrupt has to be rewritten.

The MBC-550 was also the computer used for NRI's home-study training course. Starting with building the computer, NRI promised students they would be "qualified to service and repair virtually every major brand of computer". NRI advertised in Popular Mechanics and Popular Science throughout 1985.

The MBC-550 is less PC compatible than the IBM PCjr. Its inability to use much PC software was a significant disadvantage; InfoWorld reported in August 1985 that Sanyo "has initiated a campaign to sell off" its MBC-550 inventory. The company's newer computers were, an executive claimed, 99% PC compatible.

Soft Sector Magazine

Soft Sector was a magazine for people who owned Sanyo MBC-550 and 555 DOS computers, though much of its content applied equally to most IBM clones of the time. A typical issue included news, reviews, how-tos, technical advice and education, and tips and tricks, as well as BASIC language programs that readers could copy from the printed page and adapt to suit their needs.

Models
MBC-550: 1 x 5.25" disk drive (160 KB)
MBC-555: 2 x 5.25" disk drive (160 KB)
MBC-555-2: 2 x 5.25" disk drive (360 KB)
MBC-555-3: 2 x 5.25" disk drive (720 KB)

References

Sanyo products
IBM PC compatibles
Windows Media

Windows Media is a discontinued multimedia framework for media creation and distribution for Microsoft Windows. It consists of a software development kit (SDK) with several application programming interfaces (APIs) and a number of prebuilt technologies, and is the replacement for the NetShow technologies. The Windows Media SDK was replaced by Media Foundation when Windows Vista was released.

Software
Windows Media Center
Windows Media Player
Windows Media Encoder
Windows Media Services
Windows Movie Maker

Formats
Advanced Systems Format (ASF)
Advanced Stream Redirector (ASX)
Windows Media Audio (WMA)
Windows Media Playlist (WPL)
Windows Media Video (WMV) and VC-1
Windows Media Station (NSC)
WMV HD (Windows Media Video High Definition), the branding name for high definition (HD) media content encoded using Windows Media codecs; WMV HD is not a separate codec
HD Photo (formerly Windows Media Photo, standardized as JPEG XR)
DVR-MS, the recording format used by Windows Media Center
SAMI, the closed caption format developed by Microsoft, which can be used to synchronize captions and audio descriptions with online video
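As an illustration of the redirector format listed above, an ASX file is a small markup document that points a player at the actual stream. The following is a minimal sketch; the server name and clip are hypothetical:

  <ASX VERSION="3.0">
    <TITLE>Example station</TITLE>
    <ENTRY>
      <TITLE>Sample clip</TITLE>
      <REF HREF="mms://media.example.com/sample.wmv" />
    </ENTRY>
  </ASX>

The REF entry here also shows the MMS streaming protocol listed below.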
Protocols
Media Stream Broadcast (MSB), for multicast distribution of Advanced Systems Format content over a network
Media Transfer Protocol (MTP), for transferring and synchronizing media on portable devices
Microsoft Media Services (MMS), the streaming transport protocol
Windows Media DRM, an implementation of digital rights management

Website
WindowsMedia.com

See also
QuickTime - Apple Computer's multimedia framework
Silverlight

External links
Official website
Description of the algorithm used for WMA encryption

Microsoft Windows multimedia technology
Multimedia frameworks
Conversational Monitor System

The Conversational Monitor System (CMS – originally "Cambridge Monitor System") is a simple interactive single-user operating system. CMS was originally developed as part of IBM's CP/CMS operating system, which went into production use in 1967. CMS is part of IBM's VM family, which runs on IBM mainframe computers. VM was first announced in 1972, and is still in use today as z/VM.

CMS runs as a "guest" operating system in a private virtual machine created by the VM control program. The control program plus CMS together create a multi-user time-sharing operating system.

History

CMS was originally developed as part of IBM's CP/CMS operating system. At the time, the acronym meant "Cambridge Monitor System" (but also "Console Monitor System"). CMS first ran under CP-40, a one-off research system using custom hardware at IBM's Cambridge Scientific Center. Production use at CSC began in January 1967. The CMS user interface drew heavily on experience with the influential first-generation time-sharing system CTSS, some of whose developers worked on CP/CMS. (CTSS was used as an early CP/CMS development platform.)

Later in 1967, CP/CMS became generally available on the IBM System/360 Model 67, where, although the new control program CP-67 was a substantial re-implementation of CP-40, CMS remained essentially the same. IBM provided CP/CMS "as is" – without any support, in source code form, as part of the IBM Type-III Library. CP/CMS was thus an open source system. Despite this lack of support from IBM, CP/CMS achieved great success as a time-sharing platform; by 1972, there were some 44 CP/CMS systems in use, including commercial sites that resold access to CP/CMS.

In 1972, IBM released its VM/370 operating system, a re-implementation of CP/CMS for the System/370, in an announcement that also added virtual memory hardware to the System/370 series. Unlike CP/CMS, VM/370 was supported by IBM. VM went through a series of versions, and is still in use today as z/VM. Through all its distinct versions and releases, the CMS platform remained quite recognizable as a close descendant of the original CMS version running under CP-40. Many key user interface decisions familiar to today's users had already been made in 1965, as part of the CP-40 effort. See CMS under CP-40 for examples.

Both VM and CP/CMS had checkered histories at IBM. VM was not one of IBM's "strategic" operating systems, which were primarily the OS and DOS families, and it suffered from IBM political infighting over time-sharing versus batch processing goals. This conflict is why CP/CMS was originally released as an unsupported system, and why VM often had limited development and support resources within IBM. An exceptionally strong user community, first established in the self-support days of CP/CMS but remaining active after the launch of VM, made substantial contributions to the operating system, and mitigated the difficulties of running IBM's "other operating system".

Architecture

CMS is an intrinsic part of the VM/CMS architecture, established with CP/CMS. Each CMS user has control over a private virtual machine – a simulated copy of the underlying physical computer – in which CMS runs as a stand-alone operating system.
This approach has remained consistent through the years, and is based on:
Full virtualization, used to create multiple independent virtual machines that each completely simulate the underlying hardware
Paravirtualization, used to provide a hypervisor interface that CMS uses to access VM services; this is implemented by the non-virtualized DIAG (diagnose) instruction

More details on how CMS interacts with the virtual machine environment can be found in the VM and CP/CMS articles.

CMS was originally built as a stand-alone operating system, capable of running on a bare machine (though of course nobody would choose to do so). However, CMS can no longer run outside the VM environment, which provides the hypervisor interface needed for various critical functions.

Features

CMS provides users an environment for running applications or batch jobs, managing data files, creating and debugging applications, doing cross-platform development, and communicating with other systems or users. CMS is still in development and wide use today.

Basic environment

Users log into VM, providing a userid and password, and then boot their own virtual machine. This can be done by issuing the command "IPL CMS" ("IPL" = initial program load, traditional IBM jargon for booting a machine), though this is normally done automatically for the user. Personal customization is done by a standard shell script file named "PROFILE EXEC", which sets up user-specified environmental defaults, such as which disks and libraries are accessed.

Terminal support

CMS started in the era of teletype-style paper terminals, and the later "glass teletype" dumb terminals. By the late 1970s, however, most VM users were connecting via full-screen terminals – particularly the IBM 3270, the ubiquitous transaction processing terminal on IBM mainframes. The 3270 played a strategic role in IBM's product line, making its selection a natural choice for large data centers of the day. Many other manufacturers eventually offered bisync terminals that emulated the 3270 protocol.

3270s had local buffer storage, some processing capabilities, and generally dealt with an entire screen of data at a time. They handled editing tasks locally, and then transmitted a set of fields (or the entire page) at once when the ENTER key or a program function key (PFK) was pressed. The 3270 family incorporated "smart" control units, concentrators, and other network processing elements, communicating with the mainframe over dedicated circuits at relatively high speeds, via the bisync (binary synchronous) communication protocol. (These mainframe-oriented communication technologies provided some of the capabilities taken for granted in modern communication networks, such as device addressing, routing, error correction, and support for a variety of configurations such as multipoint and multidrop topologies.)

The 3270 approach differed from lower-cost dumb terminals of the period, which were point-to-point and asynchronous. Commercial time-sharing users, an important segment of early CP/CMS and VM sites, relied on such devices because they could connect via 300- or 1200 bit/s modems over normal voice-grade telephone circuits. Installing a dedicated circuit for a 3270 was often not practical, economical, or timely. The 3270's block-oriented approach was more consistent with IBM's batch- and punched card-oriented view of computing, and was particularly important for IBM mainframes of the day.
Unlike contemporary minicomputers, most IBM mainframes were not equipped for character-at-a-time interrupts. Dumb terminal support relied on terminal control units such as the IBM 270x (see IBM 3705) or Memorex 1270. These asynchronous terminal controllers assembled a line of characters, up to a fixed maximum length, until the RETURN key was pressed. Typing too many characters would result in an error, a familiar situation to users of the day. (Most data centers did not include this equipment, except as needed for dial-up access; the 3270 approach was preferred.)

Block-oriented terminals like the 3270 made it practical to implement screen-oriented editors on mainframes – as opposed to line-oriented editors, the previous norm. This had been an important advantage of contemporary minicomputers and other character-oriented systems, and its availability via the 3270 was warmly welcomed.

A gulf developed between the 3270 world, focused on page-oriented mainframe transaction processing (especially via CICS), and the asynch terminal world, focused on character-oriented minicomputers and dial-up timesharing. Asynchronous terminal vendors gradually improved their products with a range of smart terminal features, usually accessed via escape sequences. However, these devices rarely competed for 3270 users; IBM maintained its dominance over mainframe data center hardware purchase decisions.

Viewed in retrospect, there was a major philosophical divergence between block-oriented and character-oriented computing. Asynchronous terminal controllers and 3270s both provided the mainframe with block-oriented interactions – essentially, they made the terminal input look like a card reader. This approach, preferred by IBM, led to the development of entirely different user interface paradigms and programming strategies. Character-oriented systems evolved differently. The difference is apparent when comparing the atomic transaction approach of the dominant CICS with the interactive, stream-oriented style of UNIX. VM/CMS evolved somewhere between these extremes. CMS has a command-driven, stateful, interactive environment, rather than adopting the CICS approach of a stateless transaction-oriented interface. Yet CMS responds to page- or line-at-a-time interaction, instead of character interrupts.

Performance

CMS earned a very good reputation for being efficient, and for having good human factors for ease of use, relative to the standards of the time (and, of course, prior to the widespread use of graphical user interface environments such as are common today). It was not uncommon to have hundreds (later thousands) of concurrent CMS interactive users on the same VM mainframe, with sub-second response times for common, 'trivial' functions. VM/CMS consistently outperformed MVS and other IBM operating systems in terms of support for simultaneous interactive users.

Programming and major applications

Many CMS users programmed in such languages as COBOL, FORTRAN, PL/I, C/370, APL, and the scripting language REXX. VM/CMS was often used as a development platform for production systems that ran under IBM's other operating systems, such as MVS. Other CMS users worked with commercial software packages such as FOCUS, NOMAD, SPSS, and SAS.

At one time, CMS was also a major environment for e-mail and office productivity; an important product was IBM's PROFS (later renamed OfficeVision). Two commonly used CMS tools are the editor XEDIT and the REXX programming language.
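To give the flavor of REXX on CMS, the following is a minimal sketch of a PROFILE EXEC of the kind mentioned under Basic environment above. The quoted commands are common CMS idioms, but the details are illustrative and vary by installation:

  /* PROFILE EXEC -- runs automatically after IPL CMS */
  'ACCESS 191 A'          /* make the user's own minidisk available */
  'SET MSG ON'            /* allow messages from other users        */
  say 'Profile complete.'
  exit

(A REXX program on CMS must begin with a comment; strings in quotes are passed to CMS as commands, while statements such as say and exit are interpreted by REXX itself.)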
Both XEDIT and REXX have been ported to other platforms, and are now widely used outside the mainframe environment.

References

See VM (operating system) for VM-related sources and source citations.

Notes

See also
CMS file system

1967 software
IBM mainframe operating systems
Command shells
VM (operating system)
Transaction Processing Facility

Transaction Processing Facility (TPF) is an IBM real-time operating system for mainframe computers descended from the IBM System/360 family, including zSeries and System z9. TPF delivers fast, high-volume, high-throughput transaction processing, handling large, continuous loads of essentially simple transactions across large, geographically dispersed networks.

While there are other industrial-strength transaction processing systems, notably IBM's own CICS and IMS, TPF's specialty is extreme volume, large numbers of concurrent users, and very fast response times. For example, it handles VISA credit card transaction processing during the peak holiday shopping season.

The TPF passenger reservation application PARS, or its international version IPARS, is used by many airlines. PARS is an application program; TPF is an operating system. One of TPF's major optional components is a high-performance, specialized database facility called TPF Database Facility (TPFDF).

A close cousin of TPF, the transaction monitor ALCS, was developed by IBM to integrate TPF services into the more common mainframe operating system MVS, now z/OS.

History

TPF evolved from the Airlines Control Program (ACP), a free package developed in the mid-1960s by IBM in association with major North American and European airlines. In 1979, IBM introduced TPF as a replacement for ACP — and as a priced software product. The new name suggests its greater scope and evolution into non-airline-related entities.

TPF was traditionally an IBM System/370 assembly language environment for performance reasons, and many TPF assembler applications persist. However, more recent versions of TPF encourage the use of C. Another programming language, called SabreTalk, was born and died on TPF.

IBM announced the delivery of the current release of TPF, dubbed z/TPF V1.1, in September 2005. Most significantly, z/TPF adds 64-bit addressing and mandates use of the 64-bit GNU development tools. The GCC compiler and the DIGNUS Systems/C++ and Systems/C are the only supported compilers for z/TPF. The Dignus compilers offer reduced source code changes when moving from TPF 4.1 to z/TPF.

Users

Current users include Sabre (reservations), VISA Inc. (authorizations), American Airlines, American Express (authorizations), DXC Technology SHARES (reservations), Holiday Inn (central reservations), Amtrak, Marriott International, Travelport (Galileo, Apollo, Worldspan, Axess Japan GDS), Citibank, Air Canada, Trenitalia (reservations), Delta Air Lines (reservations and operations) and Japan Airlines.

Operating environment

Tightly coupled

Although IBM's 3083 was aimed at running TPF on a "fast... uniprocessor", TPF is capable of running on a multiprocessor, that is, on systems in which there is more than one CPU. Within an LPAR, the CPUs are referred to as instruction streams or simply I-streams. When running in an LPAR with more than one I-stream, TPF is said to be running tightly coupled. TPF adheres to SMP concepts; no concept of NUMA-based distinctions between memory addresses exists.

The depth of the CPU ready list is measured as each incoming transaction is received, and the transaction is queued for the I-stream with the lowest demand, thus maintaining continuous load balancing among available processors. In cases where loosely coupled configurations are populated by multiprocessor CPCs (Central Processing Complexes, i.e. the physical machine packaged in one system cabinet), SMP takes place within the CPC as described here, whereas sharing of inter-CPC resources takes place as described under Loosely coupled, below.

In the TPF architecture, all memory (except for a 4 KB-sized prefix area) is shared among all I-streams. In instances where memory-resident data must or should be kept separated by I-stream, the programmer typically allocates a storage area into a number of subsections equal to the number of I-streams, then accesses the desired I-stream-associated area by taking the base address of the allocated area and adding to it the product of the I-stream relative number times the size of each subsection.
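The per-I-stream addressing just described can be sketched in a few lines of C. This is a hypothetical illustration of the arithmetic only, not a TPF interface; the names and sizes are invented:

  /* Hypothetical sketch: carve one storage area into per-I-stream
     subsections, as described above. Not actual TPF code.          */
  #include <stddef.h>

  #define NUM_ISTREAMS    4       /* I-streams (CPUs) in the LPAR */
  #define SUBSECTION_SIZE 4096    /* bytes reserved per I-stream  */

  static char work_area[NUM_ISTREAMS * SUBSECTION_SIZE];

  /* base address + (I-stream relative number * subsection size) */
  void *istream_area(int istream_number)
  {
      return work_area + (size_t)istream_number * SUBSECTION_SIZE;
  }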
Loosely coupled

TPF is capable of supporting multiple mainframes (of any size themselves — from single I-stream to multiple I-stream) connecting to and operating on a common database. Currently, 32 IBM mainframes may share the TPF database; if such a system were in operation, it would be called 32-way loosely coupled. The simplest loosely coupled system would be two IBM mainframes sharing one DASD (Direct Access Storage Device). In this case, the control program would be equally loaded into memory, and each program or record on DASD could potentially be accessed by either mainframe.

In order to serialize accesses between data records on a loosely coupled system, a practice known as record locking must be used. This means that when one mainframe processor obtains a hold on a record, the mechanism must prevent all other processors from obtaining the same hold and communicate to the requesting processors that they are waiting. Within any tightly coupled system, this is easy to manage between I-streams via the use of the Record Hold Table. However, when the lock is obtained off-board of the TPF processor, in the DASD control unit, an external process must be used. Historically, record locking was accomplished in the DASD control unit via an RPQ known as LLF (Limited Locking Facility) and later ELLF (Extended LLF). LLF and ELLF were both replaced by the Multipathing Lock Facility (MPLF). To run clustered (loosely coupled), z/TPF requires either MPLF in all disk control units or an alternative locking device called a Coupling Facility.

Processor shared records

Records that absolutely must be managed by a record locking process are those which are processor shared. In TPF, most record accesses are done by using record type and ordinal. If a record type of 'FRED' were defined in the TPF system and given 100 records, or ordinals, then in a processor shared scheme, record type 'FRED' ordinal '5' would resolve to exactly the same file address on DASD on every processor — clearly necessitating the use of a record locking mechanism. All processor shared records on a TPF system are accessed via exactly the same file address, which resolves to exactly the same location.

Processor unique records

A processor unique record is one that is defined such that each processor expected to be in the loosely coupled complex has a record type of 'FRED' and perhaps 100 ordinals. However, if a user on any two or more processors examines the file address that record type 'FRED', ordinal '5' resolves to, they will note that a different physical address is used.
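The distinction can be made concrete with a small C sketch. The address arithmetic here is invented for illustration; it is not the real TPF file-addressing algorithm:

  /* Hypothetical illustration of shared vs. unique record addressing. */
  typedef unsigned long file_addr_t;

  #define ORDINALS_PER_TYPE 100UL

  /* Processor shared: every processor resolves ('FRED', 5) to the
     same DASD address, hence the need for record locking.          */
  file_addr_t shared_addr(file_addr_t type_base, unsigned long ordinal)
  {
      return type_base + ordinal;
  }

  /* Processor unique: each processor's records are carved from its
     own region, so ('FRED', 5) resolves differently per processor. */
  file_addr_t unique_addr(file_addr_t type_base, unsigned long ordinal,
                          unsigned int processor_id)
  {
      return type_base + processor_id * ORDINALS_PER_TYPE + ordinal;
  }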
TPF attributes

What TPF is not

TPF is not a general-purpose operating system. TPF's specialized role is to process transaction input messages, then return output messages on a 1:1 basis at extremely high volume with short maximum elapsed time limits.

TPF has no built-in graphical user interface functionality, and TPF has never offered direct graphical display facilities: to implement it on the host would be considered an unnecessary and potentially harmful diversion of real-time system resources. TPF's user interface is command-line driven, with simple text display terminals that scroll upward, and there are no mouse-driven cursors, windows, or icons on a TPF Prime CRAS (Computer Room Agent Set — which is best thought of as the "operator's console"). Character messages are intended to be the mode of communication with human users; all work is accomplished via the use of the command line, similar to UNIX without X. There are several products available which connect to Prime CRAS and provide graphical interface functions to the TPF operator, such as TPF Operations Server. Graphical interfaces for end users, if desired, must be provided by external systems. Such systems perform analysis on character content (see screen scraping) and convert the message to or from the desired graphical form, depending on its context.

Being a special-purpose operating system, TPF does not host a compiler or assembler, a text editor, or the concept of a desktop as one might expect to find in a general-purpose OS. TPF application source code is commonly stored in external systems, and likewise built "offline". Starting with z/TPF 1.1, Linux is the supported build platform; executable programs intended for z/TPF operation must observe the ELF format for s390x-ibm-linux.

Using TPF requires knowledge of its Command Guide, since there is no support for an online command "directory" or "man"/help facility to which users might be accustomed. Commands created and shipped by IBM for the system administration of TPF are called "functional messages" — commonly referred to as "Z-messages", as they are all prefixed with the letter "Z". Other letters are reserved so that customers may write their own commands.

TPF implements debugging in a distributed client-server mode, which is necessary because of the system's headless, multi-processing nature: pausing the entire system in order to trap a single task would be highly counter-productive. Debugger packages have been developed by third-party vendors who took very different approaches to the "break/continue" operations required at the TPF host, implementing unique communication protocols used in traffic between the human developer running the debugger client and the server-side debug controller, as well as the form and function of debugger program operations at the client side. Two examples of third-party debugger packages are Step by Step Trace from Bedford Associates, and CMSTPF, TPF/GI, and zTPFGI from TPF Software, Inc. Neither package is wholly compatible with the other, nor with IBM's own offering. IBM's debugging client offering is packaged in an IDE called IBM TPF Toolkit.

What TPF is

TPF is highly optimized to permit messages from the supported network to either be switched out to another location, routed to an application (a specific set of programs), or to permit extremely efficient accesses to database records.

Data records

Historically, all data on the TPF system had to fit in fixed record (and memory block) sizes of 381, 1055 and 4K bytes. This was due in part to the physical record sizes of blocks located on DASD. Much overhead was saved by relieving the operating system of breaking large data entities into smaller ones during file operations, and of reassembling them during read operations.
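A free-list allocator over a few fixed sizes conveys why such a scheme is cheap; the description under Memory usage below ("hand the first block on the available list") suggests roughly the following shape. This is a hypothetical sketch, not TPF code:

  /* Hypothetical fixed-size block pools in the spirit of TPF's
     381-, 1055- and 4K-byte blocks. Not actual TPF code.        */
  #include <stdlib.h>

  struct block { struct block *next; };

  static struct block *free_list[3];            /* one list per size  */
  static const size_t block_size[3] = { 381, 1055, 4096 };

  void *get_block(int size_class)
  {
      struct block *b = free_list[size_class];
      if (b != NULL)
          free_list[size_class] = b->next;      /* pop first available */
      else
          b = malloc(block_size[size_class]);   /* grow the pool       */
      return b;
  }

  void release_block(int size_class, void *p)
  {
      struct block *b = p;                      /* push back; no       */
      b->next = free_list[size_class];          /* compaction needed   */
      free_list[size_class] = b;
  }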
Since IBM hardware does I/O via the use of channels and channel programs, TPF would generate very small and efficient channel programs to do its I/O — all in the name of speed. Since the early days also placed a premium on the size of storage media — be it memory or disk — TPF applications evolved into doing very powerful things while using very little resource.

Today, many of these limitations have been removed. In fact, only because of legacy support are smaller-than-4K DASD records still used. With the advances made in DASD technology, a read or write of a 4K record is just as efficient as that of a 1055-byte record. The same advances have increased the capacity of each device, so that there is no longer a premium placed on the ability to pack data into the smallest model possible.

Programs and residency

TPF also had its program segments allocated as 381-, 1055- and 4K-byte records at different points in its history. Each segment consisted of a single record, with a typically comprehensive application requiring perhaps tens or even hundreds of segments. For the first forty years of TPF's history, these segments were never link-edited. Instead, the relocatable object code (direct output from the assembler) was laid out in memory, had its internally (self-referential) relocatable symbols resolved, then the entire image was written to file for later loading into the system. This created a challenging programming environment in which segments related to one another could not directly address each other, with control transfer between them implemented as the ENTER/BACK system service.

In ACP/TPF's earliest days (circa 1965), memory space was severely limited, which gave rise to a distinction between file-resident and core-resident programs — only the most frequently used application programs were written into memory and never removed (core residency); the rest were stored on file and read in on demand, with their backing memory buffers released post-execution.

The introduction of the C language to TPF at version 3.0 was first implemented conformant to segment conventions, including the absence of linkage editing. This scheme quickly demonstrated itself to be impractical for anything other than the simplest of C programs. At TPF 4.1, truly and fully linked load modules were introduced to TPF. These were compiled with the z/OS C/C++ compiler using TPF-specific header files and linked with IEWL, resulting in a z/OS-conformant load module, which in no manner could be considered a traditional TPF segment. The TPF loader was extended to read the z/OS-unique load module file format, then lay out file-resident load modules' sections into memory; meanwhile, assembly language programs remained confined to TPF's segment model, creating an obvious disparity between applications written in assembler and those written in higher-level languages (HLLs).

At z/TPF 1.1, all source language types were conceptually unified and fully link-edited to conform to the ELF specification. The segment concept became obsolete, meaning that any program written in any source language — including assembler — may now be of any size. Furthermore, external references became possible, and separate source code programs that had once been segments could now be directly linked together into a shared object.
One value point is that critical legacy applications can benefit from improved efficiency through simple repackaging — calls made between members of a single shared object module now have a much shorter pathlength at run time compared to calling the system's ENTER/BACK service. Members of the same shared object may now share writeable data regions directly, thanks to copy-on-write functionality also introduced at z/TPF 1.1, which coincidentally reinforces TPF's reentrancy requirements.

The concepts of file residency and memory residency were also made obsolete, due to a z/TPF design point which sought to have all programs resident in memory at all times.

Since z/TPF had to maintain a call stack for high-level language programs, which gave HLL programs the ability to benefit from stack-based memory allocation, it was deemed beneficial to extend the call stack to assembly language programs on an optional basis, which can ease memory pressure and ease recursive programming.

All z/TPF executable programs are now packaged as ELF shared objects.

Memory usage

Historically, and in step with the above, core blocks — memory — were also 381, 1055 and 4K bytes in size. Since all memory blocks had to be of these sizes, most of the overhead for obtaining memory found in other systems was discarded. The programmer merely needed to decide what size block would fit the need and ask for it. TPF would maintain a list of blocks in use and simply hand out the first block on the available list.

Physical memory was divided into sections reserved for each size, so a 1055-byte block always came from a section and was returned there; the only overhead needed was to add its address to the appropriate physical block table's list. No compaction or data collection was required.

As applications got more advanced, demands for memory increased, and once C became available, memory chunks of indeterminate or large size were required. This gave rise to the use of heap storage and some memory management routines. To ease the overhead, TPF memory was broken into frames — 4 KB in size (1 MB with z/TPF). If an application needs a certain number of bytes, the number of contiguous frames required to fill that need is granted.

References

Bibliography
Transaction Processing Facility: A Guide for Application Programmers (Yourdon Press Computing Series) by R. Jason Martin (Hardcover, April 1990)

External links
z/TPF (IBM)
TPF User Group (TPF User Group)

Real-time operating systems
IBM mainframe operating systems
Transaction processing facility
Time-sharing system evolution

This article covers the evolution of time-sharing systems, providing links to major early time-sharing operating systems and showing their subsequent evolution.

Time-sharing

Time-sharing was first proposed in the mid- to late 1950s and first implemented in the early 1960s. The concept was born out of the realization that a single expensive computer could be efficiently utilized if a multitasking, multiprogramming operating system allowed multiple users simultaneous interactive access. Typically an individual user would enter bursts of information followed by long pauses; but with a group of users working at the same time, the pauses of one user would be filled by the activity of the others. Similarly, small slices of time spent waiting for disk, tape, or network input could be granted to other users. Given an optimal group size, the overall process could be very efficient.

Each user would use their own computer terminal, initially electromechanical teleprinters such as the Teletype Model 33 ASR or the Friden Flexowriter; from about 1970 these were progressively superseded by CRT-based units such as the DEC VT05, Datapoint 2200 and Lear Siegler ADM-3A. Terminals were initially linked to a nearby computer via current loop or serial cables, by conventional telegraph circuits provided by PTTs, and over specialist digital leased lines such as T1. Modems such as the Bell 103 and its successors allowed remote and higher-speed use over the analogue voice telephone network.

Family tree of major systems

See details and additional systems in the table below. Relationships shown here are for the purpose of grouping entries and do not reflect all influences. The Cambridge Multiple-Access System was the first time-sharing system developed outside the United States.

System descriptions and relationships

See also
History of CP/CMS has many period details and sources.
Timeline of operating systems

References

History of software
Time-sharing system evolution
Dreamlinux

Dreamlinux was a Brazilian computer operating system based on Debian Linux. It could boot as a live CD or from a USB flash drive, or could be installed on a hard drive. The distribution's GUI featured a centered, animated toolbar. As of October 2012, the Dreamlinux project has been discontinued.

Editions

Dreamlinux 2.2 MM GL Edition (2007)

Dreamlinux Multimedia Edition 2.2 with AIGLX provides Beryl-AIGLX by default, which can be utilized after the initial installation. One of its key features is its ability to configure AIGLX for NVIDIA and ATI cards automatically. The distribution received a favorable review for its appearance and functionality.

Dreamlinux 3.0 (2008)

Dreamlinux Desktop Edition 3.0 features a complete redesign. It supports a totally independent architecture named Flexiboost, based on overlaid modules. The feature allows the co-existence of two (or more) separate window managers (currently GNOME and Xfce), sharing the same customized appearance. Both working environments share all the applications available. In addition to the 700 MB ISO file (CD image), a 130 MB Multimedia Module is also available, including DVD support. This is primarily intended for use when running from a USB flash drive, rather than in live-CD mode.

New applications
The following applications were not included in previous releases:
Gthumb (replacing GQview)
Pidgin instant messenger
Ndiswrapper module
WineHQ + Wine Doors installer

Other improvements
Now boots from any CD-ROM or DVD-R/W unit
Improved Dreamlinux Control Panel
Improved Dreamlinux Installer
Improved Easy Install application
Theme-Switcher on GNOME changes the theme without the need to restart X
Setup-Network Manager can stop, start and restart the network, and enable or disable it at boot; the network is now set up to start automatically during boot, and Cupsys also starts on boot
New wizard for emerald-themes
New wallpapers
New icons
New Avant Window Manager themes and AWN-Dock (see AWN Manager in DCP)
CompizFusion enabler in DCP switches the default Engage dock to AWN Dock
New GDM themes, now featuring countdowns

Dreamlinux 3.5 (2009)

Dreamlinux 3.5 is an update to the original Dreamlinux 3.0 desktop. This release features the Xfce desktop, with the GNOME desktop as an additional option in the form of a module. This release uses the Debian Lenny desktop. It features Linux kernel version 2.6.28.5 as well as new icons and a new GTK+ theme. There is also the option to install directly to a USB memory stick in two modes:

Live Dream
This runs the same as a live CD, and does not save changes.

Persistent Dream
This runs as though Dreamlinux were installed on the hard drive, and saves any configuration changes that are made. It is only recommended for use on USB drives of at least 2 GB.

DreamLinux 5.0 (2012)

DreamLinux 5.0 is based on Debian Wheezy 7.0 with Linux kernel 3.1. The only edition available is an ISO image of around 956 MB. It features:
Xfce 4.8 desktop with a look quite similar to the Mac OS X user interface
Programming environments for Ruby, Lua, Vala, C, C++, Python and Perl
Server and network applications: Apache2, PHP5, MySQL, Samba, Netatalk, TorrentFlux, SSH, Bluetooth, Network-Manager, Avahi-Daemon (Bonjour), Preload, Fancontrol, Cpufreqd
Pre-installed applications for end users: the Chromium web browser; audio and video codecs for playing many multimedia formats; the SoftMaker office suite (TextMaker, PlanMaker and Presentations); the graphics editors GIMP and Inkscape, along with the Shotwell photo manager and the Foxit Reader PDF application
Dreamlinux 5.0 offers a new installer called FlexiBoot, which allows users to easily install Dreamlinux 5.0 on a USB external hard drive and use it anywhere, or install it to the internal hard drive. MKDistro is a simple utility that allows users to build their own customized Dreamlinux- and Debian-based distributions.

Live USB

A live USB version of Dreamlinux can be created manually or with UNetbootin.
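For manual creation, a common generic approach for hybrid ISO images is to write the image directly to the stick; whether this applies to a given Dreamlinux release is an assumption, and the file and device names here are placeholders (writing to the wrong device destroys its contents):

  dd if=dreamlinux-5.0.iso of=/dev/sdX bs=4M
  sync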
References

External links

Debian-based distributions
Portuguese-language Linux distributions
Linux distributions
ExoPC

The EXOPC is a Tablet PC, in slate form, that uses Windows 7 Home Premium as its operating system, and is designed by the company of the same name, based in Quebec, Canada. The EXOPC Slate is manufactured by Pegatron.

The first EXOPC slate was launched in October 2010, directly from EXOPC Corp. on their website, and in Canada through the company Hypertechnologie Ciara. Hypertechnologie Ciara markets the slate under the name Ciara Vibe. Probitas markets the EXOPC as Mobi-One in Southern Europe and North Africa. RM Education markets the EXOPC in the UK as the RM Slate. Leader Computers markets the EXOPC in Australia. The EXOPC Slate is also currently available in the United States via the Microsoft Store, both online and in stores. Mustek markets it as the Mecer Lucid Slate in South Africa.

Hardware

The architecture is based on an Intel Atom-M Pineview N450 CPU clocked at 1.66 GHz, and includes 2 GB of DDR2 SDRAM and 32 GB of solid-state drive (SSD) storage in its basic version, with an alternative model having a larger 64 GB SSD. The EXOPC is also equipped with an accelerometer, which lets the display change between portrait and landscape modes when the slate is turned in either direction. Internally it has four mini-PCIe slots, of which three provide space for full-length cards and one for a half-length card. Three of these slots are in use and the fourth is available, but intended for a WWAN card. The unit also provides a SIM card slot.

Display

The EXOPC has an 11.6-inch diagonal, capacitive multi-touch screen. The screen has a resolution of 1366 × 768 pixels (WXGA), a 16:9 ratio, and 135 pixels per inch. The screen's firmware currently allows detection of two points of simultaneous touch, but is technically capable of up to 10 points of touch. A light sensor built into the front of the tablet automatically adjusts the display brightness to ambient conditions. It is also possible to use a capacitive stylus for precision work, such as hand-drawn art and graphic works.

Connectivity

The EXOPC offers connectivity equivalent to that of a standard laptop:
Wi-Fi IEEE 802.11b / IEEE 802.11g / IEEE 802.11n
Bluetooth 2.1 + EDR
Two USB 2.0 ports
Audio in/out SuperJack
Mini-HDMI for connecting to an external monitor or television, with a maximum output resolution of 1080p (upscaled from 1366 × 768)
Dock connector

External power supply

Recharging the battery is done through a standard external power supply:
Size:
Weight:
Input: 100–240 V
Output: 19 V, 2.1 amperes

Software features

Operating system

The EXOPC uses Microsoft Windows 7 as its operating system. The company has developed a GUI layer around the standard Windows 7 GUI, nicknamed by the EXOPC community the "Connect Four" interface due to its full screen of interactive circles arranged in a grid pattern. A dedicated button on the touch-screen interface minimizes the EXOPC layer and reveals the Windows 7 desktop, allowing the EXOPC Slate to act as a standard Windows computer when needed.

Applications

Pre-installed applications
The EXOPC comes with the following pre-installed applications:
Microsoft Security Essentials
Microsoft .NET Framework 4.0
Microsoft Silverlight runtime for IE
Adobe Flash Player 10.2 and Acrobat Reader for reading PDF files
EXOPC GUI Layer

Store-specific applications
An application library, similar to the Apple App Store or the Android Market, is available for the device, accessible through the EXOPC UI.
Feedback

The tablet captured the attention of several blogs and websites in the summer of 2010, being heralded as a possible alternative to the iPad. However, early reviews criticized the weight and battery life of the final product, as well as many missing features, the interface itself, the sluggishness of the Internet browser, and difficulties in using the on-screen keyboard.

See also
WeTab – German version with the MeeGo OS and similar hardware

References

External links
Official company website
Manufacturer website
Website of Hypertechnologie Ciara, Inc.
EXOPC Microsoft Store

Computer companies of Canada
Tablet computers
X-Win32

In computing, X-Win32 is a proprietary implementation of the X Window System for Microsoft Windows, produced by StarNet Communications. It is based on X11R7.4. X-Win32 allows remote display of UNIX windows on Windows machines in a normal window alongside the other Windows applications.

Version History

X-Win32 was first introduced by StarNet Communications as a product called MicroX in 1991. As the internet became more widely used in the 1990s, the name changed to X-Win32. The table below details the origination and transformation of MicroX into X-Win32. A limited set of versions and their release notes are available from the product's website.

Features

Standard connection protocols - X-Win32 offers six standard connection protocols: ssh, telnet, rexec, rlogin, rsh, and XDMCP.
Window modes - Like other X servers for Microsoft Windows, X-Win32 has two window modes, single and multiple. Single window mode contains all X windows within one large visible root window. Multiple window mode allows the Microsoft window manager to manage the X client windows.
Copy and paste - X-Win32 incorporates a clipboard manager which allows dynamic copying and pasting of text from X clients to Windows applications and vice versa. A screenshot tool saves to a PNG file.
OpenGL support - X-Win32 uses the GLX extension, which allows for OpenGL support.
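As a generic illustration of how any X server on Windows — X-Win32 included — is typically driven, a remote UNIX client can be pointed at the PC's display. The commands below are standard OpenSSH and X11 usage, not X-Win32-specific syntax, and the host names are hypothetical:

  ssh -X user@unixhost xterm

or, on the remote host, directing a client at the Windows machine running the X server:

  export DISPLAY=winpc:0.0
  xterm &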
Related products

X-Win32 Flash is a version of X-Win32 that can be installed and run directly from a USB flash drive.

Discontinued products

X-Win64 was a version for 64-bit Windows, but the extended features in that version can now be found in the current version of X-Win32.
X-Win32 LX was a free, commercially supported X server for Microsoft Windows which supported Microsoft Windows Services for UNIX (SFU).
Recon-X was an add-on product for all X server products, including X-Win32 competitors such as Exceed and Reflection X, which added suspend and resume capabilities to running X sessions. Features of Recon-X were incorporated into the LIVE product line.
LinuxLIVE is a LIVE client for Linux systems.
MacLIVE is a LIVE client for Mac OS X systems.
LIVE Console is a LIVE client installed with the LIVE server which allows localhost LIVE connections to be made.

See also
Cygwin/X - A free alternative
Exceed - A commercial alternative
Reflection X - A commercial alternative
Xming - Donations or purchase required

References

External links
X-Win32 (product home page)

X servers
Swecha

Swecha is a non-profit organization, formerly called the Free Software Foundation Andhra Pradesh (FSF-AP), and is part of the Free Software Movement of India (FSMI). Swecha is also the name of a Telugu operating system the organization released in 2005. The organization is a social movement working to educate the masses about the essence of free software and to bring that knowledge to ordinary people. Swecha organizes workshops and seminars in the Indian states of Telangana and Andhra Pradesh.

Presently Swecha runs active GLUGs (GNU/Linux User Groups) in many engineering colleges, such as the International Institute of Information Technology, Hyderabad; Jawaharlal Nehru Technological University, Hyderabad; Chaitanya Bharathi Institute of Technology; St. Martin's Engineering College; Sridevi Women's Engineering College; Mahatma Gandhi Institute of Technology; SCIENT Institute of Technology; CMR Institute of Technology, Hyderabad; Jyothishmathi College of Engineering and Technology; MVGR College of Engineering; K L University; and Ace Engineering College.

Objectives

The main objectives of the organization are as follows:
To take forward free software and its ideological implications to all corners of the country, from the developed domains to the underprivileged.
To create awareness among computer users in the use of free software.
To work towards the usage of free software in all streams of science and research.
To take forward the implementation and usage of free software in school education, academics and higher education.
To work towards e-literacy and bridging the digital divide based on free software, mobilizing the underprivileged.
To work among developers on solutions catering to societal and national requirements.
To work towards a policy change favoring free software in all walks of life.

Activities

Swecha hosted a National Convention for Academics and Research, which was attended by researchers and academicians from different parts of the country. Former President of India Dr. A.P.J. Abdul Kalam, while inaugurating the conference and declaring it open, asked everyone to embrace free software and the philosophy associated with it.

Technology should also be within the reach of everybody. At a time when technology has become all-pervasive and people are increasingly dependent on it, transparency in software code is the need of the hour. "One needs to know what is going on in your mobile phone or computer," said D. Bhuvan Krishna, co-convener of Swecha. Software whose code is available for anyone and everyone to modify fills this gap, felt speakers at an event organised to spread the word of free and open-source software (FOSS) and celebrate the launch of the latest web browser from the Mozilla Foundation's stable, Mozilla Firefox 3.5.

Swecha organised a 15-day workshop at the Chaitanya Bharathi Institute of Technology (CBIT) for budding software engineers from across the country. The idea was to impress on students the importance of contributing towards the development of free software, as doing so would not only allow students to exercise their creative faculties but also help society free itself from the clutches of proprietary software.
Swecha, an organisation floated to promote the free software movement in India, organised a one-day workshop on free software in the Department of Computer Science and Systems Engineering, Andhra University College of Engineering; the students who attended the workshop, along with the faculty members, joined to formally launch a GNU/Linux User Group (GLUG).

In order to build a mass movement for free software, Swecha organizes Freedom Fest to promote the use of free software. About 1,500 students from 80 colleges in Andhra Pradesh, Tamil Nadu and Chhattisgarh converged on the campus to voice their concerns against proprietary software and share their passion for free software.

Swecha hosted a two-day international technical symposium on the Free Internet (DFI) and free software in Gachibowli, Hyderabad, on 24 January 2014. More than 4,700 participants attended, including more than 20 delegates from ThoughtWorks and social activists from across the world.

Swecha organises summer camps every year, in which large numbers of students participate. The camps focus on training students in free software technology and the culture of sharing and collaborative development of free software. In 2014 alone, 15-day camps were conducted for more than 2,000 students, with participants collaboratively engaging in the conduct of the camps.

Projects

Swecha is a free software project aimed at producing a localised version of the Linux operating system in Telugu and providing global software solutions to local people through the free software development model, working together with the community of developers and users everywhere. The prime objective of the Swecha OS is to provide a complete computing solution to a population that speaks and understands only Telugu. The target users of the distro are the entire community that has fallen prey to the digital divide. The project offers a solution to the digital divide and makes the possibility of digital unity a reality. It aims at bridging the gap between computer technology, which exists predominantly in English, and the Telugu-speaking community of India. The project also aims at providing a framework for the development and maintenance of free software projects taken up by the community.

BalaSwecha is a free software project initiated by Swecha for young children. It is a school distro with many useful interactive applications for school-goers. Its stack is filled with educational suites for all standards, from elementary right up to tenth standard. These cover a wide range of applications which help students learn subjects such as Maths, Physics, Geography and Chemistry very easily. Swecha has taken up many activities in training the school teachers and computer instructors of several government schools. The aim of the distro is to deliver a free software-based operating system for the "Sarva Shiksha Abhiyan" project initiated by the government; until now, no operating system has offered full freedom together with such an educational stack. Swecha has plans to localize BalaSwecha for the benefit of Telugu-medium students.

E-Swecha is a free software project initiated by Swecha aimed at developing a free operating system that is built neither by a software firm nor by a few programmers: it is the collaborative work of hundreds of Swecha volunteers and engineering students in and around Hyderabad – of, for and by the engineering students.
Activism

Swecha organised a free software workshop at which Mr. Palagummi Sainath delivered a talk on "The Age of Inequality", telling the gathered engineering students and researchers that half of the country's children suffered from malnourishment while, at the same time, the situation was getting worse for farmers and suicides among them were high. Despite a high growth rate, malnourishment of children in the country remained at 46 per cent, worse than in countries of Sub-Saharan Africa, where the figure stood between 32 and 35 per cent.

Through Swecha, the widespread protests taking place across the country after the arrest of two girls over a Facebook comment also reached Hyderabad. On a Sunday, a group consisting mostly of IT professionals, students and academicians protested at Indira Park against the controversial Section 66(A) of the Information Technology Act.

Swecha was at the forefront of protests against the inclusion of proprietary software, making a representation to the All India Council for Technical Education (AICTE) against its deal with Microsoft.

Swecha organised a seminar on "Employment opportunities in a changing technology landscape" on 23 September at Mahima Gardens. Andy Müller-Maguhn, a member of the German hacker association Chaos Computer Club; Matt Simons, director of Social and Economic Justice at ThoughtWorks; and Y. Kiran Chandra, secretary of the Free Software Movement of India, addressed the students, who later joined the free software movement started by Richard Stallman. At a seminar organised by Swecha on the free software development model, Mr Neville Roy Singham explained that spying or surveillance can easily be carried out through hardware as well as software, and that no electronic device is safe from it; the NSA does it because it is simply very cheap for them, and they are taking in literally every piece of information they can get. More than 3,000 students, mainly from the engineering stream, attended the seminar, which continued until late evening, as other speakers such as Renata Avila, a human rights lawyer and internet freedom activist from Guatemala; Dmytri Kleiner, a telemiscommunications specialist; and Zack Exley, former Chief Revenue Officer of the Wikimedia Foundation, also conducted seminars and interacted with students.

Internet surveillance and digital snooping on the people is the biggest threat to democracy, said Richard Stallman. Internet surveillance and spying are dangerous and threaten the functioning of democracy, Dr. Stallman told students at a seminar on "Free Software and Internet Freedom" organised by Swecha on the Acharya Nagarjuna University campus.

Swecha has demanded that the Central and State governments bring in policy changes on information technology to give a fillip to hardware manufacturing and the setting up of data centres and software design centres. Mr. Y. Kiran Chandra, chairman of Swecha, quoted data from the mobile telephony market to support the demand: "The annual market for mobile telephones in the country is about Rs. 16,000 crore, yet India is yet to have its own mobile manufacturing unit. The mobile handsets designed in China, South Korea and Finland are simply being relabelled and sold in the country."
See also Free Software Foundation Free Software Movement Free Software Foundation Tamil Nadu Free Software Movement of Karnataka Public Patent Foundation Software Freedom Law Center Guifi.net References External links Free Software Foundation Intellectual property activism Organisations based in Telangana Organisations based in Andhra Pradesh Organizations established in 2005 Science and technology think tanks Non-profit organisations based in India Free and open-source software organizations Software industry in India Digital rights organizations Non-profit technology Human rights organisations based in India 2005 establishments in Andhra Pradesh
Operating System (OS)
953
Criticism of Windows 10

Windows 10, an operating system released by Microsoft in July 2015, has been criticized by reviewers and users. Due to issues mostly concerning privacy, it has been the subject of a number of negative assessments by various groups.

General criticism

Critics have noted that Windows 10 heavily emphasizes freemium services and contains various advertising facilities. Some outlets have considered these to be a hidden "cost" of the free upgrade offer. Examples include media storefronts, Office 365, paid functionality in bundled games such as Microsoft Solitaire Collection, default settings that display promotions of "suggested" apps in the Start menu and "tips" on the lock screen that may contain advertising, ads displayed in File Explorer for Office 365 subscriptions on Redstone 2 builds, and notifications promoting the Microsoft Edge web browser when a different browser is set as default.

Update system

Windows 10 Home is permanently set to download all updates automatically, including cumulative updates, security patches, and drivers, and users cannot individually select which updates to install. Microsoft offers a diagnostic tool that can be used to hide updates and prevent them from being reinstalled, but only after the update in question has already been installed and then uninstalled, without rebooting the system. However, the software agreement states, specifically for users of Windows 10 in Canada, that they may pause updates by disconnecting their device from the Internet.

Tom Warren of The Verge felt that, given that web browsers such as Google Chrome had already adopted such an automatic update system, such a requirement would help to keep all Windows 10 devices secure, and felt that "if you're used to family members calling you for technical support because they've failed to upgrade to the latest Windows service pack or some malware disabled Windows Update then those days will hopefully be over."

Concerns were raised that, due to these changes, users would be unable to skip the automatic installation of updates that are faulty or cause issues with certain system configurations, although build upgrades will also be subject to public beta testing via the Windows Insider Program. There were also concerns that the forced installation of driver updates through Windows Update, where they were previously designated as "optional", could cause conflicts with drivers that were installed independently of Windows Update. Such a situation occurred just prior to the general release of the operating system, when an Nvidia graphics card driver that was automatically pushed to Windows 10 users via Windows Update caused issues that prevented the use of certain functions, or prevented their systems from booting at all.

Criticism was also directed towards Microsoft's decision to no longer provide specific details on the contents of cumulative updates for Windows 10. On February 9, 2016, Microsoft reversed this decision and began to provide release notes for cumulative updates on the Windows website.

Windows 10 has also received criticism for deleting files without user permission after major updates. There can be multiple causes, such as a (now resolved) bug in the upgrade process, programs that are deemed to be incompatible with the new version of Windows and thus get uninstalled, a setting that did not propagate properly when upgrading to Windows 10, or malware being detected in the files.
Some users reported that, during the installation of the November upgrade, some applications (particularly utility programs such as CPU-Z and Speccy) were automatically uninstalled, and some default programs were reset to Microsoft-specified defaults (such as the Photos app, and Microsoft Edge for PDF viewing), both without warning. Application .exe files would often get deleted automatically during updates. Further issues were discovered upon the launch of the Anniversary Update ("Redstone"), including a bug that caused some devices to freeze (addressed by cumulative update KB3176938, released on August 31, 2016), and fundamental changes to how Windows handles webcams, which caused many to stop working.

Distribution practices

Microsoft was criticized for the tactics that it used to promote its free upgrade campaign for Windows 10, including adware-like behaviors, using deceptive user interfaces to coax users into installing the operating system, downloading installation files without user consent, and making it difficult for users to suppress the advertising and notifications if they did not wish to upgrade. The upgrade offer was marketed and initiated using the "Get Windows 10" (GWX) application, which was first downloaded and installed via Windows Update in March 2015. Registry keys and Group Policy settings could be used to partially disable the GWX mechanism, but the installation of patches to the GWX software via Windows Update could reset these keys back to defaults, and thus reactivate the software. Third-party programs were also created to assist users in applying measures to disable GWX.

In September 2015, it was reported that Microsoft was triggering automatic downloads of the Windows 10 installation files on all compatible Windows 7 or 8.1 computers configured to automatically download and install updates, regardless of whether or not they had specifically requested the upgrade. Microsoft officially confirmed the change, claiming it was "an industry practice that reduces time for installation and ensures device readiness." This move was criticized by users with data caps or devices with low storage capacity, as resources were consumed by the automatic downloads of up to 6 GB of data. Other critics argued that Microsoft should not have triggered any downloading of Windows 10 installation files without user consent.

In October 2015, Windows 10 began to appear as an "Optional" update in the Windows Update interface, pre-selected for installation on some systems. A Microsoft spokesperson said that this was a mistake, and that the download would no longer be pre-selected by default. However, on October 29, 2015, Microsoft announced that it planned to classify Windows 10 as a "recommended" update in the Windows Update interface some time in 2016, which would cause an automatic download of installation files and a one-time prompt with a choice to install to appear. In December 2015, it was reported that a new advertising dialog had begun to appear, containing only "Upgrade now" and "Upgrade tonight" buttons, and no obvious method to decline installation besides the close button. In March 2016, some users also alleged that their Windows 7 and 8.1 devices had automatically begun upgrading to 10 without their consent. In June 2016, the GWX dialog's behavior changed to make closing the window imply consent to a scheduled upgrade.
Despite this, an InfoWorld editor disputed the claims that upgrades had begun without any consent at all; testing showed that the upgrade to Windows 10 would only begin once the user accepted the end-user license agreement (EULA) presented by its installer, and that not doing so would eventually cause Windows Update to time out with an error, thus halting the installation attempt. It was concluded that these users may have unknowingly clicked the "Accept" prompt without full knowledge that this would begin the upgrade. In December 2016, Microsoft chief marketing officer Chris Capossela admitted that the company had "gone too far" by using this tactic, stating that "we know we want people to be running Windows 10 from a security perspective, but finding the right balance where you're not stepping over the line of being too aggressive is something we tried and for a lot of the year I think we got it right."

On January 21, 2016, Microsoft was sued in small claims court by a user whose computer, shortly after the release of the OS, had attempted to upgrade to Windows 10 without her consent. The upgrade failed, and her computer was left in an unstable state thereafter, which disrupted the ability to run her travel agency. The court ruled in favor of the user and awarded her $10,000 in damages, but Microsoft appealed. However, in May 2016, Microsoft dropped the appeal and chose to pay the damages. Shortly after the suit was reported on by the Seattle Times, Microsoft confirmed that it was updating the GWX software once again to add more explicit options for opting out of a free Windows 10 upgrade; the final notification was a full-screen pop-up window notifying users of the impending end of the free upgrade offer, and contained "Remind me later", "Do not notify me again" and "Notify me three more times" options.

Privacy and data collection

Advocates and other critics have raised concerns about Windows 10's privacy policies and its collection and use of customer data. Under the default "Express" settings, Windows 10 is configured to send various information to Microsoft and other parties: it collects user contacts, calendar data, the computer's appearance (including the color of the chassis) and "associated input data" to personalize "speech, typing, and inking input"; collects typing and inking data to improve recognition; allows apps to use a unique "advertising ID" for analytics and advertising personalization (functionality introduced by Windows 8.1); and allows apps to request the user's location data and send this data to Microsoft and "trusted partners" to improve location detection (Windows 8 had similar settings, except that location data collection did not include "trusted partners"). Users can opt out of most of this data collection, but telemetry data for error reporting and usage is also sent to Microsoft, and this cannot be disabled on non-Enterprise versions of Windows 10. The use of the Cortana intelligent personal assistant also requires the collection of data "such as your device location, data from your calendar, the apps you use, data from your emails and text messages, who you call, your contacts and how often you interact with them on your device" to personalize its functionality. Rock Paper Shotgun writer Alec Meer argued that Microsoft's intent for this data collection lacked transparency, stating that "there is no world in which 45 pages of policy documents and opt-out settings split across 13 different Settings screens and an external website constitutes 'real transparency'."
ExtremeTech pointed out that Microsoft, which had previously campaigned against Google for similar data collection strategies, "now hoovers up your data in ways that would make Google jealous." However, it was also pointed out that the requirement for such vast usage of customer data had become a norm, citing the increased reliance on cloud computing and other forms of external processing, as well as similar data collection requirements for services on mobile devices such as Google Now and Siri. In August 2015, Russian politician Nikolai Levichev called for Windows 10 to be banned from use by the Russian government, as it sends user data to servers in the United States (a federal law requiring all online services to store the data of Russian users on servers within the country, or be blocked, took effect in September 2016). Writing for ZDNet, Ed Bott said that the lack of complaints by businesses about privacy in Windows 10 indicated "how utterly normal those privacy terms are in 2015." In a Computerworld editorial, Preston Gralla said, "The kind of information Windows 10 gathers is no different from what other operating systems gather. But Microsoft is held to a different standard than other companies."

The Microsoft Services Agreement reads that the company's online services may automatically "download software updates or configuration changes, including those that prevent you from accessing the Services, playing counterfeit games, or using unauthorized hardware peripheral devices." Critics interpreted this statement as implying that Microsoft would scan for and delete unlicensed software installed on devices running Windows 10. However, others pointed out that this agreement was specific to Microsoft online services such as Microsoft account, Office 365, Skype, and Xbox Live, and that the offending passage most likely referred to digital rights management on Xbox consoles and first-party games, and not to plans to police pirated video games installed on Windows 10 PCs. Despite this, some torrent trackers announced plans to block Windows 10 users, also arguing that the operating system could send information to anti-piracy groups that are affiliated with Microsoft. Writing about these allegations, Ed Bott of ZDNet compared Microsoft's privacy policy to Apple's and Google's and concluded that "after carefully reading the Microsoft Services Agreement, the Windows license agreement...and the Microsoft Privacy Statement carefully, I don't see anything that looks remotely like Big Brother." Columnist Kim Komando argued that "Microsoft might in the future run scans and disable software or hardware it sees as a security threat," consistent with the Windows 10 update policy.

Following the release of Windows 10, allegations also surfaced that Microsoft had backported the operating system's increased data collection to Windows 7 and Windows 8 via "recommended" patches that added additional "telemetry" features. The updates' addition of a "Diagnostics Tracking Service" is connected specifically to Microsoft's existing Customer Experience Improvement Program (an opt-in program that sends additional diagnostic information to Microsoft for addressing issues), and to the Application Experience service, which is typically intended for third-party software compatibility requests. This was achieved by including various DLLs and adding the telemetry service executable (all of which notably have versions pertaining to Windows 10 builds) as part of various updates from 2016 onward.
The data collection functionality is capable of transmitting personal information, browsing history, the contents of emails, chat, video calls, voice mail, photos, documents, personal files and keystrokes to Microsoft for analysis, in accordance with the End User License Agreement; Microsoft's terms-of-service agreement was updated to reflect this. In October 2017, the Dutch Data Protection Authority issued a complaint asserting that Windows 10's privacy policies did not comply with the laws of the Netherlands, claiming that Microsoft does not provide sufficient information on what information is collected at the "Full" telemetry level and how it is processed. Microsoft disputed the claim that it did not provide enough disclosure of the "Full" telemetry level, and stated that it was working with the DDPA to "find appropriate solutions".

Antitrust issues

In November 2016, Kaspersky Lab filed an antitrust complaint in Russia regarding the bundling of Windows Defender with the operating system, arguing that Microsoft was abusing its position to favor its own, in-house antivirus software over those of other vendors. In June 2017, Kaspersky filed another complaint with the European Commission, accusing the company of frustrating the use of third-party antivirus software on Windows 10 in defense of its "inferior" Windows Defender, including by forcibly uninstalling third-party antivirus software during upgrades and by not providing enough time for antivirus developers to certify their software for each new upgrade to Windows 10. Microsoft stated that the company "[engages] deeply with antimalware vendors and have taken a number of steps to address their feedback", and that it had offered to meet Kaspersky executives to discuss any specific concerns. On June 21, 2017, Microsoft issued a blog post confirming that since the "Creators Update", Windows 10 may prompt users to temporarily disable their antivirus software upon installation of a feature update if the current version is not deemed to be compatible, and that the operating system would direct users to relevant updates to their software following the conclusion of the update. Microsoft stated that it had worked with vendors to perform compatibility testing of their software with the update, and to "specify which versions of their software are compatible and where to direct customers after updating." Microsoft reported that as a result of these efforts, around 95% of Windows 10 users "had an antivirus application installed that was already compatible with Windows 10 Creators Update". Microsoft clarified that Windows Defender only operates if the device does not have any other security software installed, or if security software reports that a subscription has lapsed.

In summer 2018, a Windows 10 Insider build drew strong backlash for a planned feature that would have attempted to discourage the installation of other web browsers, such as Chrome or Firefox, telling users that they already had Microsoft Edge and therefore had no need to change their browser.

See also
Criticism of Windows XP
Criticism of Windows Vista
Criticism of Microsoft
Criticism of Microsoft Windows
Bundling of Microsoft Windows

References

External links

Microsoft criticisms and controversies Operating system criticisms Windows 10
Operating System (OS)
954
Packard Bell Navigator

Packard Bell Navigator is an alternative shell for the Windows 3.1 and Windows 95 operating systems that shipped with Packard Bell computers. The shell was designed to be simpler for computer novices to use, representing applications as objects in a virtual home, similar to Microsoft Bob and At Ease. The software was originally developed by a company called Ark Interface, which was acquired by Packard Bell in 1994.

Design and functionality

Most pre-1995 versions contained a GUI very similar to Apple Computer's At Ease, but without the folders. Unlike At Ease, programs were grouped in sections such as "Microsoft DOS", "Microsoft Windows", "Service & Support", and "Software". The "Software" section was the only section users could customise; in it, they could modify program icons, paths, and links. These sections appeared as icons at startup. Navigator was a standard Windows program, meaning that when the computer was booted, Windows would start and load Navigator from the startup directory. If the Windows icon in Navigator was clicked, the program would be minimized. If Navigator was removed from the Startup folder, it would not load at Windows startup. This is similar to the design of Microsoft Bob, which ran on top of Windows, and At Ease, which ran on top of the classic Mac OS Finder. It was also possible for Navigator to function on non-Packard Bell Windows PCs. Packard Bell Navigator shipped with Packard Bell personal computers in the mid-1990s. A 3D version, named Packard Bell 3D Navigator, shipped in 2000–2001.

Security, user accounts, and passwords

Unlike At Ease, which allowed multiple password-protected user accounts, Navigator only allowed a single password to protect each of its sections; multiple user accounts were not possible, and all users were granted the same privileges.

References

External links
Toasty Technology GUI Gallery - Navigator 1.1 and Navigator 3.5
Packard Bell Navigator 1.0, 2.0, 3.0 and 3.9

Desktop shell replacement Packard Bell
Operating System (OS)
955
IPodLinux

iPodLinux is a µClinux-based Linux distribution designed specifically to run on Apple Inc.'s iPod. When the iPodLinux kernel is booted, it takes the place of Apple's iPod operating system and automatically loads Podzilla, an alternative GUI and launcher for a number of additional included programs such as a video player, an image viewer, a command line shell, games, emulators for video game consoles, programming demos, and other experimental or occasionally unfinished software. The project has been inactive since 2009, but its website is still maintained. Further development of free and open source software for iPods has continued with the Rockbox project, zeroslackr, and freemyipod, which have largely supplanted iPodLinux. Some third-party installers are still available.

Basic structure

iPodLinux in essence consists of a Linux kernel built from µClinux sources using the uClibc C standard library, with driver code for iPod components (or reverse-engineered drivers where available). It includes userland programs from µClinux and/or BusyBox, a UNIX-style file system (which can be created within HFS+ formatted iPods, or as an ext2 partition on FAT32 formatted iPods), and the Podzilla GUI and its modules. Apple's proprietary iPod OS, in contrast, uses an invisible boot loader and is based on an ARM processor kernel originally written by Pixo; the iPod Miller Columns browser program, a GUI written by Apple and Pixo using the Pixo application framework; and other firmware and component drivers written from manufacturers' reference code to support the standard behavior Apple wanted the iPod to have.

Features

Besides the kernel, iPodLinux features podzilla and podzilla2 as primary components, applications which provide:
An iPod-like user interface
Video playback with sound
Support for AAC, MP3 and basic OGG playback (the Music Player Daemon malfunctions on 4G and 5G models, but can be fixed)
Many games, including TuxChess, Bluecube (a Tetris clone), Chopper, StepMania (a Dance Dance Revolution clone) and more
Recording through the audio jack at much higher quality than Apple's firmware allows
The ability to play the games Doom and Doom II (and presumably any Doom total conversion, such as Chex Quest)
Color scheme support
The ability to run many emulators, such as iBoy (Nintendo Game Boy emulator), iNES (Nintendo Entertainment System emulator), iDarcNES (a port of the multiple-system emulator DarcNES), iMAME (a port of the Multiple Arcade Machine Emulator), and iGPSP (a Game Boy Advance emulator)

History

The bootloader for the 4th generation iPod was extracted by Nils Schneider, a German computer science student, after previous software methods to extract the necessary bootloader no longer worked. Bernard Leach had earlier discovered how to operate the piezo buzzer inside the iPod. Schneider was able to use this program, with some modifications, to emit a series of clicks for each byte of the new iPod's bootloader. The extraction process took 22 hours to complete and required Schneider to construct a soundproof box to prevent outside interference with the process.

Server transition

On June 11, 2008 the organization's website was suspended and replaced with a redirect to a blank page. The server then had its services restored incrementally. On October 1, 2008 the iPodLinux.org DNS address was updated, and the server was online again by October 5, 2008. On June 22, 2009 the server was pulled offline again; it was back online on September 8. In September 2010 the server went offline once more.
Alexander Papst, one of the developers, has posted a mirror of the site at ipodlinux.wiki. In 2015, the site was offline; it came back online in 2019.

Compatibility

According to the iPodLinux wiki, "developers have succeeded in getting [the following features] to work"; this does not imply that each feature is ready for widespread use. As of August 5, 2006, only the 1st, 2nd, and 3rd generation iPods are officially supported by iPodLinux, although newer generations are also partially compatible. The iPodLinux project does not plan support for the iPod shuffle, due to the lack of a GCC compiler for the shuffle's DSP57000 core as well as the fact that the iPod shuffle lacks a screen. While later generations work fine for many uses of iPodLinux, not all features work; these later generations will not be officially supported by the project until most or all features from the earlier iPods work on them. Installers are still being developed. Currently there is Installer 2.3 for Microsoft Windows or Linux, which can install on any generation of iPod (except the iPod shuffle and the iPod nano 2nd generation). As of April 2008, iPodLinux does not work on the new iPod firmware included with the second and third generation iPod nano or the 6th generation iPod classic, and Installer 2 cannot be used to install iPodLinux on the 5.5th generation iPod. In addition, the much-discussed audio recording feature does not work on the latest iPodLinux/zeroslackr builds: in iPodLinux, an "under development" message is shown under recording, while in zeroslackr, recording is not displayed at all.

Arguably one of the project's more notable accomplishments is its video player, released months before rumors about Apple's video iPod began to spread. This video player only plays uncompressed AVI files, which are basically a series of bitmap-formatted frames with an audio overlay that commonly loses sync with the video output. A compression technique called MoviePod, released in 2006, enables people to put more video content on their iPods. This function continues to be developed and is useful for owners of older iPods (especially nano users who, with the help of iPodLinux, get an extremely small media center that can be held in the palm of the hand).

podzilla 2, the second generation of podzilla and commonly known as pz2, has superseded the original version of podzilla. It includes several new features, most notably modularity: users can install new applications without recompiling all of podzilla. This version is the only official podzilla build that runs on 5.5G iPods.

See also
Rockbox

References

External links
iPodLinux Project home page
Old project home page

Custom firmware Embedded Linux distributions Free software projects Free software primarily written in assembly language Free software programmed in C Free media players IPod software Platform-specific Linux distributions Linux distributions
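The piezo extraction described under History is, in effect, a one-bit acoustic data channel. The sketch below is purely illustrative of how bytes might be serialized as click patterns and recovered from recorded click timestamps; the framing scheme, slot width, and function names are assumptions invented for this illustration, not a description of Schneider's actual program.

```python
# Illustrative model of moving bytes over a one-bit "click" channel:
# each byte occupies 8 time slots, sent MSB-first; a click in a slot
# means 1, silence means 0. The slot width and framing are assumptions.

SLOT = 0.01  # seconds per bit slot (arbitrary for this sketch)

def byte_to_click_times(value, start):
    """Return timestamps (seconds) of clicks encoding one byte."""
    return [start + i * SLOT
            for i in range(8)
            if (value >> (7 - i)) & 1]

def click_times_to_byte(times, start):
    """Recover one byte from click timestamps within an 8-slot frame."""
    value = 0
    for t in times:
        slot = round((t - start) / SLOT)
        if 0 <= slot < 8:
            value |= 1 << (7 - slot)
    return value

# Round-trip check:
for b in (0x00, 0x5A, 0xFF):
    assert click_times_to_byte(byte_to_click_times(b, 0.0), 0.0) == b
```

At roughly this data rate, a multi-hour run for a bootloader-sized dump is plausible, which is consistent with the 22-hour figure the article reports, though the real timing parameters are not documented here.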
Operating System (OS)
956
BSAVE

BSAVE and BLOAD are commands in many varieties of the BASIC programming language. BSAVE copies RAM to a binary file, and BLOAD copies the contents of the file back to RAM. The term "BSAVE image" can mean any of various raw image formats of video display controllers, or more generally any file containing the raw contents of a section of memory. Some platforms provided a BRUN command that, after loading the file into memory, would immediately attempt to execute it as machine code. There is no file compression, so these files load very quickly and without much programming when displayed in native mode. BSAVE files were in general use as a file format when the IBM PC was introduced, and were likewise in general use on the Apple II in the same period. Although the commands were available on the Commodore PET line, they were removed from the later (and more popular) Commodore 64 and VIC-20 computers. In 1985 the Commodore 128 was released with Commodore BASIC 7.0, which restored the BSAVE and BLOAD commands.

Origin

Some versions of BASIC for home computers in the late 1970s and early 1980s include the command BSAVE (for "Binary Save") and the complementary BLOAD ("Binary Load"). Using the BSAVE command, a block of memory at a given address with a specified length can be written to disk as a file. This file can then be reloaded into memory via BLOAD. Microsoft produced the BASIC interpreters bundled with the Apple II (1977), Commodore PET (1977), and IBM PC (1981), all of which included BSAVE and BLOAD. A BSAVE command is also part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2.

ColorBASIC

On the Color Computer's ColorBASIC, these commands were named SAVEM and LOADM instead, with the M referring to machine code, showing that the primary intent was to load programs rather than data; the use of the B prefix to refer to binary indicates a broader view of the possible uses of the command. ColorBASIC uses a different format than GW-BASIC. LOADM supports multipart content loaded to different places in RAM, which some programs do use, even though SAVEM supports saving only one part. The cassette equivalents were called CLOADM and CSAVEM. In ColorBASIC, the BRUN command is called EXEC instead.

Video images

The BSAVED format is a device-dependent raster image format; the file header sometimes stores information about the display hardware address and the size of the graphics data. The graphics data follows the header directly and is stored as raw data in the format of the native adapter's addressable memory. No additional information, such as screen resolution, color depth, palette information, or bit planes, is stored.

See also
Applesoft BASIC
QuickBASIC
GW-BASIC

References
Microsoft BASIC Manual BSAVE Command
Microsoft BASIC Manual BLOAD Command
Apple II DOS & Commands FAQ
AppleSoft FAQ
Commodore 128 Personal Computer System Guide, Commodore Business Machines, Ltd., 1985
C64 Image Formats Part 1
Pictor PC Paint File Format Summary

External links
How to Save Color Registers After BSAVE of (PICEM) Graphics
Complete Instructions to BLOAD and BSAVE EGA and VGA Screens
How to BLOAD/BSAVE Multiple Screen Pages for EGA Screens 7–10
The Commodore 128: The Most Versatile 8-Bit Computer Ever Made

Articles with example BASIC code Articles with example C code Graphics file formats ASCII art BASIC commands
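The on-disk layout described under Video images is simple enough to parse in a few lines. Below is a minimal sketch assuming the seven-byte header used by GW-BASIC's BSAVE on the IBM PC: a 0xFD marker byte, then load segment, load offset, and data length as little-endian 16-bit words, followed by the raw memory dump. The function name and error handling are illustrative, not part of any standard library.

```python
import struct

def read_bsave(path):
    """Parse a GW-BASIC style BSAVE file.

    Assumed layout: 1 marker byte (0xFD), then three little-endian
    16-bit words (load segment, load offset, data length), followed
    by `length` bytes of raw memory contents.
    """
    with open(path, "rb") as f:
        header = f.read(7)
        if len(header) != 7 or header[0] != 0xFD:
            raise ValueError("not a BSAVE file (missing 0xFD marker)")
        segment, offset, length = struct.unpack("<HHH", header[1:7])
        data = f.read(length)
    return segment, offset, data

# Example: a CGA screen dump BSAVEd from address B800:0000 would come
# back as segment 0xB800, offset 0x0000, and raw video memory that can
# be copied straight back to the adapter, which is why such images
# load so quickly.
```

BLOAD reverses the process: it reads the same header and copies the data back to the stored segment:offset (or to an address given on the BLOAD command line).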
Operating System (OS)
957
On-screen display

An on-screen display (OSD) is an image superimposed on a screen picture, commonly used by modern television sets, VCRs, and DVD players to display information such as volume, channel, and time.

History

In the past, most adjustments on TV sets were performed with analog controls such as potentiometers and switches; such controls were still being used relatively recently in monochrome portable TVs. After remote controls were invented, digital adjustments became common, and they needed an external display based on LEDs, an LCD, or a VFD. Including this display increased manufacturing costs. As electronics became more advanced, it became clear that adding some extra circuitry for an OSD was cheaper than adding a second display device: TV screens had become much bigger and could display much more information than a small second display. OSDs present graphical information superimposed over the picture, which is done by synchronizing reads from the OSD video memory with the TV signal.

Some of the first OSD-equipped televisions were introduced by RCA in the late 1970s, simply displaying the channel number and the time of day at the bottom of the screen. An OSD chip was added to the General Instrument (GI) varactor tuning chip set designed in conjunction with RCA and Telefunken. The original OSD was merely intended to placate users who were faced with a snowy screen during auto tuning, something the original architecture had not treated as an issue until it was first demonstrated. Once an on-screen display could be injected into the picture, a real-time clock (RTC) was added to display the time and date on video terminals, at least as early as 1981 (with greater performance by 1996). In the 1980s, OSD-capable TVs became more common, such as Zenith's "System 3" series.

Akai has been credited with the introduction of the OSD in VCRs in the 1980s, including the introduction of on-screen programming. By the mid-1990s, VCRs with these displays were widely available. This made it possible to reduce the size (and cost) of the VFD or LCD in the VCR. Eventually, as VCRs declined in popularity and prices fell, many manufacturers dropped the internal display completely, relying entirely on the on-screen display. All DVD players also use on-screen displays. Many PAL television sets use the internal Teletext decoder's graphics rendering system to further reduce costs. More recently (as of about 2005), the decline of CRT-based TV sets and the rise of LCD and plasma televisions have seen the use and availability of dedicated OSD devices decline, as it is more cost-effective to integrate OSD functions into the main graphics processor. Modern LCD television monitors usually incorporate only two or three integrated circuits. Examples of integrated circuits that perform dedicated OSD are the MAX7456 and STV5730. Both operate with NTSC or PAL, either mixing with an existing signal or generating their own, and the two have slightly different capabilities. The same job can also be done by a PIC-based video superimposer.

First VCR with on-screen display

Akai produced consumer video cassette recorders (VCRs) during the 1980s. The Akai VS-2 was the first VCR with an on-screen display, originally named the Interactive Monitor System. By displaying the information directly on the television screen, this innovation eliminated the need for the user to be physically near the VCR to program recording, read the tape counter, or perform other common tasks. Within a few years, all competing manufacturers had adopted on-screen display technology in their own products.
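The mixing described under History (reading OSD memory in sync with the TV signal and keying it over the picture) can be modelled in a few lines of software. This is a minimal sketch: the 2D-list frame representation and the use of None as the "transparent" value are conventions invented for this illustration, not anything taken from the MAX7456 or STV5730 datasheets.

```python
# Minimal software model of an OSD mixer: wherever the OSD plane has a
# non-transparent pixel, it replaces the picture pixel, mimicking the
# keying a dedicated OSD chip performs in sync with the video signal.

def mix_osd(frame, osd_plane):
    """Return a new frame with the OSD plane keyed over the picture."""
    return [
        [pic if osd is None else osd  # None marks a transparent OSD pixel
         for pic, osd in zip(frame_row, osd_row)]
        for frame_row, osd_row in zip(frame, osd_plane)
    ]

# Example: a small white "volume bar" over a flat grey picture.
W, H = 8, 4
frame = [[0x808080] * W for _ in range(H)]  # grey picture, packed RGB
osd = [[None] * W for _ in range(H)]        # fully transparent plane
for x in range(6):
    osd[1][x] = 0xFFFFFF                    # six lit bar segments

for row in mix_osd(frame, osd):
    print(" ".join("%06X" % p for p in row))
```

Real OSD hardware does this per pixel clock as the beam scans, which is why the overlay needs no frame buffer for the underlying picture at all.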
Computers

Some computer software also uses OSDs, especially support programs for so-called "enhanced keyboards", which often had additional media keys for actions like skipping through music tracks and adjusting the volume. Their use outside this field is still uncommon. On-screen displays are also used in camcorders, where they can display various information both in the viewfinder and on a TV set the camcorder is connected to. The complexity of the graphics offered by such displays has greatly increased over the years, from simple monochrome images to intricate graphical user interfaces.

Known problems

Several problems exist with regard to on-screen displays. One of them is diagnosis when a television's display system is damaged: without any external display, it is almost impossible (without opening the TV) to determine the source of the error. TV accessories that depend heavily on OSDs, such as VCRs or DVD players, are also difficult to configure without the use of a TV. On older VCRs, it was possible to program recording timers without turning on the TV; a modern VCR requires the user to turn on the TV to do so. Usability is generally also decreased with OSDs, as it is necessary to control a multitude of parameters with a few buttons, where earlier devices offered real analog controls with mechanical feedback. These drawbacks do not outweigh the main advantage of OSDs, namely being more cost-efficient and requiring fewer separate physical tuning controls, which has led to their widespread use.

See also
Digital on-screen graphic
Head-up display, in computing and in video gaming
Television news screen layout
Character generator
Chyron Corporation
Pong

References

Film and video technology
Operating System (OS)
958
Instant-on

In computers, instant-on is the ability to boot nearly instantly, allowing the user to go online or to use a specific application without waiting for a PC's traditional operating system to launch. Instant-on technology is today mostly used on laptops, netbooks, and nettops because it lets the user boot into a single program, such as a movie player or an internet browser, without waiting for the whole operating system to start. There still remain a few true instant-on machines, such as the Atari ST, as described in the Booting article; these machines had complete operating systems resident in ROM, similar to the way the BIOS function is conventionally provided on current computer architectures. The "instant-on" concept as used here results from loading an OS with a small hard drive footprint, such as a legacy DOS system. Latency inherent in mechanical drive performance can also be eliminated by using Live USB or Live SD flash memory to load systems at electronic speeds that are orders of magnitude faster.

List of systems
Acer InstaBoot Netbook (based on Splashtop)
Acer RevoBoot Nettop (based on Splashtop)
Asus Express Gate motherboards, notebooks, Eee Box (nettop), and EeePCs (based on Splashtop)
Canonical product announced in early 2010
Dell Latitude ON, Latitude On Reader (based on Splashtop), Latitude On Flash (based on Splashtop)
Google Chrome OS
HP QuickWeb Probook notebook (based on Splashtop)
HP Instant On Solution Voodoo & Envy notebook (based on Splashtop)
HP Instant Web netbook (based on Splashtop)
Lenovo QuickStart (based on Splashtop)
LG SmartOn (based on Splashtop)
Mandriva InstantOn
MSI Winki
Palm Foleo
Phoenix HyperSpace
Sony Quick Web Access (based on Splashtop)
Splashtop Inc. Splashtop
Xandros Presto

Timeline
In October 2007, ASUS introduced an instant-on capability branded "Express Gate" on select motherboards, using DeviceVM's Splashtop software and dedicated flash memory.
In May 2008, Asus shipped "Express Gate", based on Splashtop, on its notebooks.
In July 2008, HP started shipping Splashtop on its Voodoo notebooks, calling it "Instant On Solution (IOS)".
In October 2008, Lenovo started shipping Splashtop on its netbooks, calling it "QuickStart".
On 13 August 2008, Dell Computer Corporation announced that it would support "instant on" in its Latitude line of laptops, leveraging "a dedicated low-voltage sub-processor and OS" that could enable multi-day battery life and provide "near-instant access to e-mail, calendar, attachments, contacts and the Web without booting into the system's main operating system (OS)"; this OS runs a Linux variant.
In January 2009, LG started shipping Splashtop on its netbooks, calling it "Smart On".
In June 2009, Acer started using Splashtop on its netbooks, calling it "RevoBoot".
In June 2009, Sony started installing an instant-on, browser-only version of Splashtop software on its Vaio NW laptops.
In September 2009, HP announced that it would ship a number of netbooks and commercial notebooks with a "Quick Web" instant-on feature, which also utilizes Splashtop software.
In September 2009, Asus shipped EeePC netbooks with "Express Gate", based on Splashtop.
In October 2009, Samsung partnered with Phoenix Technologies to offer instant-on notebooks and netbooks.
In November 2009, Dell shipped Latitude commercial notebooks with Latitude On Reader and Latitude On Flash features, based on Splashtop.
In November 2009, Google open-sourced an alpha of Chrome OS.
In November 2009, Acer shipped Splashtop on its Aspire One netbooks, calling it "InstaBoot".
In December 2009, Mandriva published an environment called "Mandriva InstantOn".
By the end of 2009, Splashtop had shipped on over 30 million PCs.
In early 2010, Canonical, the sponsor of the Ubuntu operating system, was expected to bring an instant-on offering to market, with information possibly released at the CES show in Las Vegas in January 2010.
In June 2010, HP bought Phoenix's HyperSpace.
In 2010, DeviceVM projected that Splashtop would ship on over 100 million PCs.
In October 2010, Apple introduced the new MacBook Air as a next-generation notebook using flash memory technology that enables instant-on capability.
In February 2011, Splashtop OS was released as a free download.

Pros and cons

An instant-on operating system is like those used by 1980s home computers such as the Commodore 64. This offers many advantages over a standard modern operating system: faster booting; less vulnerability to malware, as the system is mostly read-only; support for diskless computers; a lighter system; and lower power consumption. However, this comes at the price of limited local functionality, with a focus on web and cloud services.

Consumer electronics

In the past, consumer electronics manufacturers would emblazon radios and television sets with "Instant On" or "Instant Play" decals. In series-filament sets, instant-on was accomplished by adding only a silicon diode across the power switch to keep the tube filaments lit at 50% power; the diode was placed such that the typical half-wave rectifier of the day was reverse-biased. Instant-on advantages included near-instant operation of the television or radio and potentially longer vacuum tube life; disadvantages included energy consumption and risk of fire. Most solid-state consumer electronics are inherently instant-on, so the moniker survived into the early solid-state era to differentiate a product from its vacuum-tube-based brethren (with CRTs being a notable exception).

See also
Just enough operating system
Windows Hotstart
Windows Fast Boot

References
General
Dell targets 19-hour laptop at 'digital nomads'
Are "instant on" notebooks the future?

External links
DeviceVM Splashtop instant-on software
HyperSpace instant-on operating environment software
Instant-on PCs set to take off with netbooks
Are Quick-Start Systems Worth It?: Don't Wait For Windows

Booting Netbooks Nettop BIOS
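The "50% power" figure for the diode trick described under Consumer electronics follows from standard half-wave-rectification arithmetic; the short derivation below is a textbook check rather than something taken from the sources above. For a resistive filament of resistance R driven from a mains sine wave of peak voltage V_p, with the diode blocking every other half-cycle:

```latex
P_{\text{full}} = \frac{V_{\text{rms}}^2}{R} = \frac{V_p^2}{2R},
\qquad
P_{\text{half}} = \frac{1}{T}\int_0^{T/2}\frac{V_p^2\sin^2(\omega t)}{R}\,dt
               = \frac{V_p^2}{4R}
               = \frac{P_{\text{full}}}{2}.
```

So the filaments idle at exactly half power: warm enough for near-instant picture and sound when full power is restored, at the cost of the continuous energy draw the article notes.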
Operating System (OS)
959
List of file formats

This is a list of file formats used by computers, organized by type. The filename extension is usually noted in parentheses if it differs from the file format name or abbreviation. Many operating systems do not limit filenames to one extension shorter than 4 characters, as was common with some operating systems that supported the File Allocation Table (FAT) file system. Examples of operating systems that do not impose this limit include Unix-like systems, as well as Microsoft Windows NT, 95-98, and ME, which have no three-character limit on extensions for 32-bit or 64-bit applications on file systems other than pre-Windows 95 and Windows NT 3.5 versions of the FAT file system. Some filenames are given extensions longer than three characters. While MS-DOS and NT always treat the suffix after the last period in a file's name as its extension, in UNIX-like systems the final period does not necessarily mean that the text after it is the file's extension. Some file formats, such as .txt or .text, may be listed multiple times.

Archive and compressed

.?mn – a custom format made by Team Gastereler for making it easy to open Nintendo .arc files on PC; these files have not been publicly released
.?Q? – files that are compressed, often by the SQ program
7z – 7-Zip compressed file
A – a library archive file used for C/C++ code
AAPKG – ArchestrA IDE
AAC – Advanced Audio Coding
ace – ACE compressed file
ALZ – ALZip compressed file
APK – Android package: applications installable on Android; also the package format of the Alpine Linux distribution
APPX – Microsoft Application Package (.appx)
AT3 – Sony's UMD data compression
.bke – BackupEarth.com data compression
ARC – pre-Zip data compression
ARC – Nintendo U8 archive (mostly Yaz0 compressed)
ARJ – ARJ compressed file
ASS (also SAS) – a subtitle file created by Aegisub, a video typesetting application (also a Halo game engine file)
B – (B file) similar to .a, but less compressed
BA – Scifer Archive (.ba), Scifer external archive type
BB – a 3D image file made with the application Artlantis
big – special file compression format used by Electronic Arts to compress the data for many of EA's games
BIN – compressed archive, can be read and used by CD-ROMs and Java, extractable by 7-Zip and WinRAR
bjsn – used to store The Escapists saves on Android
BKF (.bkf) – Microsoft backup created by NTBackup
Blend – a 3D file format used by the animation software Blender
bzip2 (.bz2) – compressed file
BMP – bitmap image file
bld – Skyscraper Simulator building
cab – a cabinet (.cab) file is a library of compressed files stored as one file; cabinet files are used to organize installation files that are copied to the user's system
c4 – JEDMICS image files, a DOD system
cals – JEDMICS image files, a DOD system
xaml – XAML markup file, used in programs like Visual Studio when building applications
CLIPFLAIR (.clipflair, .clipflair.zip) – ClipFlair Studio ClipFlair component saved-state file (contains component options in XML, extra/attached files, and nested components' state in child .clipflair.zip files; activities are also components and can be nested at any depth)
CPT, SEA – Compact Pro (Macintosh)
DAA – closed-format, Windows-only compressed disk image
deb – Debian install package
DMG – an Apple compressed/encrypted format
DDZ – a file which can only be used by the "daydreamer engine" created by "fever-dreamer", a program similar to RAGS; mainly used to make somewhat short games
DN – Adobe Dimension CC file format
DPE – package of AVE documents made with Aquafadas digital publishing tools
.egg – ALZip Egg Edition compressed file
EGT (.egt) – EGT Universal Document, also used to create compressed cabinet files; replaces .ecab
ECAB (.ECAB, .ezip) – EGT Compressed Folder, used in advanced systems to compress entire system folders; replaced by EGT Universal Document
ESD – Electronic Software Distribution, a compressed and encrypted WIM file
ESS (.ess) – EGT SmartSense file, detects files compressed using the EGT compression system
EXE (.exe) – Windows application
Flipchart file (.flipchart) – used in Promethean ActivInspire flipchart software
GBP – two unrelated formats: an archive index file created by Genie Timeline, containing references to the files the user has chosen to back up (openable with Genie-Soft Genie Timeline on Windows); or a Gerber data output file created by printed circuit board (PCB) CAD software, openable with tools such as Autodesk EAGLE, Altium Designer, Viewplot, PCB Elegance, Gerbv, and gEDA
GBS (.gbs, .ggp, .gsc) – OtterUI binary scene file
GHO (.gho, .ghs) – Norton Ghost
GIF (.gif) – Graphics Interchange Format
gzip (.gz) – compressed file
HTML (.html) – HTML code file
IPG (.ipg) – format in which Apple Inc. packages their iPod games; can be extracted through WinRAR
jar – ZIP file with manifest for use with Java applications
JPG – Joint Photographic Experts Group image file
JPEG – Joint Photographic Experts Group image file
LBR (.Lawrence) – Lawrence Compiler Type file
LBR – Library file
LQR – LBR library file compressed by the SQ program
LHA (.lzh) – Lempel, Ziv, Huffman
lzip (.lz) – compressed file
lzo
lzma – Lempel–Ziv–Markov chain algorithm compressed file
LZX
MBW (.mbw) – MBRWizard archive
MHTML – MIME HTML (Hyper-Text Markup Language) code file
MPQ Archives (.mpq) – used by Blizzard Entertainment
BIN (.bin) – MacBinary
NL2PKG – NoLimits 2 package (.nl2pkg)
NTH (.nth) – Nokia theme, used by Nokia Series 40 cellphones
OAR (.oar) – OAR archive
OSK – compressed osu! skin archive
OSR – compressed osu! replay archive
OSZ – compressed osu! beatmap archive
PAK – enhanced type of .ARC archive
PAR (.par, .par2) – Parchive
PAF (.paf) – Portable Application File
PEA (.pea) – PeaZip archive file
PNG – Portable Network Graphics image file
PHP (.php) – PHP code file
PYK (.pyk) – compressed file
PK3 (.pk3) – Quake 3 archive (see note on Doom³)
PK4 (.pk4) – Doom³ archive (opens similarly to a ZIP archive)
PXZ (.pxz) – a compressed layered image file used by the image editing website pixlr.com
py / pyw – Python code file
RAR (.rar) – RAR archive, for multi-file archives (rar to .r01–.r99 to s01 and so on)
RAG, RAGS – game file playable in the RAGS game engine, a free program which allows people both to create and to play games; games created with it have the format "RAG game file"
RaX – archive file created by RaX
RBXL – Roblox Studio place file
RBXLX – Roblox Studio XML place file
RBXM – Roblox Studio script file
RPM – Red Hat package/installer for Fedora, RHEL, and similar systems
sb – Scratch file
sb2 – Scratch 2.0 file
sb3 – Scratch 3.0 file
SEN – Scifer Archive (.sen), Scifer internal archive type
SIT (.sitx) – StuffIt (Macintosh)
SIS/SISX – Symbian application package
SKB – Google SketchUp backup file
SQ (.sq) – Squish compressed archive
SWM – split WIM file, usually found on an OEM recovery partition to store the preinstalled Windows image, and to make recovery backups (to a USB drive) easier (due to FAT32 limitations)
SZS – Nintendo Yaz0 compressed archive
TAR – group of files, packaged as one file
TGZ (.tar.gz) – gzipped tar file
TB (.tb) – Tabbery Virtual Desktop tab file
TIB (.tib) – Acronis True Image backup
UHA – Ultra High Archive compression
UUE (.uue) – unified utility engine, the generic and default format for all things UUe-related
VIV – archive format used to compress data for several video games, including Need For Speed: High Stakes
VOL – video game data package
VSA – Altiris Virtual Software Archive
WAX – Wavexpress, a ZIP alternative optimized for packages containing video, allowing multiple packaged files to be all-or-none delivered with near-instantaneous unpacking via NTFS file system manipulation
WIM – a compressed disk image for installing Windows Vista or higher, Windows Fundamentals for Legacy PCs, or restoring a system image made from Backup and Restore (Windows Vista/7)
XAP – Windows Phone application package
xz – xz compressed file, based on the LZMA/LZMA2 algorithm
Z – Unix compress file
zoo – based on LZW
zip – popular compression format
ZIM – an open file format that stores wiki content for offline usage

Physical recordable media archiving

ISO – the generic format for most optical media, including CD-ROM, DVD-ROM, Blu-ray Disc, HD DVD and UMD
NRG – the proprietary optical media archive format used by Nero applications
IMG – for archiving DOS-formatted floppy disks, larger optical media, and hard disk drives
ADF – Amiga Disk Format, for archiving Amiga floppy disks
ADZ – the gzip-compressed version of ADF
DMS – Disk Masher System, a disk-archiving system native to the Amiga
DSK – for archiving floppy disks from a number of other platforms, including the ZX Spectrum and Amstrad CPC
D64 – an archive of a Commodore 64 floppy disk
SDI – System Deployment Image, used for archiving and providing "virtual disk" functionality
MDS – DAEMON Tools native disc image format, used for making images from optical CD-ROM, DVD-ROM, HD DVD or Blu-ray Disc; it comes together with an MDF file and can be mounted with DAEMON Tools
MDX – newer DAEMON Tools format that allows getting one MDX disc image file instead of two (MDF and MDS)
DMG – Macintosh disk image files
(MPEG-1 is found in a .DAT file on a video CD.)
CDI – DiscJuggler image file
CUE – CDRWrite CUE image file
CIF – Easy CD Creator .cif format
C2D – Roxio-WinOnCD .c2d format
DAA – PowerISO .daa format
B6T – BlindWrite 6 image file
B5T – BlindWrite 5 image file
BWT – BlindWrite 4 image file
FFPPKG – FreeFire profile export package

Other extensions

HTML – Hypertext Markup Language

Computer-aided design

Computer-aided is a prefix for several categories of tools (e.g., design, manufacture, engineering) which assist professionals in their respective fields (e.g., machining, architecture, schematics).

Computer-aided design (CAD)

Computer-aided design (CAD) software assists engineers, architects and other design professionals in project design.

3DXML – Dassault Systemes graphic representation
3MF – Microsoft 3D Manufacturing Format
ACP – VA Software Virtual Architecture CAD file
AMF – Additive Manufacturing File Format
AEC – DataCAD drawing format
AR – Ashlar-Vellum Argon, 3D modeling
ART – ArtCAM model
ASC – BRL-CAD geometry file (old ASCII format)
ASM – Solid Edge assembly, Pro/ENGINEER assembly
BIN, BIM – Data Design System DDS-CAD
BREP – Open CASCADE 3D model (shape)
C3D – C3D Toolkit file format
C3P – Construct3 files
CCC – CopyCAD curves
CCM – CopyCAD model
CCS – CopyCAD session
CAD – CadStd
CATDrawing – CATIA V5 drawing document
CATPart – CATIA V5 part document
CATProduct – CATIA V5 assembly document
CATProcess – CATIA V5 manufacturing document
cgr – CATIA V5 graphic representation file
ckd – KeyCreator CAD modeling
ckt – KeyCreator CAD modeling
CO – Ashlar-Vellum Cobalt, parametric drafting and 3D modeling
DRW – Caddie drawing, early version (prior to Caddie changing to DWG)
DFT – Solid Edge draft
DGN – MicroStation design file
DGK – Delcam geometry
DMT – Delcam machining triangles
DXF – ASCII drawing interchange file format, AutoCAD
DWB – VariCAD drawing file
DWF – Autodesk's Web Design Format; AutoCAD and Revit can publish to this format; similar in concept to PDF files; Autodesk Design Review is the reader
DWG – popular file format for computer-aided drafting applications, notably AutoCAD, Open Design Alliance applications, and Autodesk Inventor drawing files
EASM – SolidWorks eDrawings assembly file
EDRW – eDrawings drawing file
EMB – Wilcom ES Designer embroidery CAD file
EPRT – eDrawings part file
EscPcb – "esCAD pcb" data file by Electro-System (Japan)
EscSch – "esCAD sch" data file by Electro-System (Japan)
ESW – AGTEK format
EXCELLON – Excellon file
EXP – Drawing Express format
F3D – Autodesk Fusion 360 archive file
FCStd – native file format of the FreeCAD CAD/CAM package
FM – FeatureCAM part file
FMZ – FormZ project file
G – BRL-CAD geometry file
GBR – Gerber file
GLM – KernelCAD model
GRB – T-FLEX CAD file
GRI – AppliCad GRIM-In file in readable text form, for importing roof and wall cladding job data generated by business management and accounting systems into the modelling/estimating program
GRO – AppliCad GRIM-Out file in readable text form, for exporting roof and wall cladding job material and labour costing data and material lists generated by the modelling/estimating program to business management and accounting systems
IAM – Autodesk Inventor assembly file
ICD – IronCAD 2D CAD file
IDW – Autodesk Inventor drawing file
IFC – buildingSMART format for sharing AEC and FM data
IGES – Initial Graphics Exchange Specification
Other Extensions
HTML – Hypertext Markup Language
Computer-aided design
Computer-aided is a prefix for several categories of tools (e.g., design, manufacture, engineering) which assist professionals in their respective fields (e.g., machining, architecture, schematics).
Computer-aided design (CAD)
Computer-aided design (CAD) software assists engineers, architects and other design professionals in project design.
3DXML – Dassault Systemes graphic representation 3MF – Microsoft 3D Manufacturing Format ACP – VA Software VA – Virtual Architecture CAD file AMF – Additive Manufacturing File Format AEC – DataCAD drawing format AR – Ashlar-Vellum Argon – 3D modeling ART – ArtCAM model ASC – BRL-CAD Geometry File (old ASCII format) ASM – Solidedge Assembly, Pro/ENGINEER Assembly BIN, BIM – Data Design System DDS-CAD BREP – Open CASCADE 3D model (shape) C3D – C3D Toolkit File Format C3P – Construct3 files CCC – CopyCAD Curves CCM – CopyCAD Model CCS – CopyCAD Session CAD – CadStd CATDrawing – CATIA V5 Drawing document CATPart – CATIA V5 Part document CATProduct – CATIA V5 Assembly document CATProcess – CATIA V5 Manufacturing document cgr – CATIA V5 graphic representation file ckd – KeyCreator CAD modeling ckt – KeyCreator CAD modeling CO – Ashlar-Vellum Cobalt – parametric drafting and 3D modeling DRW – early Caddie drawing format (prior to Caddie changing to DWG) DFT – Solidedge Draft DGN – MicroStation design file DGK – Delcam Geometry DMT – Delcam Machining Triangles DXF – ASCII Drawing Interchange file format, AutoCAD DWB – VariCAD drawing file DWF – Autodesk's Web Design Format; AutoCAD & Revit can publish to this format; similar in concept to PDF files; Autodesk Design Review is the reader DWG – Popular file format for computer-aided drafting applications, notably AutoCAD, Open Design Alliance applications, and Autodesk Inventor drawing files EASM – SolidWorks eDrawings assembly file EDRW – eDrawings drawing file EMB – Wilcom ES Designer embroidery CAD file EPRT – eDrawings part file EscPcb – "esCAD pcb" data file by Electro-System (Japan) EscSch – "esCAD sch" data file by Electro-System (Japan) ESW – AGTEK format EXCELLON – Excellon file EXP – Drawing Express format F3D – Autodesk Fusion 360 archive file FCStd – native file format of the FreeCAD CAD/CAM package FM – FeatureCAM Part File FMZ – FormZ Project file G – BRL-CAD Geometry File GBR – Gerber file GLM – KernelCAD model GRB – T-FLEX CAD File GRI – AppliCad GRIM-In file in readable text form for importing roof and wall cladding job data, generated by business management and accounting systems, into the modelling/estimating program GRO – AppliCad GRIM-Out file in readable text form for exporting roof and wall cladding job data (material and labour costing data, material lists) generated by the modelling/estimating program to business management and accounting systems IAM – Autodesk Inventor Assembly file ICD – IronCAD 2D CAD file IDW – Autodesk Inventor Drawing file IFC – buildingSMART format for sharing AEC and FM data IGES – Initial Graphics Exchange Specification Intergraph Standard File Formats – Intergraph IO – Stud.io 3D model IPN – Autodesk Inventor Presentation file IPT – Autodesk Inventor Part file JT – Jupiter Tessellation MCD – Monu-CAD (monument/headstone drawing file) MDG – Model of Digital Geometric Kernel model – CATIA V4 part document OCD – Orienteering Computer Aided Design (OCAD) file PAR – Solidedge Part PIPE – PIPE-FLO Professional piping system design file PLN – ArchiCad project PRT – NX (formerly Unigraphics), Pro/ENGINEER Part, CADKEY Part PSM – Solidedge Sheet PSMODEL – PowerSHAPE Model PWI – PowerINSPECT File PYT – Pythagoras File SKP – SketchUp Model RLF – ArtCAM Relief RVM – AVEVA PDMS 3D Review model RVT – Autodesk Revit project files RFA – Autodesk Revit family files RXF – AppliCad annotated 3D roof and wall geometry data in readable text form, used to exchange 3D model geometry with other systems such as truss design software S12 – Spirit file, by Softtech SCAD – OpenSCAD 3D part model SCDOC – SpaceClaim 3D Part/Assembly SLDASM – SolidWorks Assembly drawing SLDDRW – SolidWorks 2D drawing SLDPRT – SolidWorks 3D part model dotXSI – for Softimage STEP – Standard for the Exchange of Product model data STL – stereolithography data format used by various CAD systems and stereolithographic printing machines (see the sketch below) STD – Power Vision Plus – electricity meter data (Circutor) TCT – TurboCAD drawing template TCW – TurboCAD for Windows 2D and 3D drawing UNV – I-DEAS (Integrated Design and Engineering Analysis Software) universal file VC6 – Ashlar-Vellum Graphite – 2D and 3D drafting VLM – Ashlar-Vellum Vellum, Vellum 2D, Vellum Draft, Vellum 3D, DrawingBoard VS – Ashlar-Vellum Vellum Solids WRL – similar to STL, but includes color; used by various CAD systems and 3D printing rapid prototyping machines; also used for VRML models on the web X_B – Parasolid binary format X_T – Parasolid XE – Ashlar-Vellum Xenon – for associative 3D modeling ZOFZPROJ – ZofzPCB 3D PCB model, containing mesh, netlist and BOM
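A few of the CAD exchange formats above are plain text; ASCII STL, for instance, is just a list of triangular facets. A minimal sketch that writes a one-triangle STL file (coordinates are arbitrary):

```python
# Each facet: a normal vector and three vertices, written in STL's keyword syntax.
facets = [((0.0, 0.0, 1.0), [(0, 0, 0), (1, 0, 0), (0, 1, 0)])]
with open("triangle.stl", "w") as f:
    f.write("solid example\n")
    for normal, vertices in facets:
        f.write("  facet normal %g %g %g\n" % normal)
        f.write("    outer loop\n")
        for v in vertices:
            f.write("      vertex %g %g %g\n" % v)
        f.write("    endloop\n  endfacet\n")
    f.write("endsolid example\n")
```

Binary STL, which most printers and CAD packages also accept, stores the same facet data as packed 32-bit floats after an 80-byte header.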
Electronic design automation (EDA)
Electronic design automation (EDA), or electronic computer-aided design (ECAD), is specific to the field of electrical engineering.
BRD – Board file for EAGLE Layout Editor, a commercial PCB design tool BSDL – Description language for testing through JTAG CDL – Transistor-level netlist format for IC design CPF – Power-domain specification in system-on-a-chip (SoC) implementation (see also UPF) DEF – Gate-level layout DSPF – Detailed Standard Parasitic Format, analog-level parasitics of interconnections in IC design EDIF – Vendor-neutral gate-level netlist format FSDB – Analog waveform format (see also Waveform viewer) GDSII – Format for PCB and layout of integrated circuits HEX – ASCII-coded binary format for memory dumps (see the sketch below) LEF – Library Exchange Format, physical abstract of cells for IC design LIB – Library modeling (function, timing) format MS12 – NI Multisim file OASIS – Open Artwork System Interchange Standard OpenAccess – Design database format with APIs PSF – Cadence proprietary format to store simulation results/waveforms (2 GB limit) PSFXL – Cadence proprietary format to store simulation results/waveforms SDC – Synopsys Design Constraints, format for synthesis constraints SDF – Standard for gate-level timings SPEF – Standard format for parasitics of interconnections in IC design SPI, CIR – SPICE netlist, device-level netlist and commands for simulation SREC, S19 – S-record, ASCII-coded format for memory dumps SST2 – Cadence proprietary format to store mixed-signal simulation results/waveforms STIL – Standard Test Interface Language, IEEE 1450-1999 standard for test patterns for ICs SV – SystemVerilog source file S*P – Touchstone/EEsof scattering-parameter data file – multi-port blackbox performance, measured or simulated TLF – Contains timing and logical information about a collection of cells (circuit elements) UPF – Standard for power-domain specification in SoC implementation V – Verilog source file VCD – Standard format for digital simulation waveforms VHD, VHDL – VHDL source file WGL – Waveform Generation Language, format for test patterns for ICs
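The HEX entry above usually means Intel HEX, a line-oriented ASCII dump in which each record packs a byte count, a 16-bit address, a record type, data, and a two's-complement checksum. A minimal parsing sketch:

```python
def parse_ihex_record(line: str):
    """Parse one Intel HEX record and verify its checksum."""
    assert line.startswith(":"), "records begin with a colon"
    raw = bytes.fromhex(line[1:].strip())
    count, addr, rectype = raw[0], int.from_bytes(raw[1:3], "big"), raw[3]
    data = raw[4:4 + count]
    assert sum(raw) & 0xFF == 0, "checksum mismatch"
    return addr, rectype, data

# A data record (type 0) carrying bytes 01 02 03 04 at address 0x0100.
print(parse_ihex_record(":0401000001020304F1"))
```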
Test technology
Files output from Automatic Test Equipment or post-processed from such.
Standard Test Data Format
Database
4DB – 4D database Structure file 4DD – 4D database Data file 4DIndy – 4D database Structure Index file 4DIndx – 4D database Data Index file 4DR – 4D database Data resource file (in old 4D versions) ACCDB – Microsoft Database (Microsoft Office Access 2007 and later) ACCDE – Compiled Microsoft Database (Microsoft Office Access 2007 and later) ADT – Sybase Advantage Database Server (ADS) APR – Lotus Approach data entry & reports BOX – Lotus Notes Post Office mail routing database CHML – Krasbit Technologies encrypted database file for 1-click integration between contact management software and the Chameleon line of imaging workflow solutions DAF – Digital Anchor data file DAT – DOS Basic DAT – Intersystems Caché database file DB – Paradox DB – SQLite (see the sketch below) DBF – db/dbase II, III, IV and V, Clipper, Harbour/xHarbour, Fox/FoxPro, Oracle DTA – Sage Sterling database file EGT – EGT Universal Document, used to compress SQL databases to smaller files; may contain the original EGT database style ESS – EGT SmartSense, a database of files and its compression style; specific to EGT SmartSense EAP – Enterprise Architect Project FDB – Firebird Databases FDB – Navision database file FP, FP3, FP5, and FP7 – FileMaker Pro FRM – MySQL table definition GDB – Borland InterBase Databases GTABLE – Google Drive Fusion Table KEXI – Kexi database file (SQLite-based) KEXIC – shortcut to a database connection for a Kexi database on a server KEXIS – shortcut to a Kexi database LDB – Temporary database file, only existing when the database is open LIRS – Layered Integer Storage; stores integers with separator characters such as semicolons to create lists of data MDA – Add-in file for Microsoft Access MDB – Microsoft Access database ADP – Microsoft Access project (used for accessing databases on a server) MDE – Compiled Microsoft Database (Access) MDF – Microsoft SQL Server Database MYD – MySQL MyISAM table data MYI – MySQL MyISAM table index NCF – Lotus Notes configuration file NSF – Lotus Notes database NTF – Lotus Notes database design template NV2 – QW Page NewViews object-oriented accounting database ODB – LibreOffice Base or OpenOffice Base database ORA – Oracle tablespace files sometimes get this extension (also used for configuration files) PCONTACT – WinIM Contact file PDB – Palm OS Database PDI – Portable Database Image PDX – Corel Paradox database management PRC – Palm OS resource database SQL – bundled SQL queries REC – GNU recutils database REL – Sage Retrieve 4GL data file RIN – Sage Retrieve 4GL index file SDB – StarOffice's StarBase SDF – SQL Compact Database file sqlite – SQLite UDL – Universal Data Link waData – Wakanda (software) database Data file waIndx – Wakanda (software) database Index file waModel – Wakanda (software) database Model file waJournal – Wakanda (software) database Journal file WDB – Microsoft Works Database WMDB – Windows Media Database file – the CurrentDatabase_360.wmdb file can contain file name, file properties, music, video, photo and playlist information
Big Data (Distributed)
Avro – Data format appropriate for ingestion of record-based data; its distinguishing characteristic is that the schema is stored alongside the data, enabling schema evolution. Parquet – Columnar data storage, typically used within the Hadoop ecosystem. ORC – Columnar storage similar to Parquet, with different trade-offs in data compression and schema-evolution handling.
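Of the database formats above, SQLite is notable for being a single self-contained file with a documented format, and Python ships a driver for it. A minimal sketch (file and table names are illustrative):

```python
import sqlite3

# Create a single-file SQLite database, insert a row, and read it back.
con = sqlite3.connect("example.db")
con.execute("CREATE TABLE IF NOT EXISTS formats (ext TEXT, description TEXT)")
con.execute("INSERT INTO formats VALUES (?, ?)", ("db", "SQLite database"))
con.commit()
print(con.execute("SELECT ext, description FROM formats").fetchall())
con.close()
```

On disk, such a database starts with the 16-byte header string "SQLite format 3\x00", which is how tools usually recognize the format regardless of extension.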
Desktop publishing
AI – Adobe Illustrator AVE / ZAVE – Aquafadas CDR – CorelDRAW CHP / pub / STY / CAP / CIF / VGR / FRM – Ventura Publisher – Xerox (DOS / GEM) CPT – Corel Photo-Paint DTP – Greenstreet Publisher, GST PressWorks FM – Adobe FrameMaker GDRAW – Google Drive Drawing ILDOC – Broadvision Quicksilver document INDD – Adobe InDesign MCF – FotoInsight Designer PDF – Adobe Acrobat or Adobe Reader PMD – Adobe PageMaker PPP – Serif PagePlus PSD – Adobe Photoshop PUB – Microsoft Publisher QXD – QuarkXPress SLA / SCD – Scribus XCF – File format used by the GIMP, as well as other programs
Document
These files store formatted text and plain text.
0 – Plain Text Document, normally used for licensing 1ST – Plain Text Document, normally preceded by the words "README" (README.1ST) 600 – Plain Text Document, used in UNZIP history log 602 – Text602 document ABW – AbiWord document ACL – MS Word AutoCorrect List AFP – Advanced Function Presentation – IBM AMI – Lotus Ami Pro AmigaGuide – Amiga hypertext document ANS – American National Standards Institute (ANSI) text ASC – ASCII text AWW – Ability Write CCF – Color Chat 1.0 CSV – ASCII text as comma-separated values, used in spreadsheets and database management systems CWK – ClarisWorks/AppleWorks document DBK – DocBook XML sub-format DITA – Darwin Information Typing Architecture document DOC – Microsoft Word document DOCM – Microsoft Word macro-enabled document DOCX – Office Open XML document DOT – Microsoft Word document template DOTX – Office Open XML text document template DWD – DavkaWriter Heb/Eng word processor file EGT – EGT Universal Document EPUB – EPUB open standard for e-books EZW – Reagency Systems easyOFFER document FDX – Final Draft FTM – Fielded Text Meta FTX – Fielded Text (Declared) GDOC – Google Drive Document HTML – HyperText Markup Language (.html, .htm) HWP – Haansoft (Hancom) Hangul Word Processor document HWPML – Haansoft (Hancom) Hangul Word Processor Markup Language document LOG – Text log file LWP – Lotus Word Pro MBP – metadata for Mobipocket documents MD – Markdown text document ME – Plain text document, normally preceded by the word "READ" (READ.ME) MCW – Microsoft Word for Macintosh (versions 4.0–5.1) Mobi – Mobipocket documents NB – Mathematica Notebook nb – Nota Bene document (academic writing software) NBP – Mathematica Player Notebook NEIS – Student Record Writing Program document (South Korea) NT – N-Triples RDF container (.nt) NQ – N-Quads RDF container (.nq) ODM – OpenDocument master document ODOC – Synology Drive Office Document ODT – OpenDocument text document OSHEET – Synology Drive Office Spreadsheet OTT – OpenDocument text document template OMM – OmmWriter text document PAGES – Apple Pages document PAP – Papyrus word processor document PDAX – Portable Document Archive (PDA) document index file PDF – Portable Document Format QUOX – Question Object File Format for Quobject Designer or Quobject Explorer Radix-64 RTF – Rich Text Format document RPT – Crystal Reports SDW – StarWriter text document, used in earlier versions of StarOffice SE – Shuttle Document STW – OpenOffice.org XML (obsolete) text document template SXW – OpenOffice.org XML (obsolete) text document TeX – TeX INFO – Texinfo Troff TXT – ASCII or Unicode plain text file UOF – Uniform Office Format UOML – Unique Object Markup Language VIA – Revoware VIA Document Project File WPD – WordPerfect document WPS – Microsoft Works document WPT – Microsoft Works document template WRD – WordIt! document WRF – ThinkFree Write WRI – Microsoft Write document XHTML (.xhtml, .xht) – eXtensible HyperText Markup Language XML – eXtensible Markup Language XPS – Open XML Paper Specification
Financial records
MYO – MYOB Limited (Windows) file MYOB – MYOB Limited (Mac) file TAX – TurboTax file YNAB – You Need a Budget (YNAB) file
Financial data transfer formats
Interactive Financial Exchange (IFX) – XML-based specification for various forms of financial transactions Open Financial Exchange (.ofx) – open standard supported by CheckFree and Microsoft and partly by Intuit; SGML- and later XML-based QFX – proprietary pay-only format used only by Intuit Quicken Interchange Format (.qif) – open standard formerly supported by Intuit
Font file
ABF – Adobe Binary Screen Font AFM – Adobe Font Metrics BDF – Bitmap Distribution Format BMF – ByteMap Font Format BRFNT – Binary Revolution Font Format FNT – Bitmapped Font – Graphics Environment Manager (GEM) FON – Bitmapped Font – Microsoft Windows MGF – MicroGrafx Font OTF – OpenType Font PCF – Portable Compiled Format PostScript Font – Type 1, Type 2 PFA – Printer Font ASCII PFB – Printer Font Binary – Adobe PFM – Printer Font Metrics – Adobe FOND – Font Description resource – Mac OS SFD – FontForge spline font database SNF – Server Normal Format TDF – TheDraw Font TFM – TeX font metric TTF (.ttf, .ttc) – TrueType Font UFO – Unified Font Object, a cross-platform, cross-application, human-readable, future-proof format for storing font data WOFF – Web Open Font Format
Geographic information system
ASC – ASCII point of interest (POI) text file APR – ESRI ArcView 3.3 and earlier project file DEM – USGS DEM file format E00 – ARC/INFO interchange file format GeoJSON – Geographically located data in object notation GeoTIFF – Geographically located raster data GML – Geography Markup Language file GPX – XML-based interchange format ITN – TomTom Itinerary format MXD – ESRI ArcGIS project file, 8.0 and higher NTF – National Transfer Format file OV2 – TomTom POI overlay file SHP – ESRI shapefile TAB – MapInfo Table file format World TIFF – Geographically located raster data: text file giving corner coordinates, raster cells per unit, and rotation DTED – Digital Terrain Elevation Data KML – Keyhole Markup Language, XML-based
Graphical information organizers
3DT – 3D Topicscape, the database in which the meta-data of a 3D Topicscape is held; it is a form of 3D concept map (like a 3D mind-map) used to organize ideas, information, and computer files ATY – 3D Topicscape file, produced when an association type is exported; used to permit round-trip (export Topicscape, change files and folders as desired, re-import to 3D Topicscape) CAG – Linear Reference System FES – 3D Topicscape file, produced when a fileless occurrence in 3D Topicscape is exported to Windows; used to permit round-trip (export Topicscape, change files and folders as desired, re-import them to 3D Topicscape) MGMF – MindGenius Mind Mapping Software file format MM – FreeMind mind map file (XML) MMP – Mind Manager mind map file TPC – 3D Topicscape file, produced when an inter-Topicscape topic link file is exported to Windows; used to permit round-trip (export Topicscape, change files and folders as desired, re-import to 3D Topicscape)
Graphics
Color palettes
ACT – Adobe Color Table; contains a raw color palette of 256 24-bit RGB color values ASE – Adobe Swatch Exchange, used by Adobe Photoshop, Illustrator, and InDesign GPL – GIMP palette file; uses a text representation of color names and RGB values (see the writing sketch below); various open-source graphical editors can read this format, including GIMP, Inkscape, Krita, KolourPaint, Scribus, CinePaint, and MyPaint PAL – Microsoft RIFF palette file
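The GPL palette entry above is simple enough to generate by hand: a "GIMP Palette" header, optional metadata lines, then one RGB triple per line with an optional name. A minimal writing sketch (the colors are arbitrary):

```python
colors = [(255, 0, 0, "Red"), (0, 255, 0, "Green"), (0, 0, 255, "Blue")]
with open("example.gpl", "w") as f:
    f.write("GIMP Palette\nName: Example\nColumns: 3\n#\n")
    for r, g, b, name in colors:
        f.write(f"{r:3d} {g:3d} {b:3d}\t{name}\n")
```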
Color management
ICC/ICM – Color profile conforming to the specification of the International Color Consortium (ICC).
Raster graphics
Raster or bitmap files store images as a group of pixels.
ART – America Online proprietary format BLP – Blizzard Entertainment proprietary texture format BMP – Microsoft Windows bitmap image BTI – Nintendo proprietary texture format CD5 – Chasys Draw IES image CIT – Intergraph monochrome bitmap format CPT – Corel PHOTO-PAINT image CR2 – Canon camera raw format; photos have this on some Canon cameras if RAW quality is selected in the camera settings CLIP – CLIP STUDIO PAINT format CPL – Windows control panel file DDS – DirectX texture file DIB – Device-Independent Bitmap graphic DjVu – DjVu for scanned documents EGT – EGT Universal Document, used in EGT SmartSense to compress PNG files into an even smaller file Exif – Exchangeable image file format (Exif) is a specification for the image format used by digital cameras GIF – CompuServe's Graphics Interchange Format GRF – Zebra Technologies proprietary format ICNS – format for icons in macOS; contains bitmap images at multiple resolutions and bit depths with an alpha channel ICO – a format used for icons in Microsoft Windows; contains small bitmap images at multiple resolutions and bit depths with 1-bit transparency or an alpha channel IFF (.iff, .ilbm, .lbm) – ILBM JNG – a single-frame MNG using JPEG compression and possibly an alpha channel JPEG, JFIF (.jpg or .jpeg) – Joint Photographic Experts Group; a lossy image format widely used to display photographic images JP2 – JPEG2000 JPS – JPEG Stereo KRA – Krita image file LBM – Deluxe Paint image file MAX – ScanSoft PaperPort document MIFF – ImageMagick's native file format MNG – Multiple-image Network Graphics, the animated version of PNG MSP – a format used by old versions of Microsoft Paint; replaced by BMP in Microsoft Windows 3.0 NITF – a U.S. Government standard commonly used in intelligence systems OTB – Over The Air bitmap, a specification designed by Nokia for black-and-white images for mobile phones PBM – Portable bitmap PC1 – Low-resolution, compressed Degas picture file PC2 – Medium-resolution, compressed Degas picture file PC3 – High-resolution, compressed Degas picture file PCF – Pixel Coordination Format PCX – a lossless format used by ZSoft's PC Paint, popular for a time on DOS systems PDN – Paint.NET image file PGM – Portable graymap PI1 – Low-resolution, uncompressed Degas picture file PI2 – Medium-resolution, uncompressed Degas picture file; also Portrait Innovations encrypted image format PI3 – High-resolution, uncompressed Degas picture file PICT, PCT – Apple Macintosh PICT image PNG – Portable Network Graphics (lossless, recommended for display and editing of graphic images) PNM – Portable anymap graphic bitmap image PNS – PNG Stereo PPM – Portable pixmap (pixel map) image (see the sketch below) PSB – Adobe Photoshop Big image file (for large files) PSD, PDD – Adobe Photoshop document PSP – Paint Shop Pro image PX – Pixel image editor image file PXM – Pixelmator image file PXR – Pixar Image Computer image file QFX – QuickLink Fax image RAW – General term for minimally processed image data (acquired by a digital camera) RLE – a run-length encoded image SCT – Scitex Continuous Tone image file SGI, RGB, INT, BW – Silicon Graphics Image TGA (.tga, .targa, .icb, .vda, .vst, .pix) – Truevision TGA (Targa) image TIFF (.tif or .tiff) – Tagged Image File Format (usually lossless, but many variants exist, including lossy ones) TIFF/EP (.tif or .tiff) – Tag Image File Format / Electronic Photography, ISO 12234-2; tends to be used as a basis for other formats rather than in its own right VTF – Valve Texture Format XBM – X Window System Bitmap XCF – GIMP image (from GIMP's origin at the eXperimental Computing Facility of the University of California) XPM – X Window System Pixmap ZIF – Zoomable/Zoomify Image Format (a web-friendly, TIFF-based, zoomable image format)
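Among the raster formats above, the Netpbm family (PBM, PGM, PPM, PNM) is the easiest to emit without a library; the plain-text PPM variant (P3) is just a header followed by RGB triples. A minimal sketch writing a 2×2 image:

```python
# P3 = plain-text PPM: magic number, width and height, max value, RGB triples.
pixels = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
with open("tiny.ppm", "w") as f:
    f.write("P3\n2 2\n255\n")
    for r, g, b in pixels:
        f.write(f"{r} {g} {b}\n")
```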
Vector graphics
Vector graphics use geometric primitives such as points, lines, curves, and polygons to represent images.
3DV – 3-D wireframe graphics by Oscar Garcia AMF – Additive Manufacturing File Format AWG – Ability Draw AI – Adobe Illustrator Document CGM – Computer Graphics Metafile, an ISO standard CDR – CorelDRAW Document CMX – CorelDRAW vector image DP – Drawing Program file for PERQ DRAWIO – Diagrams.net offline diagram DXF – ASCII Drawing Interchange file format, used in AutoCAD and other CAD programs E2D – 2-dimensional vector graphics used by the editor included in JFire EGT – EGT Universal Document; EGT Vector Draw images are used to draw vector graphics to a website EPS – Encapsulated PostScript FS – FlexiPro file GBR – Gerber file ODG – OpenDocument Drawing MOVIE.BYU RenderMan SVG – Scalable Vector Graphics, employs XML
Scene description languages (3D vector image formats)
STL – stereolithography data format (see STL (file format)) used by various CAD systems and stereolithographic printing machines; see above VRML (.wrl) – Virtual Reality Modeling Language, for the creation of 3D viewable web images X3D – Extensible 3D SXD – OpenOffice.org XML (obsolete) drawing TGAX – Texture format used by Zwift V2D – voucher design used by the voucher management included in JFire VDOC – Vector format used in AnyCut, CutStorm, DrawCut, DragonCut, FutureDRAW, MasterCut, SignMaster, VinylMaster software by Future Corporation VSD – Vector format used by Microsoft Visio VSDX – Vector format used by MS Visio and opened by VSDX Annotator VND – Vision numeric Drawing file used in TypeEdit, Gravostyle WMF – Windows Metafile EMF – Enhanced (Windows) Metafile, an extension to WMF ART – Xara – Drawing (superseded by XAR) XAR – Xara – Drawing
3D graphics
3D graphics file formats store 3D models for use in real-time or non-real-time 3D rendering.
3DMF – QuickDraw 3D Metafile (.3dmf) 3DM – OpenNURBS Initiative 3D Model (used by Rhinoceros 3D) (.3dm) 3MF – Microsoft 3D Manufacturing Format (.3mf) 3DS – legacy 3D Studio Model (.3ds) ABC – Alembic (computer graphics) AC – AC3D Model (.ac) AMF – Additive Manufacturing File Format AN8 – Anim8or Model (.an8) AOI – Art of Illusion Model (.aoi) ASM – PTC Creo assembly (.asm) B3D – Blitz3D Model (.b3d) BLEND – Blender (.blend) BLOCK – Blender encrypted blend files (.block) BMD3 – Nintendo GameCube first-party J3D proprietary model format (.bmd) BDL4 – Nintendo GameCube and Wii first-party J3D proprietary model format (2002, 2006–2010) (.bdl) BRRES – Nintendo Wii first-party proprietary model format 2010+ (.brres) BFRES – Nintendo Wii U and later Switch first-party proprietary model format C4D – Cinema 4D (.c4d) Cal3D – Cal3D (.cal3d) CCP4 – X-ray crystallography voxels (electron density) CFL – Compressed File Library (.cfl) COB – Caligari Object (.cob) CORE3D – Coreona 3D Virtual File (.core3d) CTM – OpenCTM (.ctm) DAE – COLLADA (.dae) DFF – RenderWare binary stream, commonly used by Grand Theft Auto III-era games as well as other RenderWare titles DPM – deepMesh (.dpm) DTS – Torque Game Engine (.dts) EGG – Panda3D Engine FACT – Electric Image (.fac) FBX – Autodesk FBX (.fbx) G – BRL-CAD geometry (.g) GLB – a binary form of glTF required to be loaded in Facebook 3D Posts (.glb) GLM – Ghoul Mesh (.glm) glTF – the JSON-based standard developed by Khronos Group (.gltf) IO – Bricklink Stud.io 2.0 Model File (.io) IOB – Imagine (3D modeling software) (.iob) JAS – Cheetah 3D file (.jas) JMESH – Universal mesh data exchange file based on the JMesh specification (.jmsh for text/JSON based, .bmsh for binary/UBJSON based) LDR – LDraw Model File (.ldr) LWO – Lightwave Object (.lwo) LWS – Lightwave Scene (.lws) LXF – LEGO Digital Designer Model file (.lxf) LXO – Luxology Modo (software) file (.lxo) M3D – Model3D, universal, engine-neutral format (.m3d) MA – Autodesk Maya ASCII File (.ma) MAX – Autodesk 3D Studio Max file (.max) MB – Autodesk Maya Binary File (.mb) MPD – LDraw Multi-Part Document Model File (.mpd) MD2 – Quake 2 model format (.md2) MD3 – Quake 3 model format (.md3) MD5 – Doom 3 model format (.md5) MDX – Blizzard Entertainment's own model format (.mdx) MESH – New York University mesh (.m) MESH – Meshwork Model (.mesh) MM3D – Misfit Model 3d (.mm3d) MPO – Multi-Picture Object – this JPEG standard is used for 3D images, as with the Nintendo 3DS MRC – voxels in cryo-electron microscopy NIF – Gamebryo NetImmerse File (.nif) OBJ – Wavefront .obj file (.obj; see the sketch below) OFF – OFF Object file format (.off) OGEX – Open Game Engine Exchange (OpenGEX) format (.ogex) PLY – Polygon File Format / Stanford Triangle Format (.ply) PRC – Adobe PRC (embedded in PDF files) PRT – PTC Creo part (.prt) POV – POV-Ray document (.pov) R3D – Realsoft 3D (Real-3D) (.r3d) RWX – RenderWare Object (.rwx) SIA – Nevercenter Silo Object (.sia) SIB – Nevercenter Silo Object (.sib) SKP – Google SketchUp file (.skp) SLDASM – SolidWorks Assembly Document (.sldasm) SLDPRT – SolidWorks Part Document (.sldprt) SMD – Valve Studiomdl Data format (.smd) U3D – Universal 3D format (.u3d) USD – Universal Scene Description (.usd) USDA – Universal Scene Description, human-readable text format (.usda) USDC – Universal Scene Description, binary format (.usdc) USDZ – Universal Scene Description zip (.usdz) VIM – Revizto visual information model format (.vimproj) VRML97 – VRML Virtual Reality Modeling Language (.wrl) VUE – Vue scene file (.vue) VWX – Vectorworks (.vwx) WINGS – Wings3D (.wings) W3D – Westwood 3D Model (.w3d) X – DirectX 3D Model (.x) X3D – Extensible 3D (.x3d) Z3D – Zmodeler (.z3d) ZBMX – Mecabricks Blender Add-On (.zbmx)
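Several of the mesh formats above are plain text. Wavefront OBJ, for example, lists vertices as v lines and faces as f lines whose indices are 1-based. A minimal sketch writing a single triangle:

```python
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(1, 2, 3)]  # OBJ face indices are 1-based
with open("triangle.obj", "w") as f:
    for x, y, z in vertices:
        f.write(f"v {x} {y} {z}\n")
    for a, b, c in faces:
        f.write(f"f {a} {b} {c}\n")
```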
Links and shortcuts
Alias (Mac OS) JNLP – Java Network Launching Protocol, an XML file used by Java Web Start for starting Java applets over the Internet LNK – binary-format file shortcut in Microsoft Windows 95 and later APPREF-MS – File shortcut format used by ClickOnce NAL – ZENworks instant shortcut (opens an .EXE that is not on the local C: drive) URL – INI file pointing to a URL bookmark/Internet shortcut in Microsoft Windows WEBLOC – Property list file pointing to a URL bookmark/Internet shortcut in macOS SYM – Symbolic link .desktop – Desktop entry on Linux desktop environments
Mathematical
Harwell-Boeing file format – a format designed to store sparse matrices MML – MathML – Mathematical Markup Language ODF – OpenDocument Math Formula SXM – OpenOffice.org XML (obsolete) Math Formula
Object code, executable files, shared and dynamically linked libraries
.8BF files – plugins for some photo editing programs including Adobe Photoshop, Paint Shop Pro, GIMP and Helicon Filter .a – Unix static library archive; also used for Objective-C native static libraries a.out – (no suffix for executable image, .o for object files, .so for shared object files) classic UNIX object format, now often superseded by ELF APK – Android Application Package APP – a folder found on macOS systems containing program code and resources, appearing as one file BAC – an executable image for the RSTS/E system, created using the BASIC-PLUS COMPILE command BPL – a Win32 PE file created with Borland Delphi or C++Builder containing a package Bundle – a Macintosh plugin created with Xcode or make which holds executable code, data files, and folders for that code .class – compiled bytecode, used in Java COFF (no suffix for executable image, .o for object files) – UNIX Common Object File Format, now often superseded by ELF COM files – commands used in DOS and CP/M DCU – Delphi compiled unit DLL – library used in Windows and OS/2 to store data, resources and code DOL – the format used by the GameCube and Wii, short for Dolphin, which was the codename of the GameCube .EAR – archives of Java enterprise applications ELF – (no suffix for executable image, .o for object files, .so for shared object files) used in many modern Unix and Unix-like systems, including Solaris, other System V Release 4 derivatives, Linux, and BSD (see the detection sketch below) expander (see bundle) DOS executable (.exe – used in DOS) .IPA – Apple iOS application package; another form of ZIP file JEFF – a file format allowing execution directly from static memory .JAR – archives of Java class files .XPI – PKZIP archive that can be run by Mozilla web browsers to install software Mach-O – (no suffix for executable image, .o for object files, .dylib and .bundle for shared object files) Mach-based systems, notably the native format of macOS, iOS, watchOS, and tvOS NetWare Loadable Module (.NLM) – the native 32-bit binaries compiled for Novell's NetWare Operating System (versions 3 and newer) New Executable (.EXE – used in multitasking ("European") MS-DOS 4.0, 16-bit Microsoft Windows, and OS/2) .o – un-linked object files directly from the compiler Portable Executable (.EXE – used in Microsoft Windows and some other systems) Preferred Executable Format – classic Mac OS format for PowerPC applications; compatible with Mac OS X via the Classic environment RLL – used in Microsoft operating systems together with a DLL file to store program resources .s1es – executable used for the S1ES learning system .so – shared library, typically ELF Value Added Process (.VAP) – the native 16-bit binaries compiled for Novell's NetWare Operating System (version 2, NetWare 286, Advanced NetWare, etc.) .WAR – archives of Java Web applications XBE – Xbox executable .XAP – Windows Phone package XCOFF – (no suffix for executable image, .o for object files, .a for shared object files) extended COFF, used in AIX XEX – Xbox 360 executable LIST – variable list
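Executable containers are conventionally identified by their leading magic bytes rather than their extension. A rough detection sketch covering a few of the formats above (the signatures shown, "MZ" for DOS/Windows executables, 0x7F "ELF", the 64-bit little-endian Mach-O magic, and the "#!" interpreter line, are the well-known ones; real tools check far more):

```python
def guess_executable(path: str) -> str:
    """Guess an executable container from its first bytes."""
    with open(path, "rb") as f:
        head = f.read(4)
    if head[:2] == b"MZ":
        return "DOS/Windows executable (MZ; DOS, NE, or PE)"
    if head == b"\x7fELF":
        return "ELF"
    if head == b"\xcf\xfa\xed\xfe":
        return "Mach-O (64-bit, little-endian)"
    if head[:2] == b"#!":
        return "script with interpreter line"
    return "unknown"
```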
Object extensions
.VBX – Visual Basic extensions .OCX – Object Control extensions .TLB – Windows Type Library
Page description language
DVI – Device independent format EGT – EGT Universal Document; can be used to store CSS-type styles (*.egt) PLD PCL – Printer Command Language (Hewlett-Packard) PDF – Portable Document Format PostScript (.ps, .ps.gz) SNP – Microsoft Access Report Snapshot XPS XSL-FO (Formatting Objects)
Configurations, Metadata
CSS – Cascading Style Sheets XSLT, XSL – XML Style Sheet (.xslt, .xsl) TPL – Web template (.tpl)
Personal information manager
MSG – Microsoft Outlook message or other Outlook item ORG – Lotus Organizer PIM package ORG – Emacs Org-mode document (outliner/mind-map, contacts, calendar, email integration) PST, OST – Microsoft Outlook email communication SC2 – Microsoft Schedule+ calendar
Presentation
GSLIDES – Google Drive Presentation KEY, KEYNOTE – Apple Keynote Presentation NB – Mathematica Slideshow NBP – Mathematica Player slideshow ODP – OpenDocument Presentation OTP – OpenDocument Presentation template PEZ – Prezi Desktop Presentation POT – Microsoft PowerPoint template PPS – Microsoft PowerPoint Show PPT – Microsoft PowerPoint Presentation PPTX – Office Open XML Presentation PRZ – Lotus Freelance Graphics SDD – StarOffice's StarImpress SHF – ThinkFree Show SHOW – Haansoft (Hancom) presentation software document SHW – Corel Presentations slide show creation SLP – Logix-4D Manager Show Control Project SSPSS – SongShow Plus Slide Show STI – OpenOffice.org XML (obsolete) Presentation template SXI – OpenOffice.org XML (obsolete) Presentation THMX – Microsoft PowerPoint theme template WATCH – Dataton Watchout Presentation
Project management software
MPP – Microsoft Project
Reference management software
Formats of files used for bibliographic information (citation) management.
bib – BibTeX enl – EndNote ris – Research Information Systems (RIS file format)
Scientific data (data exchange)
FITS (Flexible Image Transport System) – standard data format for astronomy (.fits) Silo – a storage format for visualization developed at Lawrence Livermore National Laboratory SPC – spectroscopic data EAS3 – binary format for structured data EOSSA – Electro-Optic Space Situational Awareness format OST (Open Spatio-Temporal) – extensible, mainly images with related data, or just pure data; meant as an open alternative for microscope images CCP4 – X-ray crystallography voxels (electron density) MRC – voxels in cryo-electron microscopy HITRAN – spectroscopic data with one optical/infrared transition per line in the ASCII file (.hit) .root – hierarchical platform-independent compressed binary format used by ROOT Simple Data Format (SDF) – a platform-independent, precision-preserving binary data I/O format capable of handling large, multi-dimensional arrays MYD – Everfine LEDSpec software file for LED measurements CSDM (Core Scientific Dataset Model) – model for multi-dimensional and correlated datasets from various spectroscopies, diffraction, microscopy, and imaging techniques (.csdf, .csdfe)
Multi-domain
NetCDF – Network common data format HDR, HDF, h4, h5 – Hierarchical Data Format SDXF – Structured Data Exchange Format CDF – Common Data Format CGNS – CFD General Notation System FMF – Full-Metadata Format
Meteorology
GRIB – Grid in Binary, WMO format for weather model data BUFR – WMO format for weather observation data PP – UK Met Office format for weather model data NASA-Ames – simple text format for observation data, first used in aircraft studies of the atmosphere
Chemistry
CML – Chemical Markup Language (CML) (.cml) Chemical table file (CTab) (.mol, .sd, .sdf) Joint Committee on Atomic and Molecular Physical Data (JCAMP) (.dx, .jdx) Simplified molecular input line entry specification (SMILES) (.smi)
Mathematics
graph6, sparse6 – ASCII encoding of adjacency matrices (.g6, .s6)
Biology
Molecular biology and bioinformatics:
AB1 – In DNA sequencing, chromatogram files used by instruments from Applied Biosystems ACE – A sequence assembly format ASN.1 – Abstract Syntax Notation One, an International Organization for Standardization (ISO) data representation format used to achieve interoperability between platforms; NCBI uses ASN.1 for the storage and retrieval of data such as nucleotide and protein sequences, structures, genomes, and PubMed records BAM – Binary Alignment/Map format (compressed SAM format) BCF – Binary compressed VCF format BED – The browser extensible display format, used for describing genes and other features of DNA sequences CAF – Common Assembly Format for sequence assembly CRAM – compressed file format for storing biological sequences aligned to a reference sequence DDBJ – The flatfile format used by the DDBJ to represent database records for nucleotide and peptide sequences from DDBJ databases EMBL – The flatfile format used by the EMBL to represent database records for nucleotide and peptide sequences from EMBL databases FASTA – The FASTA format, for sequence data; sometimes also given as FNA or FAA (FASTA Nucleic Acid or FASTA Amino Acid); see the parsing sketch below FASTQ – The FASTQ format, for sequence data with quality; sometimes also given as QUAL GCPROJ – The Genome Compiler project; advanced format for genetic data to be designed, shared and visualized GenBank – The flatfile format used by the NCBI to represent database records for nucleotide and peptide sequences from the GenBank and RefSeq databases GFF – The General feature format, used to describe genes and other features of DNA, RNA, and protein sequences GTF – The Gene transfer format, used to hold information about gene structure MAF – The Multiple Alignment Format stores multiple alignments for whole-genome to whole-genome comparisons NCBI ASN.1 – Structured ASN.1 format used at the National Center for Biotechnology Information for DNA and protein data NEXUS – The Nexus file encodes mixed information about genetic sequence data in a block-structured format NeXML – XML format for phylogenetic trees NWK – The Newick tree format, a way of representing graph-theoretical trees with edge lengths using parentheses and commas; useful for holding phylogenetic trees PDB – structures of biomolecules deposited in the Protein Data Bank, also used to exchange protein and nucleic acid structures PHD – Phred output, from the base-calling software Phred PLN – Protein Line Notation used in the proteax software specification SAM – Sequence Alignment Map format, in which the results of the 1000 Genomes Project are released SBML – The Systems Biology Markup Language, used to store biochemical network computational models SCF – Staden chromatogram files used to store data from DNA sequencing SFF – Standard Flowgram Format SRA – format used by the National Center for Biotechnology Information Short Read Archive to store high-throughput DNA sequence data Stockholm – The Stockholm format for representing multiple sequence alignments Swiss-Prot – The flatfile format used to represent database records for protein sequences from the Swiss-Prot database VCF – Variant Call Format, a standard created by the 1000 Genomes Project that lists and annotates the entire collection of human variants (with the exception of approximately 1.6 million variants)
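Of the sequence formats above, FASTA is the simplest: a header line beginning with ">" followed by one or more sequence lines. A minimal parsing sketch:

```python
def read_fasta(path: str):
    """Yield (header, sequence) pairs from a FASTA file."""
    header, chunks = None, []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line)
    if header is not None:
        yield header, "".join(chunks)
```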
Biomedical imaging
Digital Imaging and Communications in Medicine (DICOM) (.dcm) Neuroimaging Informatics Technology Initiative (NIfTI): .nii – single-file (combined data and meta-data) style .nii.gz – gzip-compressed, used transparently by some software, notably the FMRIB Software Library (FSL) .gii – single-file (combined data and meta-data) style; NIfTI offspring for brain surface data .img, .hdr – dual-file (separate data and meta-data, respectively) style AFNI data, meta-data (.BRIK, .HEAD) Massachusetts General Hospital imaging format, used by the FreeSurfer brain analysis package: .MGH – uncompressed .MGZ – zip-compressed Analyze data, meta-data (.img, .hdr) Medical Imaging NetCDF (MINC) format, previously based on NetCDF; since version 2.0, based on HDF5 (.mnc)
Biomedical signals (time series)
ACQ – AcqKnowledge format for Windows/PC from Biopac Systems Inc., Goleta, CA, USA ADICHT – LabChart format from ADInstruments Pty Ltd, Bella Vista NSW, Australia BCI2000 – The BCI2000 project, Albany, NY, USA BDF – BioSemi data format from BioSemi B.V., Amsterdam, Netherlands BKR – The EEG data format developed at the University of Technology Graz, Austria CFWB – Chart Data Format from ADInstruments Pty Ltd, Bella Vista NSW, Australia DICOM-Waveform – an extension of DICOM for storing waveform data ecgML – A markup language for electrocardiogram data acquisition and analysis EDF/EDF+ – European Data Format FEF – File Exchange Format for Vital signs, CEN TS 14271 GDF v1.x – The General Data Format for biomedical signals, version 1.x GDF v2.x – The General Data Format for biomedical signals, version 2.x HL7aECG – Health Level 7 v3 annotated ECG MFER – Medical waveform Format Encoding Rules OpenXDF – Open Exchange Data Format from Neurotronics, Inc., Gainesville, FL, USA SCP-ECG – Standard Communication Protocol for Computer-assisted electrocardiography, EN 1064:2007 SIGIF – A digital SIGnal Interchange Format with application in neurophysiology WFDB – Format of PhysioBank XDF – eXtensible Data Format
Other biomedical formats
Health Level 7 (HL7) – a framework for exchange, integration, sharing, and retrieval of health information electronically xDT – a family of data exchange formats for medical records
Biometric formats
CBF – Common Biometric Format, based on CBEFF 2.0 (Common Biometric Exchange Formats Framework) EBF – Extended Biometric Format, based on CBF but with S/MIME encryption support and semantic extensions CBFX – XML Common Biometric Format, based upon XCBF 1.1 (OASIS XML Common Biometric Format) EBFX – XML Extended Biometric Format, based on CBFX but with W3C XML Encryption support and semantic extensions
Scripts
ADB – Ada body ADS – Ada specification AHK – AutoHotkey script file APPLESCRIPT (.applescript) – AppleScript; see SCPT AS – Adobe Flash ActionScript file AU3 – AutoIt version 3 BAT – Batch file BAS – QBasic & QuickBASIC BTM – Batch file CLASS – Compiled Java binary CLJS – ClojureScript CMD – Batch file Coffee – CoffeeScript C – C CPP – C++ CS – C# INO – Arduino sketch (program) EGG – Chicken EGT – EGT Asterisk Application Source File, EGT Universal Document ERB – Embedded Ruby, Ruby on Rails script file GO – Go HTA – HTML Application IBI – Icarus script ICI – ICI IJS – J script .ipynb – IPython Notebook ITCL – Itcl JS – JavaScript and JScript JSFL – Adobe JavaScript language .kt – Kotlin LUA – Lua M – Mathematica package file MRC – mIRC Script NCF – NetWare Command File (scripting for Novell's NetWare OS) NUC – compiled script NUD – C++ external module written in C++ NUT – Squirrel O – Compiled and optimized C/C++ binary pde – Processing (programming language) script PHP – PHP PHP? – PHP (? = version number) PL – Perl PM – Perl module PS1 – Windows PowerShell shell script PS1XML – Windows PowerShell format and type definitions PSC1 – Windows PowerShell console file PSD1 – Windows PowerShell data file PSM1 – Windows PowerShell module file PY – Python PYC – Python byte code file PYO – Python R – R scripts r – REBOL scripts RB – Ruby RDP – RDP connection red – Red scripts RS – Rust (programming language) SB2/SB3 – Scratch SCPT – AppleScript SCPTD – see SCPT SDL – State Description Language SH – Shell script SYJS – SyMAT JavaScript SYPY – SyMAT Python TCL – Tcl TNS – TI-Nspire code/file TS – TypeScript VBS – Visual Basic Script XPL – XProc script/pipeline ebuild – Gentoo Linux's Portage package
Security
Authentication and general encryption formats are listed here.
OpenPGP Message Format – used by Pretty Good Privacy, GNU Privacy Guard, and other OpenPGP software; can contain keys, signed data, or encrypted data; can be binary or text ("ASCII armored")
Certificates and keys
GXK – Galaxkey, an encryption platform for authorized, private and confidential email communication OpenSSH private key (.ssh) – Secure Shell private key; format generated by ssh-keygen or converted from PPK with PuTTYgen OpenSSH public key (.pub) – Secure Shell public key; format generated by ssh-keygen or PuTTYgen PuTTY private key (.ppk) – Secure Shell private key, in the format generated by PuTTYgen instead of the format used by OpenSSH nSign public key (.nSign) – nSign public key in a custom format X.509 Distinguished Encoding Rules (.cer, .crt, .der) – stores certificates PKCS#7 SignedData (.p7b, .p7c) – commonly appears without main data, just certificates or certificate revocation lists (CRLs) PKCS#12 (.p12, .pfx) – can store public certificates and private keys PEM – Privacy-enhanced Electronic Mail: full format not widely used, but often used to store Distinguished Encoding Rules in Base64 format PFX – Microsoft predecessor of PKCS#12
Encrypted files
This section shows file formats for encrypted general data, rather than a specific program's data.
AXX – Encrypted file, created with AxCrypt EEA – An encrypted CAB, ostensibly for protecting email attachments TC – Virtual encrypted disk container, created by TrueCrypt KODE – Encrypted file, created with KodeFile nSignE – An encrypted private key, created by nSign
Password files
Password files (sometimes called keychain files) contain lists of other passwords, usually encrypted.
BPW – Encrypted password file created by Bitser password manager KDB – KeePass 1 database KDBX – KeePass 2 database
Signal data (non-audio)
ACQ – AcqKnowledge format for Windows/PC from Biopac ADICHT – LabChart format from ADInstruments BKR – The EEG data format developed at the University of Technology Graz BDF, CFG – Configuration file for Comtrade data CFWB – Chart Data format from ADInstruments DAT – Raw data file for Comtrade data EDF – European data format FEF – File Exchange Format for Vital signs GDF – General data formats for biomedical signals GMS – Gesture And Motion Signal format IROCK – intelliRock Sensor Data File Format MFER – Medical waveform Format Encoding Rules SAC – Seismic Analysis Code, earthquake seismology data format SCP-ECG – Standard Communication Protocol for Computer-assisted electrocardiography SEED, MSEED – Standard for the Exchange of Earthquake Data, seismological data and sensor metadata SEGY – Reflection seismology data format SIGIF – SIGnal Interchange Format WIN, WIN32 – NIED/ERI seismic data format (.cnt)
Sound and music
Lossless audio
Uncompressed
8SVX – Commodore-Amiga 8-bit sound (usually in an IFF container) 16SVX – Commodore-Amiga 16-bit sound (usually in an IFF container) AIFF, AIF, AIFC – Audio Interchange File Format AU – Simple audio file format introduced by Sun Microsystems BWF – Broadcast Wave Format, an extension of WAVE CDDA – Compact Disc Digital Audio DSF, DFF – Direct Stream Digital audio file, also used in Super Audio CD RAW – Raw samples without any header or sync WAV – Microsoft Wave (see the sketch below)
Compressed
RA, RM – RealAudio format FLAC – Free lossless codec of the Ogg project LA – Lossless audio PAC – LPAC APE – Monkey's Audio OFR, OFS, OFF – OptimFROG RKA – RKAU SHN – Shorten TAK – Tom's Lossless Audio Kompressor THD – Dolby TrueHD TTA – Free lossless audio codec (True Audio) WV – WavPack WMA – Windows Media Audio 9 Lossless
BRSTM – Binary Revolution Stream DTS, DTSHD, DTSMA – DTS (sound system) AST – Nintendo Audio Stream AW – Nintendo Audio Sample used in first-party games PSF – Portable Sound Format, PlayStation variant (originally PlayStation Sound Format)
Lossy audio
AC3 – Usually used for Dolby Digital tracks AMR – For GSM and UMTS based mobile phones MP1 – MPEG Layer 1 MP2 – MPEG Layer 2 MP3 – MPEG Layer 3 SPX – Speex (Ogg project, specialized for voice, low bitrates) GSM – GSM Full Rate, originally developed for use in mobile phones WMA – Windows Media Audio AAC – Advanced Audio Coding (usually in an MPEG-4 container) MPC – Musepack VQF – Yamaha TwinVQ OTS – Audio File (similar to MP3, with more data stored in the file and slightly better compression; designed for use with OtsLabs' OtsAV) SWA – Adobe Shockwave Audio (same compression as MP3 with additional header information specific to Adobe Director) VOX – Dialogic ADPCM low-sample-rate digitized voice VOC – Creative Labs Sound Blaster Creative Voice 8-bit and 16-bit audio; also the output format of RCA audio recorders DWD – DiamondWare Digitized SMP – Turtlebeach SampleVision OGG – Ogg Vorbis
Tracker modules and related
MOD – Soundtracker and Protracker sample and melody modules MT2 – MadTracker 2 module S3M – Scream Tracker 3 module XM – Fast Tracker module IT – Impulse Tracker module NSF – NES Sound Format MID, MIDI – Standard MIDI file; most often just notes and controls but occasionally also sample dumps (.mid, .rmi) FTM – FamiTracker project file BTM – BambooTracker project file
Sheet music files
ABC – ABC Notation sheet music file DARMS – DARMS File Format, also known as the Ford-Columbia Format ETF – Enigma Transportation Format, an abandoned sheet music exchange format GP* – Guitar Pro sheet music and tablature file KERN – Kern File Format sheet music file LY – LilyPond sheet music file MEI – Music Encoding Initiative file format that attempts to encode all musical notations MUS, MUSX – Finale sheet music file MXL, XML – MusicXML standard sheet music exchange format MSCX, MSCZ – MuseScore sheet music file SMDL – Standard Music Description Language sheet music file SIB – Sibelius sheet music file
Other file formats pertaining to audio
NIFF – Notation Interchange File Format PTB – Power Tab Editor tab ASF – Advanced Systems Format CUST – DeliPlayer custom sound format GYM – Genesis YM2612 log JAM – Jam music format MNG – Background music for the Creatures game series, starting from Creatures 2 RMJ – RealJukebox Media used for RealPlayer SID – Sound Interface Device – Commodore 64 instructions to play SID music and sound effects SPC – Super NES sound format TXM – Track ax media VGM – stands for "Video Game Music"; log for several different chips YM – Atari ST/Amstrad CPC YM2149 sound chip format PVD – Portable Voice Document used for Oaisys & Mitel call recordings
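Of the audio formats above, WAV is simple enough to write from the standard library alone. A minimal sketch producing one second of a 440 Hz sine tone as 16-bit mono PCM:

```python
import math
import struct
import wave

rate = 44100  # samples per second
frames = bytearray()
for i in range(rate):  # one second of a 440 Hz sine wave
    sample = int(32767 * math.sin(2 * math.pi * 440 * i / rate))
    frames += struct.pack("<h", sample)  # 16-bit little-endian PCM

with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 2 bytes = 16-bit samples
    w.setframerate(rate)
    w.writeframes(bytes(frames))
```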
Playlist formats
AIMPPL – AIMP playlist format ASX – Advanced Stream Redirector RAM – RealAudio metafile, for RealAudio files only XPL – HDi playlist XSPF – XML Shareable Playlist Format ZPL – Xbox Music (formerly Zune) playlist format from Microsoft M3U – Multimedia playlist file (see the sketch below) PLS – Multimedia playlist, originally developed for use with the museArc
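Most of the playlist formats above are trivial text files. Extended M3U, for instance, is a #EXTM3U header followed by #EXTINF metadata lines and file paths; a minimal sketch (the track data is invented for illustration):

```python
tracks = [(211, "Example Artist - Example Song", "music/example.mp3")]
with open("playlist.m3u", "w") as f:
    f.write("#EXTM3U\n")
    for seconds, title, path in tracks:
        f.write(f"#EXTINF:{seconds},{title}\n{path}\n")
```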
Audio editing and music production
ALS – Ableton Live set ALC – Ableton Live clip ALP – Ableton Live pack ATMOS, AUDIO, METADATA – Dolby Atmos rendering and mastering related files AUP – Audacity project file AUP3 – Audacity 3.0 project file BAND – GarageBand project file CEL – Adobe Audition loop file (Cool Edit Loop) CAU – Caustic project file CPR – Steinberg Cubase project file CWP – Cakewalk Sonar project file DRM – Steinberg Cubase drum file DMKIT – Image-Line's Drumaxx drum kit file ENS – Native Instruments Reaktor ensemble FLP – Image-Line FL Studio project file GRIR – Native Instruments Komplete Guitar Rig impulse response LOGIC – Logic Pro X project file MMP – LMMS project file (alternatively MMPZ for compressed format) MMR – MAGIX Music Maker project file MX6HS – Mixcraft 6 Home Studio project file NPR – Steinberg Nuendo project file OMF, OMFI – Open Media Framework Interchange (OMFI succeeds OMF, the Open Media Framework) PTX – Pro Tools 10 or later project file PTF – Pro Tools 7 up to Pro Tools 9 project file PTS – Legacy Pro Tools project file RIN – Soundways RIN-M file containing sound recording participant credits and song information RPP, RPP-BAK – REAPER project file REAPEAKS – REAPER peak (waveform cache) file SES – Adobe Audition multitrack session file SFK – Sound Forge waveform cache file SFL – Sound Forge sound file SNG – MIDI sequence file (MidiSoft, Korg, etc.) or n-Track Studio project file STF – StudioFactory project file; it contains all necessary patches, samples, tracks and settings to play the file SND – Akai MPC sound file SYN – SynFactory project file; it contains all necessary patches, samples, tracks and settings to play the file UST – UTAU editor sequence, excluding the wave file VCLS – VocaListener project file VPR – Vocaloid 5 editor sequence, excluding the wave file VSQ – Vocaloid 2 editor sequence, excluding the wave file VSQX – Vocaloid 3 & 4 editor sequence, excluding the wave file
Recorded television formats
DVR-MS – Windows XP Media Center Edition's Windows Media Center recorded television format WTV – Windows Media Center recorded television format for Windows Vista and up
Source code for computer programs
ADA, ADB, 2.ADA – Ada (body) source ADS, 1.ADA – Ada (specification) source ASM, S – Assembly language source BAS – BASIC, FreeBASIC, Visual Basic, BASIC-PLUS source, PICAXE basic BB – Blitz Basic Blitz3D BMX – Blitz Basic BlitzMax C – C source CLJ – Clojure source code CLS – Visual Basic class COB, CBL – COBOL source CPP, CC, CXX, C, CBP – C++ source CS – C# source CSPROJ – C# project (Visual Studio .NET) D – D source DBA – DarkBASIC source DBPro123 – DarkBASIC Professional project E – Eiffel source EFS – EGT Forever Source File EGT – EGT Asterisk Source File, could be J, C#, VB.net, EF 2.0 (EGT Forever) EL – Emacs Lisp source FOR, FTN, F, F77, F90 – Fortran source FRM – Visual Basic form FRX – Visual Basic form stash file (binary form file) FTH – Forth source GED – Game Maker Extension Editable file as of version 7.0 GM6 – Game Maker Editable file as of version 6.x GMD – Game Maker Editable file up to version 5.x GMK – Game Maker Editable file as of version 7.0 GML – Game Maker Language script file GO – Go source H – C/C++ header file HPP, HXX – C++ header file HS – Haskell source I – SWIG interface file INC – Turbo Pascal included source JAVA – Java source L – lex source LGT – Logtalk source LISP – Common Lisp source M – Objective-C source M – MATLAB M – Mathematica M4 – m4 source ML – Standard ML and OCaml source MSQR – M² source file, created by Mattia Marziali N – Nemerle source NB – Nuclear Basic source P – Parser source PAS, PP, P – Pascal source (DPR for projects) PHP, PHP3, PHP4, PHP5, PHPS, Phtml – PHP source PIV – Pivot stickfigure animator PL, PM – Perl PLI, PL1 – PL/I PRG – Ashton-Tate dBASE II, III and IV, db, db7, Clipper, Microsoft Fox and FoxPro, Harbour, xHarbour, and Xbase source PRO – IDL POL – Apcera Policy Language doclet PY – Python source R – R source RED – Red source REDS – Red/System source RB – Ruby source RESX – Resource file for .NET applications RC, RC2 – Resource script files to generate resources for .NET applications RKT, RKTL – Racket source SCALA – Scala source SCI, SCE – Scilab SCM – Scheme source SD7 – Seed7 source SKB, SKC – Sage Retrieve 4GL Common Area (Main and Amended backup) SKD – Sage Retrieve 4GL Database SKF, SKG – Sage Retrieve 4GL File Layouts (Main and Amended backup) SKI – Sage Retrieve 4GL Instructions SKK – Sage Retrieve 4GL Report Generator SKM – Sage Retrieve 4GL Menu SKO – Sage Retrieve 4GL Program SKP, SKQ – Sage Retrieve 4GL Print Layouts (Main and Amended backup) SKS, SKT – Sage Retrieve 4GL Screen Layouts (Main and Amended backup) SKZ – Sage Retrieve 4GL Security File SLN – Visual Studio solution SPIN – Spin source (for Parallax Propeller microcontrollers) STK – Stickfigure file for Pivot stickfigure animator SWG – SWIG source code TCL – TCL source code VAP – Visual Studio Analyzer project VB – Visual Basic.NET source VBG – Visual Studio compatible project group VBP, VIP – Visual Basic project VBPROJ – Visual Basic .NET project VCPROJ – Visual C++ project VDPROJ – Visual Studio deployment project XPL – XProc script/pipeline XQ – XQuery file XSL – XSLT stylesheet Y – yacc source
Spreadsheet
123 – Lotus 1-2-3 AB2 – Abykus worksheet AB3 – Abykus workbook AWS – Ability Spreadsheet BCSV – Nintendo proprietary table format CLF – ThinkFree Calc CELL – Haansoft (Hancom) spreadsheet software document CSV – Comma-Separated Values GSHEET – Google Drive Spreadsheet numbers – An Apple Numbers spreadsheet file gnumeric – Gnumeric spreadsheet, a gzipped XML file LCW – Lucid 3-D ODS – OpenDocument spreadsheet OTS – OpenDocument spreadsheet template QPW – Quattro Pro spreadsheet SDC – StarOffice StarCalc spreadsheet SLK – SYLK (SYmbolic LinK) STC – OpenOffice.org XML (obsolete) spreadsheet template SXC – OpenOffice.org XML (obsolete) spreadsheet TAB – tab-delimited columns; also TSV (Tab-Separated Values) TXT – text file VC – VisiCalc WK1 – Lotus 1-2-3 up to version 2.01 WK3 – Lotus 1-2-3 version 3.0 WK4 – Lotus 1-2-3 version 4.0 WKS – Lotus 1-2-3 WKS – Microsoft Works WQ1 – Quattro Pro DOS version XLK – Microsoft Excel worksheet backup XLS – Microsoft Excel workbook (97–2003) XLSB – Microsoft Excel binary workbook XLSM – Microsoft Excel macro-enabled workbook XLSX – Office Open XML workbook XLR – Microsoft Works version 6.0 XLT – Microsoft Excel worksheet template XLTM – Microsoft Excel macro-enabled worksheet template XLW – Microsoft Excel workspace (version 4.0)
Tabulated data
TSV – Tab-separated values CSV – Comma-separated values (see the sketch below) db – databank format; accessible by many econometric applications dif – accessible by many spreadsheet applications
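CSV and TSV differ only in their delimiter, and quoting of embedded delimiters is the usual pitfall, so a real parser is preferable to naive string splitting. A minimal round-trip sketch using Python's csv module:

```python
import csv

rows = [["ext", "description"], ["csv", "comma-separated values"]]
with open("table.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)                  # CSV output
with open("table.tsv", "w", newline="") as f:
    csv.writer(f, delimiter="\t").writerows(rows)  # TSV output
with open("table.csv", newline="") as f:
    print(list(csv.reader(f)))
```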
Video
AAF – mostly intended to hold edit decisions and rendering information, but can also contain compressed media essence 3GP – the most common video format for cell phones GIF – Animated GIF (simple animation; until recently often avoided because of patent problems) ASF – container (enables any form of compression to be used; MPEG-4 is common; video in ASF containers is also called Windows Media Video (WMV)) AVCHD – Advanced Video Codec High Definition AVI – container (a shell, which enables any form of compression to be used) BIK (.bik) – Bink Video file; a video compression system developed by RAD Game Tools BRAW – Blackmagic RAW, a video format used by Blackmagic cameras such as the URSA Mini Pro 12K CAM – aMSN webcam log file COLLAB – Blackboard Collaborate session recording DAT – Video CD MPEG stream; automatically created when video is burned to a Video CD DSH DVR-MS – Windows XP Media Center Edition's Windows Media Center recorded television format FLV – Flash video (encoded to run in a Flash animation) M1V – MPEG-1 video M2V – MPEG-2 video NOA – rare movie format used in some Japanese eroge around 2002 FLA – Adobe Flash (for producing) FLR – text file containing scripts extracted from an SWF by FLARE, a free ActionScript decompiler SOL – Adobe Flash shared object ("Flash cookie") STR – Sony PlayStation video stream M4V – video container file format developed by Apple Matroska (*.mkv) – Matroska is a container format, which enables any video format such as MPEG-4 ASP or AVC to be used along with other content such as subtitles and detailed meta information WRAP – MediaForge (*.wrap) MNG – mainly simple animation containing PNG and JPEG objects, often somewhat more complex than animated GIF QuickTime (.mov) – container which enables any form of compression to be used; the Sorenson codec is the most common; QTCH is the filetype for cached video and audio streams MPEG (.mpeg, .mpg, .mpe) THP – Nintendo proprietary movie/video format MPEG-4 Part 14, shortened "MP4" – multimedia container (most often used for Sony's PlayStation Portable and Apple's iPod) MXF – Material Exchange Format (standardized wrapper format for audio/visual material developed by SMPTE) ROQ – used by Quake 3 NSV – Nullsoft Streaming Video (media container designed for streaming video content over the Internet) Ogg – container, multimedia RM – RealMedia SVI – Samsung video format for portable players SMI – SAMI caption file (HTML-like subtitles for movie files) SMK (.smk) – Smacker video file; a video compression system developed by RAD Game Tools SWF – Adobe Flash (for viewing) WMV – Windows Media Video (see ASF) WTV – Windows Media Center recorded television format for Windows Vista and up YUV – raw video format; resolution (horizontal × vertical) and sample structure 4:2:2 or 4:2:0 must be known explicitly WebM – video file format for web video using HTML5
Video editing, production
BRAW – Blackmagic Design RAW video file name FCP – Final Cut Pro project file MSWMM – Windows Movie Maker project file PPJ, PRPROJ – Adobe Premiere Pro video editing file IMOVIEPROJ – iMovie project file VEG, VEG-BAK – Sony Vegas project file SUF – Sony camera configuration file (setup.suf) produced by XDCAM-EX camcorders WLMP – Windows Live Movie Maker project file KDENLIVE – Kdenlive project file VPJ – VideoPad project file MOTN – Apple Motion project file IMOVIEMOBILE – iMovie project file for iOS users WFP / WVE – Wondershare Filmora project PDS – CyberLink PowerDirector project VPROJ – VSDC Free Video Editor project file
Video game data
List of common file formats of data for video games on systems that support filesystems, most commonly PC games.
Minecraft – files used by Mojang to develop Minecraft MCADDON – format used by the Bedrock Edition of Minecraft for add-ons; resource packs for the game MCFUNCTION – format used by Minecraft for storing functions MCMETA – format used by Minecraft for storing data for customizable texture packs for the game MCPACK – format used by the Bedrock Edition of Minecraft for in-game texture packs; full add-ons for the game MCR – format used by Minecraft for storing data for in-game worlds before version 1.2 MCTEMPLATE – format used by the Bedrock Edition of Minecraft for world templates MCWORLD – format used by the Bedrock Edition of Minecraft for in-game worlds NBS – format used by Note Block Studio, a tool that can be used to make note block songs for Minecraft. TrackMania/Maniaplanet Engine – Formats used by games based on the TrackMania engine. GBX – All user-created content is stored in this file type. REPLAY.GBX – Stores the replay of a race. CHALLENGE.GBX/MAP.GBX – Stores tracks/maps. SYSTEMCONFIG.GBX – Launcher info. TRACKMANIAVEHICLE.GBX – Info about a certain car type. VEHICLETUNINGS.GBX – Vehicle physics. SOLID.GBX – A block's model. ITEM.GBX – Custom Maniaplanet item. BLOCK.GBX – Custom Maniaplanet block. TEXTURE.GBX – Info about a texture that is used in materials. MATERIAL.GBX – Info about a material, such as its surface type, that is used in Solids. TMEDCLASSIC.GBX – Block info. GHOST.GBX – Player ghosts in TrackMania and TrackMania Turbo. CONTROLSTYLE.GBX – Menu files. SCORES.GBX – Stores info about the player's best times. PROFILE.GBX – Stores a player's info such as their login. DDS – Almost every texture in the game uses this format. PAK – Stores environment data such as valid blocks. LOC – A locator; locators allow the game to download content such as car skins from an external server. SCRIPT.TXT – Scripts for Maniaplanet such as menus and game modes. XML – ManiaLinks. Doom engine – Formats used by games based on the Doom engine. DEH – DeHackEd files to mutate the game executable (not officially part of the DOOM engine) DSG – Saved game LMP – A lump is an entry in a DOOM WAD. LMP – Saved demo recording MUS – Music file (usually contained within a WAD file) WAD – Data storage (contains music, maps, and textures) Quake engine – Formats used by games based on the Quake engine. BSP – (for binary space partitioning) compiled map format MAP – Raw map format used by editors like GtkRadiant or QuArK MDL/MD2/MD3/MD5 – Model for an item used in the game PAK/PK2 – Data storage PK3/PK4 – used by the Quake II, Quake III Arena and Quake 4 game engines, respectively, to store game data, textures etc.; they are actually .zip files. .dat – not a specific file type; often a generic extension for "data" files for a variety of applications, sometimes used for general data contained within the .PK3/PK4 files .fontdat – a .dat file used for formatting game fonts .roq – Video format .sav – Savegame format Unreal Engine – Formats used by games based on the Unreal engine.
U – Unreal script format UAX – Animations format for Unreal Engine 2 UMX – Map format for Unreal Tournament UMX – Music format for Unreal Engine 1 UNR – Map format for Unreal UPK – Package format for cooked content in Unreal Engine 3 USX – Sound format for Unreal Engine 1 and Unreal Engine 2 UT2 – Map format for Unreal Tournament 2003 and Unreal Tournament 2004 UT3 – Map format for Unreal Tournament 3 UTX – Texture format for Unreal Engine 1 and Unreal Engine 2 UXX – Cache format; these are files a client downloaded from a server (which can be converted to regular formats) Duke Nukem 3D Engine – Formats used by games based on this engine DMO – Save game GRP – Data storage MAP – Map (usually constructed with BUILD.EXE) Diablo Engine – Formats used by Diablo by Blizzard Entertainment. SV – Save game ITM – Item file Real Virtuality Engine – Formats used by Bohemia Interactive: Operation Flashpoint, ARMA 2, VBS2 SQF – Format used for general editing SQM – Format used for mission files PBO – Binarized file used for compiled models LIP – Format created from WAV files to provide accurate in-game lip-synch for character animations. Source Engine – Formats used by Valve: Half-Life 2, Counter-Strike: Source, Day of Defeat: Source, Half-Life 2: Episode One, Team Fortress 2, Half-Life 2: Episode Two, Portal, Left 4 Dead, Left 4 Dead 2, Alien Swarm, Portal 2, Counter-Strike: Global Offensive, Titanfall, Insurgency, Titanfall 2, Day of Infamy VMF – Valve Hammer Editor raw map file VMX – Valve Hammer Editor backup map file BSP – Source Engine compiled map file MDL – Source Engine model format SMD – Source Engine uncompiled model format PCF – Source Engine particle effect file HL2 – Half-Life 2 save format DEM – Source Engine demo format VPK – Source Engine pack format VTF – Source Engine texture format VMT – Source Engine material format. Pokémon Generation V CGB – Pokémon Black and White / Pokémon Black 2 and White 2 C-Gear skins. Other Formats ARC – used to store New Super Mario Bros. Wii level data B – used for Grand Theft Auto saved game files BOL – used for levels on Poing!PC DBPF – The Sims 2, DBPF, Package DIVA – Project DIVA timings, element coördinates, MP3 references, notes, animation poses and scores. ESM, ESP – Master and plugin data archives for the Creation Engine HAMBU – format used by the Aidan's Funhouse game RGTW for storing map data HE0, HE2, HE4 – HE games file GCF – format used by the Steam content management system for file archives IMG – format used by RenderWare-based Grand Theft Auto games for data storage LOVE – format used by the LOVE2D Engine MAP – format used by Halo: Combat Evolved for archive compression, Doom³, and various other games MCA – format used by Minecraft for storing data for in-game worlds NBT – format used by Minecraft for storing program variables along with their (Java) type identifiers OEC – format used by OE-Cake for scene data storage OSB – osu! storyboard data OSC – osu!stream combined stream data OSF2 – free osu!stream song file OSR – osu! replay data OSU – osu! beatmap data OSZ2 – paid osu!stream song file P3D – format for Panda3D by Disney PLAGUEINC – format used by Plague Inc. for storing custom scenario information POD – format used by Terminal Reality RCT – used for templates and save files in RollerCoaster Tycoon games REP – used by Blizzard Entertainment for scenario replays in StarCraft.
SimCity 4, DBPF (.dat, .SC4Lot, .SC4Model) – All game plugins use this format, commonly with different file extensions SMZIP – ZIP-based package for StepMania songs, themes and announcer packs. SOLITAIRETHEME8 – A solitaire theme for Windows Solitaire USLD – format used by Unison Shift to store level layouts. VVVVVV – format used by VVVVVV CPS – format used by The Powder Toy, Powder Toy save STM – format used by The Powder Toy, Powder Toy stamp PKG – format used by Bungie for the PC beta of Destiny 2, for nearly all the game's assets. CHR – format used by Team Salvato, for the character files of Doki Doki Literature Club! Z5 – format used by Z-machine for story files in interactive fiction. scworld – format used by Survivalcraft to store sandbox worlds. scskin – format used by Survivalcraft to store player skins. scbtex – format used by Survivalcraft to store block textures. prison – format used by Prison Architect to save prisons escape – format used by Prison Architect to save escape attempts Video game storage media List of the most common filename extensions used when a game's ROM image or storage medium is copied from an original read-only memory (ROM) device to external memory such as a hard disk, for backup purposes or for making the game playable with an emulator. In the case of cartridge-based software, if the platform-specific extension is not used, then the filename extensions ".rom" or ".bin" are usually used to clarify that the file contains a copy of the contents of a ROM. ROM, disk or tape images usually do not consist of one file or ROM, but rather an entire file or ROM structure contained within one file on the backup medium. A26 – Atari 2600 (.a26) A52 – Atari 5200 (.a52) A78 – Atari 7800 (.a78) LNX – Atari Lynx (.lnx) JAG, J64 – Atari Jaguar (.jag, .j64) ISO, WBFS, WAD, WDF – Wii and Wii U (.iso, .wbfs, .wad, .wdf) GCM, ISO – GameCube (.gcm, .iso) MIN – Pokémon Mini (.min) NDS – Nintendo DS (.nds) 3DS – Nintendo 3DS (.3ds) CIA – Installation File (.cia) GB – Game Boy (.gb) (this applies to the original Game Boy and the Game Boy Color) GBC – Game Boy Color (.gbc) GBA – Game Boy Advance (.gba) SAV – Game Boy Advance Saved Data Files (.sav) SGM – Visual Boy Advance Save States (.sgm) N64, V64, Z64, U64, USA, JAP, PAL, EUR, BIN – Nintendo 64 (.n64, .v64, .z64, .u64, .usa, .jap, .pal, .eur, .bin) PJ – Project64 Save States (.pj) NES – Nintendo Entertainment System (.nes) FDS – Famicom Disk System (.fds) JST – Jnes Save States (.jst) FC?
– FCEUX Save States (.fc#, where # is any character, usually a number) GG – Game Gear (.gg) SMS – Master System (.sms) SG – SG-1000 (.sg) SMD, BIN – Mega Drive/Genesis (.smd or .bin) 32X – Sega 32X (.32x) SMC, 078, SFC – Super NES (.smc, .078, or .sfc) (.078 is for split ROMs, which are rare) FIG – Super Famicom (Japanese releases rarely use .fig; the above extensions are more common) SRM – Super NES Saved Data Files (.srm) ZST – ZSNES Save States (.zst, .zs1-.zs9, .z10-.z99) FRZ – Snes9X Save States (.frz, .000-.008) PCE – TurboGrafx-16/PC Engine (.pce) NPC, NGP – Neo Geo Pocket (.npc, .ngp) NGC – Neo Geo Pocket Color (.ngc) VB – Virtual Boy (.vb) INT – Intellivision (.int) MIN – Pokémon Mini (.min) VEC – Vectrex (.vec) BIN – Odyssey² (.bin) WS – WonderSwan (.ws) WSC – WonderSwan Color (.wsc) TZX – ZX Spectrum (.tzx) (for exact copies of ZX Spectrum games) TAP – for tape images without copy protection Z80, SNA – (for snapshots of the emulator RAM) DSK – (for disk images) TAP – Commodore 64 (.tap) (for tape images including copy protection) T64 – (for tape images without copy protection, considerably smaller than .tap files) D64 – (for disk images) CRT – (for cartridge images) ADF – Amiga (.adf) (for 880K diskette images) ADZ – gzip-compressed version of the above. DMS – Disk Masher System, previously used as a disk-archiving system native to the Amiga, also supported by emulators. Virtual machines Microsoft Virtual PC, Virtual Server VFD – Virtual Floppy Disk (.vfd) VHD – Virtual Hard Disk (.vhd) VUD – Virtual Undo Disk (.vud) VMC – Virtual Machine Configuration (.vmc) VSV – Virtual Machine Saved State (.vsv) EMC VMware ESX, GSX, Workstation, Player LOG – Virtual Machine Logfile (.log) VMDK, DSK – Virtual Machine Disk (.vmdk, .dsk) NVRAM – Virtual Machine BIOS (.nvram) VMEM – Virtual Machine paging file (.vmem) VMSD – Virtual Machine snapshot metadata (.vmsd) VMSN – Virtual Machine snapshot (.vmsn) VMSS, STD – Virtual Machine suspended state (.vmss, .std) VMTM – Virtual Machine team data (.vmtm) VMX, CFG – Virtual Machine configuration (.vmx, .cfg) VMXF – Virtual Machine team configuration (.vmxf) VirtualBox VDI – VirtualBox Virtual Disk Image (.vdi) VBOX-EXTPACK – VirtualBox extension pack (.vbox-extpack) Parallels Workstation HDD – Virtual Machine hard disk (.hdd) PVS – Virtual Machine preferences/configuration (.pvs) SAV – Virtual Machine saved state (.sav) QEMU COW – copy-on-write QCOW – QEMU copy-on-write QCOW2 – QEMU copy-on-write, version 2 QED – QEMU enhanced disk format Web page Static DTD – Document Type Definition (standard), MUST be public and free HTML (.html, .htm) – HyperText Markup Language XHTML (.xhtml, .xht) – eXtensible HyperText Markup Language MHTML (.mht, .mhtml) – archived HTML that stores all the data of one web page (text, images, etc.) in one big file MAF (.maff) – web archive based on ZIP Dynamically generated ASP (.asp) – Microsoft Active Server Page ASPX (.aspx) – Microsoft ASP.NET Active Server Page ADP – AOLserver Dynamic Page BML (.bml) – Better Markup Language (templating) CFM (.cfm) – ColdFusion CGI (.cgi) iHTML (.ihtml) – Inline HTML JSP (.jsp) – JavaServer Pages Lasso (.las, .lasso, .lassoapp) – a file created or served with the Lasso programming language PL – Perl (.pl) PHP (.php, .php?, .phtml) – ?
is the version number (previously abbreviated Personal Home Page, later changed to PHP: Hypertext Preprocessor) SSI (.shtml) – HTML with Server Side Includes (Apache) SSI (.stm) – HTML with Server Side Includes (Apache) Markup languages and other web standards-based formats Atom (.atom, .xml) – another syndication format. EML (.eml) – format used by several desktop email clients. JSON-LD (.jsonld) – a JSON-based serialization for linked data. KPRX (.kprx) – an XML-based serialization for workflow definitions generated by K2. PS (.ps) – an XML-based serialization for test automation scripts, called PowerScripts, for K2-based applications. Metalink (.metalink, .met) – a format to list metadata about downloads, such as mirrors, checksums, and other information. RSS (.rss, .xml) – syndication format. Markdown (.markdown, .md) – plain-text formatting syntax, which is popularly used to format "readme" files. Shuttle (.se) – another lightweight markup language. Other AXD – cookie extensions found in the temporary Internet folder BDF – Binary Data Format – raw data from recovered blocks of unallocated space on a hard drive CBP – CD Box Labeler Pro, CentraBuilder, Code::Blocks project file, Conlab project CEX – SolidWorks Enterprise PDM vault file COL – Nintendo GameCube proprietary collision file (.col) CREDX – CredX dat file DDB – generating code for Vocaloid singers' voices (see .DDI) DDI – Vocaloid phoneme library (Japanese, English, Korean, Spanish, Chinese, Catalan) DUPX – DuupeCheck database management tool project file FTM – Family Tree Maker data file FTMB – Family Tree Maker backup file GA3 – Graphical Analysis 3 GEDCOM (.ged) – (GEnealogical Data COMmunication) format to exchange genealogy data between different genealogy software HLP – Windows help file IGC – flight tracks downloaded from GPS devices in the FAI's prescribed format INF – similar format to an INI file; used to install device drivers under Windows, inter alia JAM – JAM Message Base Format for BBSes KMC – tests made with KatzReview's MegaCrammer KCL – Nintendo GameCube/Wii proprietary collision file (.kcl) KTR – Hitachi Vantara Pentaho Data Integration/Kettle transformation project file LNK – Microsoft Windows format for hyperlinks to executables LSM – LSMaker script file (program using layered .jpg to create special effects; specifically designed to render lightsabers from the Star Wars universe) (.lsm) NARC – archive format used in Nintendo DS games OER – AU OER Tool, Open Educational Resource editor PA – used to assign sound effects to materials in KCL files (.pa) PIF – used to run MS-DOS programs under Windows POR – so-called "portable" SPSS files, readable by PSPP PXZ – compressed file to exchange media elements with PSALMO RISE – file containing RISE-generated information model evolution SCR – Windows screen saver file TOPC – TopicCrunch SEO project file holding keywords, domain, and search engine settings (ASCII) XLF – Utah State University Extensible LADAR Format XMC – assisted contact lists format, based on XML and used in kindergartens and schools ZED – MyHeritage Family Tree Zone file – a text file containing a DNS zone Cursors ANI – animated cursor CUR – cursor file Smes – Hawk's Dock configuration file Generalized files General data formats These file formats are fairly well defined by long-term use or a general standard, but the content of each file is often highly specific to particular software or has been extended by further standards for specific uses.
Text-based CSV – comma-separated values HTML – hypertext markup language CSS – cascading style sheets INI – a configuration text file whose format is substantially similar between applications JSON – JavaScript Object Notation, an open data format now used by many languages, not just JavaScript TSV – tab-separated values XML – an open data format YAML – an open data format reStructuredText – an open text format for technical documents, used mainly in the Python programming language Markdown (.md) – an open lightweight markup language to create simple but rich text, often used to format README files AsciiDoc – an open human-readable markup document format semantically equivalent to DocBook Generic file extensions These are filename extensions and broad types reused frequently with differing formats or no specific format by different programs. Binary files Bak file (.bak, .bk) – various backup formats: some just copies of data files, some in application-specific data backup formats, some formats for general file backup programs BIN – binary data, often memory dumps of executable code or data to be re-used by the same software that originated it DAT – data file, usually binary data proprietary to the program that created it, or an MPEG-1 stream on a Video CD DSK – file representations of various disk storage images RAW – raw (unprocessed) data Text files configuration file (.cnf, .conf, .cfg) – substantially software-specific logfiles (.log) – usually text, but sometimes binary plain text (.asc or .txt) – human-readable plain text, usually no more specific Partial files Differences and patches diff – text file differences created by the program diff and applied as updates by patch Incomplete transfers !UT (.!ut) – partly complete uTorrent download CRDOWNLOAD (.crdownload) – partly complete Google Chrome download OPDOWNLOAD (.opdownload) – partly complete Opera download PART (.part) – partly complete Mozilla Firefox or Transmission download PARTIAL (.partial) – partly complete Internet Explorer or Microsoft Edge download Temporary files Temporary file (.temp, .tmp, various others) – sometimes in a specific format, but often just raw data in the middle of processing Pseudo-pipeline file – used to simulate a software pipe See also List of filename extensions MIME Content-Type, a standard for referring to file formats List of motion and gesture file formats List of file signatures, or "magic numbers" References External links
Operating System (OS)
960
Dual format Dual format is a technique used to allow the software of two completely different systems to reside on the same disk. The term was used on the Amiga and Atari ST platforms to indicate that the disk could be inserted into either machine and would still boot and run. The technique relies on a special layout of the first track of the disk, which contains an Amiga and an Atari ST boot sector at the same time and fools each operating system into thinking that the track resolves into the format it expects (the checksum arithmetic this requires is sketched after this article). Only a few games used this technique. Three of them were produced by Eclipse Software Design (Stone Age, Monster Business and Lethal Xcess). ST/Amiga Format magazine also used the technique for its coverdisks. Amiga and PC dual-format games also existed, like the budget version of Rick Dangerous. The game 3D Pool even contained a tri-format disk, which held the Amiga, Atari ST and PC versions of the game. Most dual- and tri-format disks were implemented using technology developed by Rob Computing. More recently, "dual format" has come to identify software distributed on a single disk which includes both Windows and Macintosh versions of its installation program. A list of a few of the dual-format floppy disk titles: Action Fighter (Amiga/PC dual-format disk) Lethal Xcess – Wings of Death II (Amiga/Atari ST dual-format disks) Monster Business (Amiga/Atari ST dual-format disk) Populous: The Promised Lands (Amiga/Atari ST dual-format disk) Rick Dangerous Kixx (Amiga/PC dual-format disk) Rick Dangerous 2 Kixx (Amiga/PC dual-format disk) Stone Age (Amiga/Atari ST dual-format disk) Street Fighter (Amiga/PC dual-format disk) StarGlider 2 (Amiga/Atari ST dual-format disk) 3D Pool (Amiga/Atari ST/PC tri-format disk) Stunt Car Racer Kixx (Amiga/PC dual-format disk) Bionic Commando Kixx (Amiga/PC dual-format disk) Carrier Command Kixx (Amiga/PC dual-format disk) Blasteroids Kixx (Amiga/PC dual-format disk) E-Motion Kixx (Amiga/PC dual-format disk) Indiana Jones and the Last Crusade Action Kixx (Amiga/PC dual-format disk) Out Run Kixx (Amiga/PC dual-format disk) World Class Leader Board Kixx (Amiga/PC dual-format disk) International Soccer Challenge Kixx (Amiga/PC dual-format disk) MicroProse Soccer Kixx (Amiga/PC dual-format disk) See also References Amiga Atari ST IBM PC compatibles Macintosh platform Rotating disc computer storage media Software distribution Video game distribution
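The checksum constraints behind the trick can be made concrete with a short, purely illustrative C sketch. It rests on two documented acceptance rules: the Atari ST treats a 512-byte boot sector as executable when the big-endian 16-bit word sum over the sector equals 0x1234, and the Amiga accepts a 1024-byte boot block when its carry-wrapping 32-bit longword sum equals 0xFFFFFFFF. Everything else is an assumption for illustration: the filler offsets ST_FILLER_OFF and AMIGA_FILLER_OFF and the helper make_dual_bootblock are invented, and a real dual-format disk additionally needs boot code and a BIOS parameter block that both machines tolerate.

/* Hypothetical sketch (not taken from any actual dual-format title):
   patch a 1024-byte boot block so that both machines accept it. */
#include <stdint.h>
#include <string.h>

#define ST_FILLER_OFF    510   /* spare word inside the ST sector (assumed) */
#define AMIGA_FILLER_OFF 1020  /* spare longword outside the ST sector (assumed) */

static uint16_t st_word_sum(const uint8_t *b)
{
    uint16_t s = 0;
    for (int i = 0; i < 512; i += 2)
        s = (uint16_t)(s + (((uint16_t)b[i] << 8) | b[i + 1]));
    return s;
}

static uint32_t amiga_long_sum(const uint8_t *b)
{
    uint32_t s = 0;
    for (int i = 0; i < 1024; i += 4) {
        uint32_t d = ((uint32_t)b[i] << 24) | ((uint32_t)b[i + 1] << 16)
                   | ((uint32_t)b[i + 2] << 8) | b[i + 3];
        uint32_t before = s;
        s += d;
        if (s < before)
            s++;               /* 68000 ADDX style: wrap the carry back in */
    }
    return s;
}

void make_dual_bootblock(uint8_t block[1024])
{
    /* Atari ST first: balance the first sector's word sum to 0x1234. */
    block[ST_FILLER_OFF] = block[ST_FILLER_OFF + 1] = 0;
    uint16_t st_fix = (uint16_t)(0x1234 - st_word_sum(block));
    block[ST_FILLER_OFF]     = (uint8_t)(st_fix >> 8);
    block[ST_FILLER_OFF + 1] = (uint8_t)st_fix;

    /* Amiga second: its filler is the last longword, beyond byte 512,
       so this step cannot disturb the ST sum fixed above. With the
       filler zeroed the running sum is s, and s + ~s == 0xFFFFFFFF. */
    memset(block + AMIGA_FILLER_OFF, 0, 4);
    uint32_t am_fix = ~amiga_long_sum(block);
    block[AMIGA_FILLER_OFF]     = (uint8_t)(am_fix >> 24);
    block[AMIGA_FILLER_OFF + 1] = (uint8_t)(am_fix >> 16);
    block[AMIGA_FILLER_OFF + 2] = (uint8_t)(am_fix >> 8);
    block[AMIGA_FILLER_OFF + 3] = (uint8_t)am_fix;
}

Because the assumed Amiga filler lies beyond the first 512 bytes, patching it cannot disturb the already-balanced ST checksum, which is why the two constraints can be satisfied one after the other.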
Operating System (OS)
961
AN/FYQ-93 FYQ-93 was a computer system used from 1983 to 2006 and built for the Joint Surveillance System (JSS) by the Hughes Aircraft Company. The system consisted of a fault-tolerant central computer complex using a two-string concept that interfaced with many display consoles and with external radars to provide a region-sector display of air traffic. This system was composed of a suite of computers and peripheral equipment configured to receive plot data from ground radar systems, perform track processing, and present track data both to weapons controllers and to forward and lateral communications links. The HMD-22 consoles displayed data from various radars including the AN/GSQ-235. The data was routed to the Cheyenne Mountain complex from installations located in the continental United States (CONUS), Canada, Alaska and Hawaii. The need for the FYQ-93 system became apparent in the 1970s when the Semi-Automatic Ground Environment (SAGE) system became technologically obsolete and logistically unsupportable. The FYQ-93 system was conceived and specified in the late 1970s. It was manufactured and delivered during the first half of the 1980s, and by the end of 1984 all nine facilities were in place. Enough of the system was in place in mid-1983 for the SAGE system to officially shut down, and the JSS became the air defense system of the United States and Canada. The large network of military long-range radar sites was closed and a much smaller number (43) of FAA joint-use sites replaced them. The JSS was a joint USAF/FAA radar use program. The ACC portion of the JSS was composed of four CONUS SOCCs equipped with FYQ-93 computers and 47 ground-based FPS-93 search radars. FAA equipment was a mix of Air Route Surveillance Radar (ARSR) 1, 2, and 3 systems. Collocated with most radar sites were UHF ground-air-ground (G/A/G) transmitter/receiver (GATR) facilities. Fourteen sites also had VHF radios. The GATR facility provided radio access to fighters and AWACS aircraft from the SOCCs. The JSS radars sent surveillance data to the SOCCs, which then forwarded tracks of interest to the CONUS ROCC and North American Air Defense Command (NORAD). Radar and track data were sent through landlines as TADIL-B data and through HF radio links as TADIL-A data. Both TADIL links were provided by the Radar Data Information Link (RADIL). CONUS SOCCs communicated with the CONUS ROCC and NORAD by voice and data landline circuits. Internally, a single "string" of the FYQ-93 system included one Hughes H5118ME central computer and two Hughes HMP-1116 peripheral computers. Radar data was input and buffered in one 1116 for orderly transfer to the 5118, which then constructed the "air picture". The second 1116 on the string handled program loading, console commands, and data storage. The output of the string fed another 1116 called a "Display Controller" (DC), which sent data to and received switch actions from the HMD-22 consoles. Typically there were two strings and two DCs processing in parallel, one on standby in case of a malfunction in its counterpart. Either string could feed either DC for further equipment reliability. The software was written in a proprietary version of the programming language JOVIAL termed JSS JOVIAL. The system was updated over time to change tape drives to disk cartridges and single-line printers to multi-line printers. The memory in the H5118ME was expanded at least twice to the system maximum of 512,000 18-bit words.
The H5118E was eventually upgraded to the H5118M computer, which had 1 megabyte of memory and could handle 1.2 million instructions per second, while the original model had 256 kilobytes of memory and executed 150,000 instructions per second. Although the H5118M was part of the NATO Integrated Air Defense System, it is unclear whether JSS received the same upgrades. Within Hughes, the next generation of air defense and air traffic control systems was being developed as JSS was being deployed. The next generation was based on using any computer of a certain processing class to replace the 5118 computer; examples include DEC VAX and Norsk Data systems. This was driven in part by the needs of different sovereign states that wanted their own computers used for their in-country systems, and also by the great miniaturization of computer hardware. The next-generation Hughes systems used 2K x 2K resolution 20" x 20" color raster displays, touch entry, voice synthesis and recognition consoles, dual redundant fiber-optic token ring buses to link all consoles and computers, extensive processing in the consoles including mission processing, and a move to software written in the programming language Ada. The FYQ-93 was part of a long history of developing air defense systems starting in the 1950s. The FYQ-93 was based on the Combat Grande system, which was one of the first systems to extensively use science and engineering principles to develop software. This allowed for extensive re-use and optimization for the needs of each nation state installing and using the Hughes systems. See also Joint Surveillance System NATO Integrated Air Defense System References Computing by computer model Joint Surveillance System radar stations
Operating System (OS)
962
Programmed input–output Programmed input–output (also programmed input/output, programmed I/O, PIO) is a method of data transmission, via input/output (I/O), between a central processing unit (CPU) and a peripheral device, such as a network adapter or a Parallel ATA storage device. Each data item transfer is initiated by an instruction in the program, involving the CPU for every transaction. In contrast, in direct memory access (DMA) operations, the CPU is not involved in the data transfer. The term can refer to either memory-mapped I/O (MMIO) or port-mapped I/O (PMIO). PMIO refers to transfers using a special address space outside of normal memory, usually accessed with dedicated instructions, such as IN and OUT in x86 architectures. MMIO refers to transfers to I/O devices that are mapped into the normal address space available to the program. PMIO was very useful for early microprocessors with small address spaces, since the valuable resource was not consumed by the I/O devices. The best-known example of a PC device that uses programmed I/O is the AT Attachment (ATA) interface and the Serial ATA interface; however, the AT Attachment interface can also be operated in any of several DMA modes. Many older devices in a PC also use PIO, including legacy serial ports, legacy parallel ports when not in ECP mode, keyboard and mouse PS/2 ports, legacy Musical Instrument Digital Interface (MIDI) and joystick ports, the interval timer, and older network interfaces. PIO mode in the ATA interface The PIO interface is grouped into different modes that correspond to different transfer rates. The electrical signaling among the different modes is similar; only the cycle time between transactions is reduced in order to achieve a higher transfer rate. All ATA devices support the slowest mode, Mode 0. By accessing the information registers (using Mode 0) on an ATA drive, the CPU is able to determine the maximum transfer rate for the device and configure the ATA controller for optimal performance. The PIO modes require a great deal of CPU overhead to configure a data transaction and transfer the data (a polled transfer of this kind is sketched at the end of this article). Because of this inefficiency, the DMA (and eventually Ultra Direct Memory Access, UDMA) interface was created to increase performance. The simple digital logic needed to implement a PIO transfer still makes this transfer method useful today, especially if high transfer rates are unneeded, as in embedded systems, or with field-programmable gate array (FPGA) chips, where PIO mode can be used with no significant performance loss. Two additional advanced timing modes have been defined in the CompactFlash specification 2.0: PIO modes 5 and 6. They are specific to CompactFlash. PIO Mode 5 A PIO Mode 5 was proposed with operation at 22 MB/s, but was never implemented on hard disks because CPUs of the time would have been crippled waiting for the hard disk at the proposed PIO 5 timings, and the DMA standard ultimately obviated it. While no hard disk drive was ever manufactured to support this mode, some motherboard manufacturers preemptively provided BIOS support for it. PIO Mode 5 can be used with CompactFlash cards connected to ATA via CF-to-ATA adapters. See also WDMA (computer) – single/multi-word DMA AT Attachment – ATA specification Input/output Interrupt List of device bandwidths CompactFlash References Input/output
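To illustrate the polling behaviour described in the article, the following is a minimal sketch of a single-sector PIO read, assuming a freestanding x86 environment, the legacy primary ATA register block at ports 0x1F0-0x1F7 and the READ SECTORS (0x20) command; drive selection details, timeouts and error handling are omitted. It is a sketch of the general technique, not the code of any particular driver.

#include <stdint.h>

#define ATA_DATA    0x1F0  /* 16-bit data register            */
#define ATA_SECCNT  0x1F2  /* sector count                    */
#define ATA_LBA_LO  0x1F3
#define ATA_LBA_MID 0x1F4
#define ATA_LBA_HI  0x1F5
#define ATA_DRIVE   0x1F6  /* drive/head; 0xE0 = LBA, drive 0 */
#define ATA_CMD     0x1F7  /* command (write) / status (read) */
#define STATUS_BSY  0x80
#define STATUS_DRQ  0x08

static inline uint8_t inb(uint16_t port)
{
    uint8_t v;
    __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

static inline uint16_t inw(uint16_t port)
{
    uint16_t v;
    __asm__ volatile ("inw %1, %0" : "=a"(v) : "Nd"(port));
    return v;
}

static inline void outb(uint16_t port, uint8_t v)
{
    __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(port));
}

/* Read one 512-byte sector at the given 28-bit LBA into buf. */
void pio_read_sector(uint32_t lba, uint16_t *buf)
{
    while (inb(ATA_CMD) & STATUS_BSY)       /* wait until not busy   */
        ;
    outb(ATA_DRIVE, 0xE0 | ((lba >> 24) & 0x0F));
    outb(ATA_SECCNT, 1);
    outb(ATA_LBA_LO,  lba         & 0xFF);
    outb(ATA_LBA_MID, (lba >> 8)  & 0xFF);
    outb(ATA_LBA_HI,  (lba >> 16) & 0xFF);
    outb(ATA_CMD, 0x20);                    /* READ SECTORS          */

    while (!(inb(ATA_CMD) & STATUS_DRQ))    /* poll until data ready */
        ;
    for (int i = 0; i < 256; i++)           /* CPU copies every word */
        buf[i] = inw(ATA_DATA);
}

The two busy-wait loops and the word-by-word copy are exactly the CPU overhead that the DMA and UDMA modes were introduced to eliminate.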
Operating System (OS)
963
Windows service In Windows NT operating systems, a Windows service is a computer program that operates in the background. It is similar in concept to a Unix daemon. A Windows service must conform to the interface rules and protocols of the Service Control Manager, the component responsible for managing Windows services. It is the Services and Controller app, services.exe, that launches all the services and manages their actions, such as start and stop. Windows services can be configured to start when the operating system is started and run in the background as long as Windows is running. Alternatively, they can be started manually or by an event. Windows NT operating systems include numerous services which run in the context of three user accounts: System, Network Service and Local Service. These Windows components are often associated with Host Process for Windows Services. Because Windows services operate in the context of their own dedicated user accounts, they can operate when a user is not logged on. Prior to Windows Vista, services installed as an "interactive service" could interact with the Windows desktop and show a graphical user interface. In Windows Vista, however, interactive services are deprecated and may not operate properly, as a result of Windows Service hardening. Administration Windows administrators can manage services via: The Services snap-in (found under Administrative Tools in Windows Control Panel) Sc.exe Windows PowerShell Services snap-in The Services snap-in, built upon Microsoft Management Console, can connect to the local computer or a remote computer on the network, enabling users to: view a list of installed services along with service names, descriptions and configuration start, stop, pause or restart services specify service parameters when applicable change the startup type. Acceptable startup types include: Automatic: The service starts when the system starts. Automatic (Delayed): The service starts a short while after the system has finished starting up. This option was introduced in Windows Vista in an attempt to reduce the boot-to-desktop time. However, not all services support delayed start. Manual: The service starts only when explicitly summoned. Disabled: The service is disabled. It will not run. change the user account context in which the service operates configure recovery actions that should be taken if a service fails inspect service dependencies, discovering which services or device drivers depend on a given service or upon which services or device drivers a given service depends export the list of services as a text file or as a CSV file Command line The command-line tool to manage Windows services is sc.exe. It is available for all versions of Windows NT. This utility is included with Windows XP and later and also in ReactOS. The sc command's scope of management is restricted to the local computer. However, starting with Windows Server 2003, not only can sc do all that the Services snap-in does, but it can also install and uninstall services. The sc command duplicates some features of the net command. The ReactOS version was developed by Ged Murphy and is licensed under the GPL. Examples The following example enumerates the status of active services and drivers. C:\>sc query The following example displays the status of the Windows Event Log service.
C:\>sc query eventlog PowerShell The Microsoft.PowerShell.Management PowerShell module (included with Windows) has several cmdlets which can be used to manage Windows services: Get-Service New-Service Restart-Service Resume-Service Set-Service Start-Service Stop-Service Suspend-Service Other management tools Windows also includes components that can do a subset of what the snap-in, Sc.exe and PowerShell do. The net command can start, stop, pause or resume a Windows service. In Windows Vista and later, Windows Task Manager can show a list of installed services and start or stop them. MSConfig can enable or disable (see the startup type description above) Windows services. Installation Windows services are installed and removed via *.INF setup scripts by SetupAPI; an installed service can be started immediately following its installation, and a running service can be stopped before its uninstallation. Development Writing native services For a program to run as a Windows service, the program needs to be written to handle service start, stop, and pause messages from the Service Control Manager (SCM) through the System Services API (a minimal skeleton is sketched at the end of this article). SCM is the Windows component responsible for managing service processes. Wrapping applications as a service The Windows Resource Kit for Windows NT 3.51, Windows NT 4.0 and Windows 2000 provides tools to control the use and registration of services: SrvAny.exe acts as a service wrapper to handle the interface expected of a service (e.g. handle service_start and respond sometime later with service_started or service_failed) and allows any executable or script to be configured as a service. Sc.exe allows new services to be installed, started, stopped and uninstalled. See also Windows services Windows Service Hardening svchost.exe Concept Background process Daemon (computing) DOS Protected Mode Services Terminate and stay resident program Device driver Operating system service management Service Control Manager Service Management Facility Service wrapper References Further reading David B. Probert, Windows Service Processes External links Windows Sysinternals: Autoruns for Windows v13.4 – An extremely detailed query of services Service Management With Windows Sc From Command Line – Windows Service Management Tutorial Windows Service Manager Tray Process (computing)
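As a sketch of the native-service interface described in the Development section above: the skeleton below registers its ServiceMain function with the Service Control Manager through StartServiceCtrlDispatcher, reports state transitions with SetServiceStatus, and reacts to the stop control. The service name "SampleSvc" is a placeholder, and installation (for example with sc create) is a separate step; this is a minimal illustration, not a production service.

#include <windows.h>

static SERVICE_STATUS        g_status;
static SERVICE_STATUS_HANDLE g_statusHandle;
static HANDLE                g_stopEvent;

static void report_state(DWORD state)
{
    g_status.dwCurrentState = state;
    SetServiceStatus(g_statusHandle, &g_status);
}

static void WINAPI ctrl_handler(DWORD ctrl)
{
    if (ctrl == SERVICE_CONTROL_STOP) {
        report_state(SERVICE_STOP_PENDING);
        SetEvent(g_stopEvent);               /* wake the worker below */
    }
}

static void WINAPI service_main(DWORD argc, LPTSTR *argv)
{
    (void)argc; (void)argv;
    g_statusHandle = RegisterServiceCtrlHandler(TEXT("SampleSvc"), ctrl_handler);
    g_status.dwServiceType      = SERVICE_WIN32_OWN_PROCESS;
    g_status.dwControlsAccepted = SERVICE_ACCEPT_STOP;
    report_state(SERVICE_START_PENDING);

    g_stopEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    report_state(SERVICE_RUNNING);

    /* The actual background work would happen here; this skeleton
       just waits until the SCM asks the service to stop. */
    WaitForSingleObject(g_stopEvent, INFINITE);
    report_state(SERVICE_STOPPED);
}

int main(void)
{
    SERVICE_TABLE_ENTRY table[] = {
        { TEXT("SampleSvc"), service_main },
        { NULL, NULL }
    };
    /* Connects the process to the SCM; this call blocks for the
       lifetime of the service and fails if run from a console. */
    StartServiceCtrlDispatcher(table);
    return 0;
}

Because StartServiceCtrlDispatcher fails when the program is started from a console rather than by the SCM, a service binary often also offers a normal command-line mode for testing.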
Operating System (OS)
964
Windows Update Windows Update is a Microsoft service for the Windows 9x and Windows NT families of operating systems, which automates downloading and installing Microsoft Windows software updates over the Internet. The service delivers software updates for Windows, as well as the various Microsoft antivirus products, including Windows Defender and Microsoft Security Essentials. Since its inception, Microsoft has introduced two extensions of the service: Microsoft Update and Windows Update for Business. The former expands the core service to include other Microsoft products, such as Microsoft Office and Microsoft Expression Studio. The latter is available to business editions of Windows 10 and permits postponing updates or receiving updates only after they have undergone rigorous testing. As the service has evolved over the years, so has its client software. For a decade, the primary client component of the service was the Windows Update web app, which could only be run on Internet Explorer. Starting with Windows Vista, the primary client component became Windows Update Agent, an integral component of the operating system. The service provides several kinds of updates. Security updates, or critical updates, fix vulnerabilities in Microsoft Windows that could otherwise be exploited. Cumulative updates are updates that bundle multiple updates, both new and previously released. Cumulative updates were introduced with Windows 10 and have been backported to Windows 7 and Windows 8.1. Microsoft routinely releases updates on the second Tuesday of each month (known as Patch Tuesday), but can provide them whenever a new update is urgently required to prevent a newly discovered or prevalent exploit. System administrators can configure Windows Update to install critical updates for Microsoft Windows automatically, so long as the computer has an Internet connection. Clients Windows Update web app Windows Update was introduced as a web app with the launch of Windows 98 and offered additional desktop themes, games, device driver updates, and optional components such as NetMeeting. Windows 95 and Windows NT 4.0 were retroactively given the ability to access the Windows Update website and download updates designed for those operating systems, starting with the release of Internet Explorer 4. The initial focus of Windows Update was free add-ons and new technologies for Windows. Security fixes for Outlook Express, Internet Explorer and other programs appeared later, as did access to beta versions of upcoming Microsoft software, e.g. Internet Explorer 5. Fixes to Windows 98 to resolve the Year 2000 problem were distributed using Windows Update in December 1998. Microsoft attributed the sales success of Windows 98 in part to Windows Update. The Windows Update web app requires either Internet Explorer or a third-party web browser that supports the ActiveX technology. The first version of the web app, version 3, does not send any personally identifiable information to Microsoft. Instead, the app downloads a full list of every available update and chooses which ones to download and install. The list, however, grew so large that the performance impact of processing it became a concern. Arie Slob, writing for the Windows-help.net newsletter in March 2003, noted that the size of the update list had exceeded , which caused delays of more than a minute for dial-up users. Windows Update v4, released in 2001 in conjunction with Windows XP, changed this.
This version of the app makes an inventory of the system's hardware and Microsoft software and sends it to the service, thus offloading the processing burden to Microsoft servers. Critical Update Notification Utility Critical Update Notification Utility (initially Critical Update Notification Tool) is a background process that checks the Windows Update web site on a regular schedule for new updates that have been marked as "Critical". It was released shortly after Windows 98. By default, this check occurs every five minutes, plus when Internet Explorer starts; however, the user can configure the next check to occur only at certain times of the day or on certain days of the week. The tool queries the Microsoft server for a file called "cucif.cab", which contains a list of all the critical updates released for the operating system. The tool then compares this list with the list of installed updates on the local machine and displays an update availability notification. Once the check is executed, any custom schedule defined by the user is reverted to the default. Microsoft stated that this ensured that users received notification of critical updates in a timely manner. An analysis done by security researcher H. D. Moore in early 1999 was critical of this approach, describing it as "horribly inefficient" and susceptible to attacks. In a posting to BugTraq, he explained that, "every single Windows 98 computer that wishes to get an update has to rely on a single host for the security. If that one server got compromised one day, or an attacker cracks the [Microsoft] DNS server again, there could be millions of users installing trojans every hour. The scope of this attack is big enough to attract crackers who actually know what they are doing..." Microsoft continued to promote the tool through 1999 and the first half of 2000. Initial releases of Windows 2000 shipped with the tool. The tool did not support Windows 95 and Windows NT 4.0. Automatic Updates Automatic Updates is the successor to the Critical Update Notification Utility. It was released in 2000, along with Windows Me. It supports Windows 2000 SP3 as well. Unlike its predecessor, Automatic Updates can download and install updates. Instead of the five-minute schedule used by its predecessor, Automatic Updates checks the Windows Update servers once a day. After Windows Me is installed, a notification balloon prompts the user to configure the Automatic Updates client. The user can choose from three notification schemes: being notified before downloading the update, being notified before installing the update, or both. If new updates are ready to be installed, the user may install them before turning off the computer. A shield icon is displayed on the Shutdown button during this time. Windows XP and Windows 2000 SP3 include Background Intelligent Transfer Service, a Windows service for transferring files in the background without user interaction. As a system component, it is capable of monitoring the user's Internet usage and throttling its own bandwidth usage in order to prioritize user-initiated activities. The Automatic Updates client for these operating systems was updated to use this system service. Automatic Updates in Windows XP gained notoriety for repeatedly interrupting users while they worked on their computers.
Every time an update requiring a reboot was installed, Automatic Updates would prompt the user with a dialog box that allowed the user to restart immediately or dismiss the dialog box, which would reappear in ten minutes; a behavior that Jeff Atwood described as "perhaps the naggiest dialog box ever." In 2013, it was observed that shortly after the startup process, Automatic Updates (wuauclt.exe) and Service Host (svchost.exe) in Windows XP would claim 100% of a computer's CPU capacity for extended periods of time (between ten minutes and two hours), making affected computers unusable. According to Woody Leonhard of InfoWorld, early reports of this issue could be seen in Microsoft TechNet forums in late May 2013, although Microsoft first received a large number of complaints about this issue in September 2013. The cause was an exponential algorithm used to evaluate superseded updates, whose number had grown large over the decade following the release of Windows XP. Microsoft's attempts to fix the issue in October, November and December proved futile, causing the issue to be escalated to top priority. Windows Update Agent Starting with Windows Vista and Windows Server 2008, Windows Update Agent replaces both the Windows Update web app and the Automatic Updates client. It is in charge of downloading and installing software updates from Windows Update, as well as from the on-premises servers of Windows Server Update Services or System Center Configuration Manager. Windows Update Agent can be managed through a Control Panel applet, as well as Group Policy, Microsoft Intune and Windows PowerShell. It can also be set to automatically download and install both important and recommended updates. In prior versions of Windows, such updates were only available through the Windows Update web site. Additionally, Windows Update in Windows Vista supports downloading Windows Ultimate Extras, optional software for Windows Vista Ultimate Edition. Unlike Automatic Updates in Windows XP, Windows Update Agent in Windows Vista and Windows 7 allows the user to postpone the mandatory restart (required for the update process to complete) for up to four hours. The revised dialog box that prompts for the restart appears under other windows, instead of on top of them. However, standard user accounts only have 15 minutes to respond to this dialog box. This was changed with Windows 8: users have 3 days (72 hours) before the computer reboots automatically after installing automatic updates that require a reboot. Windows 8 also consolidates the restart requests for non-critical updates into just one per month. Additionally, the login screen notifies users of the restart requirements. Windows Update Agent makes use of the Transactional NTFS feature introduced with Windows Vista to apply updates to Windows system files. This feature helps Windows recover cleanly in the event of an unexpected failure, as file changes are committed atomically. Windows 10 contains major changes to Windows Update Agent operations; it no longer allows the manual, selective installation of updates. All updates, regardless of type (this includes hardware drivers), are downloaded and installed automatically, and users are only given the option to choose whether their system will reboot automatically to install updates when the system is inactive, or be notified to schedule a reboot.
Microsoft offers a diagnostic tool that can be used to hide troublesome device drivers and prevent them from being reinstalled, but only after they have already been installed and then uninstalled, without rebooting the system. Windows Update Agent on Windows 10 supports peer-to-peer distribution of updates; by default, systems' bandwidth is used to distribute previously downloaded updates to other users, in combination with Microsoft servers. Users may optionally change Windows Update to only perform peer-to-peer updates within their local area network. Windows 10 also introduced cumulative updates. For example, if Microsoft released updates KB00001 in July, KB00002 in August, and KB00003 in September, Microsoft would release cumulative update KB00004, which packs KB00001, KB00002, and KB00003 together. Installing KB00004 will also install KB00001, KB00002 and KB00003, avoiding the need for multiple restarts and reducing the number of downloads needed. KB00004 may also include other fixes with their own KB numbers that were not separately released. A disadvantage of cumulative updates is that downloading and installing updates that fix individual problems is no longer possible. KB stands for knowledge base, as in Microsoft Knowledge Base. Windows Update for Business Windows Update for Business is a term for a set of features in the Pro, Enterprise and Education editions of Windows 10, intended to ease the administration of Windows across organizations. It enables IT pros to: Switch between the standard and the deferred release branches of Windows 10. This feature has since been removed as Microsoft retired the deferred branch. Defer automatic installation of ordinary updates for 30 days. Starting with Windows 10 version 20H1, this feature is more difficult to access. Defer automatic installation of Windows upgrades (a.k.a. "feature updates") for 365 days. Starting with Windows 10 version 20H1, these updates are no longer automatically offered. These features were added in Windows 10 version 1511. They are intended for large organizations with many computers, so they can logically group their computers for gradual deployment. Microsoft recommends that a small set of pilot computers receive the updates almost immediately, while the most critical computers receive them only after every other group has done so and their effects have been observed. Other Microsoft update management solutions, such as Windows Server Update Services or System Center Configuration Manager, do not override Windows Update for Business. Rather, they force Windows 10 into the "dual scan mode". This can cause confusion for administrators who do not comprehend the full ramifications of the dual scan mode. Complementary software and services As organizations continued to use more computers, the per-machine Windows Update clients started to become unwieldy and insufficient. In response to organizations' need to deploy updates to many machines, Microsoft introduced Software Update Services (SUS), which was later renamed Windows Server Update Services (WSUS). A component of the Windows Server family of operating systems, WSUS downloads updates for Microsoft products to a server computer on which it is running and redistributes them to the computers within the organization over a local area network (LAN). One of the benefits of this method is a reduction in the consumption of Internet bandwidth, equal to (N-1)×S, where N is the number of computers in the organization and S is the combined size of the updates. For example, an organization with 1,000 computers that each need 200 MB of updates would otherwise pull roughly 200 GB from the Internet; with a WSUS server, a single 200 MB download suffices.
Additionally, WSUS permits administrators to test updates on a small group of test computers before deploying them to all systems, in order to ensure that business continuity is not disrupted by the changes the updates introduce. For very large organizations, multiple WSUS servers can be chained together hierarchically. Only one server in this hierarchy downloads from the Internet. Update packages distributed via the Windows Update service can be individually downloaded from the Microsoft Update Catalog. These updates can be installed on computers without Internet access (e.g. via USB flash drive) or slipstreamed with a Windows installation. In the case of the former, the Windows Update Standalone Installer (wusa.exe) can install these files. In the case of the latter, Microsoft deployment utilities such as DISM, WADK and MDT can consume these packages. Microsoft offers System Center Configuration Manager for very complex deployment and servicing scenarios. The product integrates with all of the aforesaid tools (WSUS, DISM, WADK, MDT) to automate the process. A number of tools have been created by independent software vendors which provide the ability for Windows updates to be automatically downloaded for, or added to, an online or offline system. One common use for offline updates is to ensure a system is fully patched against security vulnerabilities before being connected to the Internet or another network. A second use is that downloads can be very large but dependent on a slow or unreliable network connection, or the same updates may be needed for more than one machine. AutoPatcher, WSUS Offline Update, PortableUpdate, and Windows Updates Downloader are examples of such tools. Service At the beginning of 2005, Windows Update was being accessed by about 150 million people, with about 112 million of those using Automatic Updates. As of 2008, Windows Update had about 500 million clients, processed about 350 million unique scans per day, and maintained an average of 1.5 million simultaneous connections to client machines. On Patch Tuesday, the day Microsoft typically releases new software updates, outbound traffic could exceed 500 gigabits per second. Approximately 90% of all clients used automatic updates to initiate software updates, with the remaining 10% using the Windows Update web site. The web site is built using ASP.NET and processes an average of 90,000 page requests per second. Traditionally, the service provided each patch in its own proprietary archive file. Occasionally, Microsoft released service packs which bundled all updates released over the course of years for a certain product. Starting with Windows 10, however, all patches are delivered in cumulative packages. On 15 August 2016, Microsoft announced that effective October 2016, all future patches to Windows 7 and 8.1 would become cumulative, as with Windows 10. The ability to download and install individual updates would be removed as existing updates were transitioned to this model. This has resulted in increasing download sizes for each monthly update. An analysis done by Computerworld determined that the download size for Windows 7 x64 had increased from 119.4 MB in October 2016 to 203 MB in October 2017. Initially, Microsoft was very vague about specific changes within each cumulative update package. However, since early 2016, Microsoft has begun releasing more detailed information on the specific changes.
In 2011, the update service was decommissioned for Windows 98, 98 SE and ME, and the old updates for those systems were removed from its servers. On 3 August 2020, the update service was decommissioned for Windows 2000, XP, Server 2003 and Vista due to Microsoft discontinuing SHA-1 updates. The old updates are still available on the Microsoft Update Catalog. Microsoft Update At the February 2005 RSA Conference, Microsoft announced the first beta of Microsoft Update, an optional replacement for Windows Update that provides security patches, service packs and other updates for both Windows and other Microsoft software. The initial release in June 2005 provided support for Microsoft Office 2003, Exchange 2003, and SQL Server 2000, running on Windows 2000, XP, and Server 2003. Over time, the list has expanded to include other Microsoft products, such as Windows Live, Windows Defender, Visual Studio, runtimes and redistributables, Zune Software, Virtual PC and Virtual Server, CAPICOM, Microsoft Lync, Microsoft Expression Studio, and other server products. It also offers Silverlight and Windows Media Player as optional downloads if applicable to the operating system. Office Update Office Update is a free online service that allows users to detect and install updates for certain Microsoft Office products. The original update service supported Office 2000, Office XP, Office 2003 and Office 2007. On 1 August 2009, Microsoft decommissioned the Office Update service, merging it with Microsoft Update. Microsoft Update does not support Office 2000. With the introduction of the Office 365 licensing program, however, Microsoft once again activated a separate Office update service to serve Office 365 customers. Owners of perpetual Microsoft Office licenses continue to receive updates through Microsoft Update. References External links Microsoft Update website Microsoft Technical Security Notifications Patch utilities Software update managers Windows components
Operating System (OS)
965
HRS-100 HRS-100, ХРС-100, GVS-100 or ГВС-100 (see Ref. #1, #2, #3 and #4) was a third-generation hybrid computer developed by the Mihajlo Pupin Institute (Serbia, then SFR Yugoslavia) together with engineers from the USSR in the period from 1968 to 1971. Three HRS-100 systems were deployed at the Academy of Sciences of the USSR in Moscow and Novosibirsk (Akademgorodok) in 1971 and 1978. Further production was contemplated for Czechoslovakia and the German Democratic Republic (DDR), but this was not realised. HRS-100 was conceived and developed to study dynamical systems in real and accelerated time scales and to solve efficiently a wide array of scientific tasks at the institutes of the Academy of Sciences of the USSR (in fields such as aerospace, energetics, control engineering, microelectronics, telecommunications, bio-medical investigations and the chemical industry). Overview HRS-100 was composed of: Digital computer: central processor 16 kilowords of 0.9 μs 36-bit magnetic-core primary memory, expandable to 64 kilowords secondary disk storage peripheral devices (teleprinters, punched tape readers/punchers, parallel printers and punched card readers) multiple analog computer modules Interconnection devices multiple analog and digital peripheral devices Central processing unit HRS-100 had a 32-bit TTL MSI processor with the following capabilities: the four basic arithmetic operations implemented in hardware for both fixed-point and floating-point operations Addressing modes: immediate/literal, absolute/direct, relative, unlimited-depth multi-level memory indirect and relative-indirect 7 index registers and dedicated "index arithmetic" hardware 32 interrupt "channels" (10 from within the CPU, 10 from peripherals and 12 from the interconnection devices and analog computer) Primary memory Primary memory was made up of 0.9 μs cycle-time magnetic-core modules. Each 36-bit word is organized as follows: 32 data bits 1 parity bit 3 program-protection bits specifying which program (the operating system and up to 7 running applications) has access Secondary storage Secondary storage was composed of up to 8 CDC 9432D removable-media disk drives. The capacity of one set of disk platters was about 4 million 6-bit words, or 768,000 words of the HRS-100 computer. The total combined capacity of 8 drives is therefore 6,144,000 words. Each disk set comprised 6 platters, of which 10 surfaces were used. Data was organized into 100 cylinders and 16 sectors of 1536 bits (48 HRS-100 words). Average data access time was 100 ms (max. 165 ms). Maximum seek time was 25 ms. Raw sector write speed was 208,333 characters/s. Peripherals Peripherals communicate with the computer using interrupts and the full length of HRS-100 words. Each separate unit has its own controller. The following devices were produced or planned: 5-to-8-channel punched tape reader type PE 1001 (500-1000 characters/s) 5-to-8-channel tape puncher type PE 4060 (150 characters/s) IBM 735 teleprinter (88-character set, 7-bit data + 1 parity bit, printing speed 15 characters/s) fast line printer DP 2440 (up to 700 lines/min, 64-character set, 132 characters per line) standard 80-column punched card reader DP SR300 (reading up to 300 cards/min) Interconnection hardware Interconnection hardware (called simply the "Link") connects the digital and analog components of HRS-100 into a single unified computer.
Peripherals
Peripherals communicated with the computer using interrupts and the full length of HRS-100 words. Each unit had its own controller. The following devices were produced or planned:
5- to 8-channel punched tape reader type PE 1001 (500–1000 characters/s)
5- to 8-channel tape puncher type PE 4060 (150 characters/s)
IBM 735 teleprinter (88-character set, 7-bit data + 1 parity bit, printing speed 15 characters/s)
Fast line printer DP 2440 (up to 700 lines/min, 64-character set, 132 characters per line)
Standard 80-column punched card reader DP SR300 (reading up to 300 cards/min)

Interconnection hardware
The interconnection hardware (called simply the "Link") connects the digital and analog components of HRS-100 into a single unified computer. It comprised:
a control unit for the exchange of logic signals
blocks of A/D and D/A converters
a 16-bit 100 μs clock generator
a conversion channel relay block
a power supply
The Link takes commands from the digital computer and organizes their execution via 2 32-bit data channels, 11 control channels, synchronization signals on 3 channels and 9 interrupt channels. The connection between the digital and analog computers is established through a common control panel and two separate consoles. Digital data is communicated to the analog consoles through 16 control, 16 sensitivity, 16 indicator and 10 functional "lines". Analog-to-digital conversion is performed by a single signed 14-bit 70,000 samples/s A/D converter behind a 32-channel multiplexer. Digital-to-analog conversion is performed by 16 independent signed 14-bit D/A converters with double registers. A typical D/A conversion took 2 μs.

Analog computer
The analog component of the HRS-100 system is composed of up to seven analog machines, all connected to the common control panel. It contains all the elements required to independently solve linear and non-linear differential equations, both directly and iteratively. Units of the analog computer:
linear analog calculation elements
non-linear analog calculation elements
parallel logic elements
electronic potentiometer system
calculation module and parallel logic control system
periodic block control system
address system
measurement system
exchangeable program board (analog and digital)
reference voltage supply
Linear analog computer elements were designed to deliver 0.01% precision in static mode and 0.1% in dynamic mode for signals up to 1 kHz. Non-linear elements were not required to be more precise than 0.1%. The analog component of HRS-100 had its own peripheral units:
multi-channel ultraviolet writer
three-colour oscilloscope
X-Y writer

Development team
HRS-100 was designed and developed by the following team (see Ref. #1, #4, #5 and #6):
Principal science researchers: Prof. Boris Yakovlevich Kogan (Institute of Control Sciences - IPU AN.USSR, Moscow), Petar Vrbavac and Georgi Konstantinov (Mihajlo Pupin Institute, Belgrade).
Chief designers:
Digital part: Svetomir Ojdanić, Dušan Hristović (SFRY), A. Volkov, V. Lisikov (USSR)
Analogue part: B. J. Kogan, N. N. Mihaylov (USSR), Slavoljub Marjanović, Pavle Pejović (SFRY)
Link: Milan Hruška, Čedomir Milenković (SFRY), A. G. Spiro (USSR)
Software: E. A. Trahtengerc, S. J. Vilenkin, V. L. Arlazarov (USSR), Nedeljko Parezanović (SFRY).

See also
History of computer hardware in the SFRY
Mihajlo Pupin Institute
List of Soviet computer systems

Reference literature
HRS-100 (Hardware and Design Principles), pp. 3–52, by Prof. Boris J. Kogan (Ed.), IPU AN.USSR, Moscow, 1974 (in Russian).
HRS-100, Proceedings of the International Congress AICA-1973, Prague, pp. 305–324, 27–31 August 1973.
Analog Computing in the Soviet Union, by D. Abramovitch, IEEE Control Systems Magazine, pp. 52–62, June 2005.
Hybrid Computing System HRS-100, by P. Vrbavac, S. Ojdanic, D. Hristovic, M. Hruska, S. Marjanovic, Proc. of the 6th Int. Symp. on Electronics and Automation, pp. 347–356, Herceg Novi, Yugoslavia, 21–27 June 1971.
Development of the Computing Technology in Serbia (Razvoj Racunarstva u Srbiji), by Dušan Hristović, Phlogiston journal, No. 18/19, pp. 89–105, Museum of Science and Technology (MNT-SANU), Belgrade 2010/2011.
"50 Years of Computing in Serbia" (50 Godina Racunarstva u Srbiji), by D. B. Vujaklija and N. Markovic (Eds.), pp. 37–44, DIS, IMP and PC Press, Belgrade 2011. (In Serbian.)
Mihajlo Pupin Institute
Analog computers
Soviet Union–Yugoslavia relations
One-of-a-kind computers
Soviet computer systems
1960s in Belgrade
1970s in Belgrade
History of software
Software is a set of programmed instructions stored in the memory of stored-program digital computers for execution by the processor. Software is a recent development in human history, and it is fundamental to the Information Age.

Ada Lovelace's program for Charles Babbage's Analytical Engine in the 19th century is often considered the founding work of the discipline, though her efforts remained theoretical only, as the technology of Lovelace and Babbage's day proved insufficient to build the machine. Alan Turing is credited with being the first person to come up with a theory for software in 1935, which led to the two academic fields of computer science and software engineering.

The first generation of software for early stored-program digital computers in the late 1940s had its instructions written directly in binary code, generally written for mainframe computers. Later, the development of modern programming languages alongside the advancement of the home computer would greatly widen the scope and breadth of available software, beginning with assembly language, and continuing on through functional programming and object-oriented programming paradigms.

Before stored-program digital computers

Origins of computer science
Computing as a concept goes back to ancient times, with devices such as the abacus, the Antikythera mechanism, and Al-Jazari's programmable castle clock. However, these devices were pure hardware and had no software: their computing powers were directly tied to their specific form and engineering. Software requires the concept of a general-purpose processor (what is now described as a Turing machine) as well as computer memory in which reusable sets of routines and mathematical functions comprising programs can be stored, started, and stopped individually, and it appears only recently in human history.

The first known computer algorithm was written by Ada Lovelace in the 19th century for the Analytical Engine; it appeared in her notes to a translation of Luigi Menabrea's paper on the engine and showed how the machine could be instructed to compute Bernoulli numbers. However, this remained theoretical only: the state of engineering in the lifetimes of these two mathematicians proved insufficient to construct the Analytical Engine.

The first modern theory of software was proposed by Alan Turing in his 1936 essay On Computable Numbers, with an Application to the Entscheidungsproblem. This eventually led to the creation of the twin academic fields of computer science and software engineering, which both study software and its creation. Computer science is the more theoretical of the two (Turing's essay is an example of computer science), whereas software engineering is focused on more practical concerns.

However, prior to 1946, software as we now understand it (programs stored in the memory of stored-program digital computers) did not yet exist. The very first electronic computing devices were instead rewired in order to "reprogram" them. The ENIAC, one of the first electronic computers, was programmed largely by women who had previously been working as human computers. Engineers would give the programmers blueprints of the ENIAC wiring and expect them to figure out how to program the machine. The women who worked as programmers prepped the ENIAC for its first public reveal, wiring the patch panels together for the demonstrations. Kathleen Booth developed assembly language in 1950 to make it easier to program the computers she worked on at Birkbeck College.
Grace Hopper worked as one of the first programmers of the Harvard Mark I. She later wrote a 500-page manual for the computer. Hopper is often falsely credited with coining the terms "bug" and "debugging" after she found a moth in the Mark II that was causing a malfunction; in fact, the term was already in use when she found the moth. Hopper developed the first compiler, carrying the idea from her work on the Mark computers to her work on UNIVAC in the 1950s. Hopper also developed the programming language FLOW-MATIC to program the UNIVAC. Frances E. Holberton, also working at UNIVAC, developed a code, C-10, which let programmers use keyboard inputs, and created the Sort-Merge Generator in 1951. Adele Mildred Koss and Hopper also created a precursor to the report generator.

Early days of computer software (1948–1979)
In his manuscript "A Mathematical Theory of Communication", Claude Shannon (1916–2001) provided an outline for how binary logic could be implemented to program a computer. Subsequently, the first computer programmers used binary code to instruct computers to perform various tasks. Nevertheless, the process was very arduous. Computer programmers had to provide long strings of binary code to tell the computer what data to store. Code and data had to be loaded onto computers using various tedious mechanisms, including flicking switches or punching holes at predefined positions in cards and loading these punched cards into a computer. With such methods, if a mistake was made, the whole program might have to be loaded again from the beginning.

The very first time a stored-program computer held a piece of software in electronic memory and executed it successfully was at 11 am on 21 June 1948, at the University of Manchester, on the Manchester Baby computer. It was written by Tom Kilburn, and calculated the highest factor of the integer 2^18 = 262,144. Starting with a large trial divisor, it performed division of 262,144 by repeated subtraction, then checked whether the remainder was zero. If not, it decremented the trial divisor by one and repeated the process. Google released a tribute to the Manchester Baby, celebrating it as the "birth of software".
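Kilburn's routine is simple enough to re-create. Here is a minimal sketch in Python of the algorithm as described above, with repeated subtraction standing in for division; the variable names are ours, not those of the original program:

# Highest proper factor of n, found the way the Manchester Baby did it.
def highest_factor(n):
    divisor = n - 1                  # start with a large trial divisor
    while divisor > 1:
        remainder = n
        while remainder >= divisor:  # "division" by repeated subtraction
            remainder -= divisor
        if remainder == 0:
            return divisor           # first divisor leaving no remainder
        divisor -= 1                 # otherwise decrement and try again

print(highest_factor(2**18))         # 131072; the Baby reportedly took about 52 minutes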
FORTRAN was developed by a team led by John Backus at IBM in the 1950s. The first compiler was released in 1957. The language proved so popular for scientific and technical computing that by 1963 all major manufacturers had implemented or announced FORTRAN for their computers.

COBOL was first conceived when Mary K. Hawes convened a meeting (which included Grace Hopper) in 1959 to discuss how to create a computer language that could be shared between businesses. Hopper's innovation with COBOL was developing a new symbolic way to write programming; her programming was self-documenting. Betty Holberton helped edit the language, which was submitted to the Government Printing Office in 1960. FORMAC was developed by Jean E. Sammet in the 1960s. Her book, Programming Languages: History and Fundamentals (1969), became an influential text.

Apollo Mission
The Apollo missions to the Moon depended on software to program the computers in the landing modules. The computers were programmed in a language called "Basic" (no relation to the BASIC programming language developed at Dartmouth at about the same time). The software also had an interpreter, made up of a series of routines, and an executive (like a modern-day operating system), which specified which programs to run and when. Both were designed by Hal Laning.

Margaret Hamilton, who had previously been involved with software reliability issues while working on the US SAGE air defense system, was also part of the Apollo software team. Hamilton was in charge of the onboard flight software for the Apollo computers. Hamilton felt that software operations were not just part of the machine, but also intricately involved with the people who operated the software. Hamilton also coined the term "software engineering" while she was working at NASA.

The actual "software" for the computers in the Apollo missions was made up of wires threaded through magnetic cores. Where a wire went through a core, it represented a "1", and where a wire went around the core, it represented a "0". Each core stored 64 bits of information. Hamilton and others would create the software by punching holes in punch cards, which were then processed on a Honeywell mainframe where the software could be simulated. When the code was "solid," it was sent to be woven into the magnetic cores at Raytheon, where women known as "Little Old Ladies" worked on the wires. The program itself was "indestructible" and could even withstand lightning strikes, which happened to Apollo 12. Wiring the computers took several weeks, freezing software development during that time.

While using the simulators to test the programming, Hamilton discovered ways that code could produce dangerous errors when human mistakes were made while using it. NASA believed that the astronauts would not make mistakes, due to their training. Hamilton was not allowed to program code to prevent errors that would lead to a system crash, so she annotated the code in the program documentation. Her ideas to add error-checking code were rejected as "excessive." However, exactly what Hamilton predicted occurred on the Apollo 8 flight, when human error caused the computer to wipe out all of the navigational data.

Bundling of software with hardware and its legal issues
Later, software was sold to multiple customers by being bundled with the hardware by original equipment manufacturers (OEMs) such as Data General, Digital Equipment and IBM. When a customer bought a minicomputer, at that time the smallest computer on the market, the computer did not come with pre-installed software; it needed to be installed by engineers employed by the OEM. This bundling attracted the attention of US antitrust regulators, who sued IBM for improper "tying" in 1969, alleging that it was an antitrust violation that customers who wanted to obtain its software had to also buy or lease its hardware in order to do so. However, the case was dropped by the US Justice Department after many years of attrition, as it concluded the case was "without merit".

Data General also encountered legal problems related to bundling, although in this case it was due to a civil suit from a would-be competitor. When Data General introduced the Data General Nova, a company called Digidyne wanted to use its RDOS operating system on its own hardware clone. Data General refused to license their software and claimed their "bundling rights". The US Supreme Court set a precedent called Digidyne v. Data General in 1985 by letting a 9th Circuit appeal court decision on the case stand, and Data General was eventually forced into licensing the operating system, because it was ruled that restricting the license to only DG hardware was an illegal tying arrangement.
Even though the District Court noted that "no reasonable juror could find that within this large and dynamic market with much larger competitors" Data General "had the market power to restrain trade through an illegal tie-in arrangement", the tying of the operating system to the hardware was ruled per se illegal on appeal. In 2008, Psystar Corporation was sued by Apple Inc. for distributing unauthorized Macintosh clones with OS X preinstalled, and countersued. One of the arguments in the countersuit, citing the Data General case, was that Apple dominated the market for OS X compatible computers by illegally tying the operating system to Apple computers. District Court Judge William Alsup rejected this argument, saying, as the District Court had ruled in the Data General case over 20 years prior, that the relevant market was not simply one operating system (Mac OS) but all PC operating systems, including Mac OS, and noting that Mac OS did not enjoy a dominant position in that broader market. Alsup's judgement also noted that the surprising Data General precedent, under which tying of copyrighted products was always illegal, had since been "implicitly overruled" by the verdict in the Illinois Tool Works Inc. v. Independent Ink, Inc. case.

Packaged software (late 1960s–present)
An industry producing independently packaged software (software that was neither produced as a "one-off" for an individual customer nor "bundled" with computer hardware) started to develop in the late 1960s.

Unix (1970s–present)
Unix was an early operating system which became popular and very influential, and it still exists today. The most popular variant of Unix today is macOS (previously called OS X and Mac OS X), while Linux is closely related to Unix.

The rise of microcomputers
In January 1975, Micro Instrumentation and Telemetry Systems began selling its Altair 8800 microcomputer kit by mail order. Microsoft released its first product, Altair BASIC, later that year, and hobbyists began developing programs to run on these kits. Tiny BASIC was published as a type-in program in Dr. Dobb's Journal, and was developed collaboratively. In 1976, for instance, Peter R. Jennings created his Microchess program for MOS Technology's KIM-1 kit; since the kit did not come with a tape drive, he would send the source code in a little booklet to his mail-order customers, and they would have to type the whole program in by hand. In 1978, Kathe and Dan Spracklen released the source of their Sargon chess program in a computer magazine. Jennings later switched to selling paper tape, and eventually compact cassettes with the program on them.

It was an inconvenient and slow process to type in source code from a computer magazine, and a single mistyped (or worse, misprinted) character could render the program inoperable, yet people still did so. (Optical character recognition technology, which could theoretically have been used to scan in the listings rather than transcribe them by hand, was not yet in wide use.) Even with the spread of cartridges and cassette tapes in the 1980s for distribution of commercial software, free programs (such as simple educational programs for the purpose of teaching programming techniques) were still often printed, because it was cheaper than making and attaching cassette tapes to magazines.
However, a combination of four factors eventually brought this practice of printing complete source code listings of entire programs in computer magazines to an end:
programs started to become very large
floppy discs started to be used for distributing software, and then came down in price
ordinary people started to use computers and wanted a simple way to run a program
computer magazines started to include cassette tapes or floppy discs with free or trial versions of software on them

Very quickly, commercial software started to be pirated, and commercial software producers were very unhappy about this. Bill Gates, cofounder of Microsoft, was an early moraliser against software piracy with his famous Open Letter to Hobbyists in 1976.

1980s–present
Before the microcomputer, a successful software program typically sold up to 1,000 units at $50,000–60,000 each. By the mid-1980s, personal computer software sold thousands of copies for $50–700 each. Companies like Microsoft, MicroPro, and Lotus Development had tens of millions of dollars in annual sales. They similarly dominated the European market with localized versions of already successful products.

A pivotal moment in computing history was the publication in the 1980s of the specifications for the IBM Personal Computer by IBM employee Philip Don Estridge, which quickly led to the dominance of the PC in the worldwide desktop and later laptop markets, a dominance which continues to this day. Microsoft, by successfully negotiating with IBM to develop the first operating system for the PC (MS-DOS), profited enormously from the PC's success over the following decades, via the success of MS-DOS and its add-on-cum-successor, Microsoft Windows. Winning the negotiation was a pivotal moment in Microsoft's history.

Free and open source software

Recent developments

App stores
Applications for mobile devices (cellphones and tablets) have been termed "apps" in recent years. Apple chose to funnel iPhone and iPad app sales through their App Store, and thus both vets apps and takes a cut of every paid app sold. Apple does not allow apps which could be used to circumvent their app store (e.g. virtual machines such as the Java or Flash virtual machines). The Android platform, by contrast, has multiple app stores available for it, and users can generally select which to use (although Google Play requires a compatible or rooted device).

This move was replicated for desktop operating systems with GNOME Software (for Linux), the Mac App Store (for macOS), and the Windows Store (for Windows). All of these platforms remain, as they have always been, non-exclusive: they allow applications to be installed from outside the app store, and indeed from other app stores.

The explosive rise in popularity of apps, for the iPhone in particular but also for Android, led to a kind of "gold rush", with some hopeful programmers dedicating a significant amount of time to creating apps in the hope of striking it rich. As in real gold rushes, not all of these hopeful entrepreneurs were successful.

Formalization of software development
The development of curricula in computer science has resulted in improvements in software development.
Components of these curricula include:
Structured and Object Oriented programming
Data structures
Analysis of Algorithms
Formal languages and compiler construction
Computer Graphics Algorithms
Sorting and Searching
Numerical Methods, Optimization and Statistics
Artificial Intelligence and Machine Learning

How software has affected hardware
As more and more programs enter the realm of firmware, and the hardware itself becomes smaller, cheaper and faster as predicted by Moore's law, an increasing amount of computing functionality first carried out by software has joined the ranks of hardware, as for example with graphics processing units. (The change has sometimes gone the other way for cost or other reasons, as for example with softmodems and microcode.) Most hardware companies today have more software programmers on the payroll than hardware designers, since software tools have automated many tasks of printed circuit board (PCB) engineers.

Computer software and programming language timeline
The following tables include year-by-year development of many different aspects of computer software, including:
High level languages
Operating systems
Networking software and applications
Computer graphics hardware, algorithms and applications
Spreadsheets
Word processing
Computer aided design
[Timeline tables for 1971–1974, 1975–1978, 1979–1982, 1983–1986, 1987–1990, 1991–1994, 1995–1998, 1999–2002, 2003–2006, 2007–2010 and 2011–2014 not reproduced here.]

See also
Forensic software engineering
History of computing hardware
History of operating systems
History of software engineering
List of failed and overbudget custom software projects
Women in computing
Timeline of women in computing

References

Sources

External links

History of computer science
History of computing
Singer System 10
The Singer System 10 was a small-business computer manufactured by the Singer Corporation. The System 10, introduced in 1970, featured an early form of logical partitioning. The System 10 was a character-oriented computer, using 6-bit BCD characters and decimal arithmetic.

In the early 1960s, the Singer Sewing Machine Company had a dominant share of the world market in domestic and small industrial sewing machines. By 1962, its chain of retail stores was selling its machines, fabrics, haberdashery and patterns: everything for the housewife who made clothes and furnishings. There were 175 retail stores in the U.S., and many in Europe as well. Like many chains of small retail stores with a wide product range, stock control and stock swapping were critical to cash flow and profits. Under the leadership of its CEO, Donald P. Kircher, Singer therefore approached several computer manufacturers, inviting them to bid for the design and manufacture of computers which could connect to the several tills in each store and act as the central point for collecting real-time information on stocks and sales. IBM and NCR, then the world's largest computer companies, rejected the offer to bid, as did some others. The only company to take up the challenge was Friden, an American company based in San Leandro, California, which made desktop calculators and accounting machines based on punched paper tape. Singer accepted Friden's bid, and in 1965 bought out Friden, setting it up as Singer Business Machines. It then designed a computer, originally called the Business Data Processor (BDP) and soon renamed the System 10.

In 1969, Singer Business Machines created a subsidiary, the Advanced Systems Division, in each Western European country to launch and market the Singer System 10. Newly appointed managers and directors were trained in the technology and the marketing strategy, and the Singer System 10 was launched throughout Europe on April 2, 1970.

The design of the System 10 was revolutionary because of the special requirements of what are now called "point of sale" systems. The machine had no operating system that scheduled the use of the processor: instead, it had up to 20 'partitions', each of which had dedicated memory of up to 10 kilobytes, and a common area that all partitions could access, limited to 10K in the earlier models but expanded to up to 100K in later ones. The system was called the System 10 because it performed all of its computations in decimal, as opposed to its counterparts, which operated in binary. (It was never called anything other than "System 10", although many countries tried to rename it; in Spain, the complaint was that "System Ten" means "Hold the system!")

Each partition in turn could handle up to 10 I/O devices, depending on the partition type. For devices such as terminals, printers, card readers and punches, a Multi-Terminal IOC (input-output channel) was installed, which ran at about 20 kbit/s. The partition would respond to CPU I/O instructions to retrieve and transfer data in bursts from terminal devices to main memory; there were no small or single-character transfers of data, which reduced the demand for access to the processor memory. The processor would cycle through each partition in turn, bypassing those that had an I/O instruction in progress, and executing instructions in the others until either a new I/O was posted or 16.7 ms (20 ms in Europe) had elapsed and a successful branch instruction was encountered. Theoretically it was possible to "hog" the processor if a successful branch or I/O instruction was never encountered.
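The dispatching rule just described can be captured in a few lines. Below is a toy simulation in Python; the Partition class, its io_busy flag and its step() events are our inventions for illustration, not Singer's actual interfaces:

import time

class Partition:
    # Hypothetical stand-in for a System 10 partition.
    def __init__(self, name):
        self.name, self.io_busy = name, False
    def step(self):
        # Execute "one instruction" and report an event the dispatcher cares about.
        return "branch_taken"

SLICE = 0.0167   # 16.7 ms in the US; 0.020 s on European (50 Hz) machines

def dispatch(partitions, cycles=3):
    for _ in range(cycles):                  # the real machine cycled forever
        for p in partitions:
            if p.io_busy:                    # bypass partitions with I/O in progress
                continue
            start = time.monotonic()
            while True:
                event = p.step()
                if event == "io_started":    # new I/O posted: move to the next partition
                    break
                if event == "branch_taken" and time.monotonic() - start >= SLICE:
                    break                    # slice elapsed and a successful branch seen
                # a partition that never branches or starts I/O hogs the processor

dispatch([Partition("P%d" % i) for i in range(3)])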
There were several other types of partitions that could be installed. For the retail terminals, an MD (multi-data IOC) was used, which could control up to 10 of them. These devices buffered an entire transaction, which was sent in a burst at a speed of 1200 bits per second. As all transfers were made directly from the partition into memory, only one active transmission per terminal was possible at a time, which could cause some devices to 'time out' during transmission on a busy system. In addition, three types of serial communications controllers were available: a synchronous communications adapter, capable of emulating the IBM 2780 terminal of the day (in ASCII, however, not in EBCDIC), and an asynchronous version of the same; both were limited to line speeds of no more than 2400 bit/s, the maximum dial-up rate of the day. Another serial controller, called the Asynchronous Terminal Adapter (ATA), enabled a character-oriented terminal to be connected, at a maximum speed of 300 bit/s.

Strictly speaking, partition memory was not 10K bytes but 10,000 characters, as the System 10's memory consisted of 6-bit characters. It took 10 characters to make up one instruction, so each partition was only able to accommodate 1,000 instructions. The instruction set was extremely small, simple and powerful. The original processor, the model 20, had only 13 instructions; its successor, the model 21, had 16 instructions, and it was mostly programmed in assembly language. Although relatively simple in its syntax, the assembler had a built-in macro language, based on a string matching and parsing language, that was extremely powerful and complex. No machine language translator since has come anywhere close to this level of complexity, probably because few understood it, and the processing time for even the smallest programs could be prohibitively long.

The machine had a longer history in North America than in the UK, which started when the Singer Business Machines division was bought by ICL in 1976. At the time of the sale, ICL estimated that there were 8,000 System 10s in use around the world. ICL continued to market the system as the ICL System 10, but also attempted to wean customers off it and onto their mainstream product offering, the 2900 series, by introducing a transition machine called the ME/29. When this strategy failed, they turned to a division of Singer which made intelligent terminals to re-engineer the system, bring it up to then-modern standards and considerably reduce its size and power consumption; the model 25, the last iteration of the machine, was then created.

In the UK, the marketing strategy was that customers would be trained in the assembler and would write their own programs. This was the only serious strategic error Singer made. Some European Singer Business Machines companies ignored this strategy and set up small internal software houses to write customers' applications. Within two years, some of these software houses were independent of Singer and specialized in supporting their national System 10 customers.
In North America, several other languages had appeared, including a "table processor" approach to computing which was simple to learn, and an RPG/RPGII compiler, added later with the advent of a second-generation assembler that included a linker, a program which could bind several assembled modules together into a single executable. There were also tools called LPGC and Super Opus (from Safe Computing Ltd.), which used a data layout from the ICL file-update tool to define the layout of the data. LPGC was mostly a report tool, though it could accept data at the start or, if the machine code was patched, in flight.

Singer also created software packages for retail applications, which grew out of its installed customer base, the largest of which was at the Wanamaker's department store in Philadelphia. New installations were facilitated by only having to customize the original code rather than re-writing it from scratch each time, enabling larger installations to be brought up quickly. In England, Welwyn Department Store in Welwyn Garden City (now a branch of John Lewis & Partners) was the first to implement the System 10 as originally planned, and this became a flagship installation.

Despite its major thrust as a retail backroom machine, the System 10 was still sold as a general-purpose business computer, as it supported the common peripherals of the day, such as video terminals, punched cards, printers and, later, disk and magnetic tape storage, for sales, stock and accounting applications. It eventually faded into history with the end of the minicomputer era, when the PC became the more popular computing platform.

References

External links
Pages from the System 10 Programmer's Reference
System 10 description

Minicomputers
ICL minicomputers
Ibus
Ibus or IBus may refer to:
Intelligent Input Bus (IBus), an input method framework for Unix-like operating systems
iBus (London), an automatic vehicle location system used on London's buses
iBus (Indore), BRT buses of Indore
International Bitterness Units scale (IBUs), a measure of the bitterness of beer
iBUS (device), a bus-monitoring and management device for road transport systems

See also
Ibis
PureOS
PureOS is a Linux distribution focusing on privacy and security, using the GNOME desktop environment. It is maintained by Purism for use in the company's Librem laptop computers as well as the Librem 5 smartphone. PureOS is designed to include only free/libre and open-source software (FOSS/FLOSS), and is included in the list of Free Linux distributions published by the Free Software Foundation.

PureOS is a Debian-based Linux distribution, merging open-source software packages from the Debian "testing" main archive using a hybrid point release and rolling release model. The default web browser in PureOS is called PureBrowser, a variant of GNOME Web focusing on privacy. The default search engine in PureBrowser is DuckDuckGo.

See also
Librem (computer)
Librem 5 (phone)
Purism (company)
GNU Free System Distribution Guidelines
List of Linux distributions based on Debian testing branch

References

External links
PureOS at DistroWatch

Debian-based distributions
Mobile operating systems
ARM operating systems
GNOME Mobile
Mobile Linux
Mobile/desktop convergence
Free mobile software
Free software only Linux distributions
Linux distributions
Linux-libre
Linux-libre is a modified version of the Linux kernel that contains no binary blobs, obfuscated code, or code released under proprietary licenses. Binary blobs are software components with no available source code. In the Linux kernel, they are mostly used for proprietary firmware images. While generally redistributable, binary blobs do not give the user the freedom to audit, modify, or, consequently, redistribute their modified versions. The GNU Project keeps Linux-libre in synchronization with the mainline Linux kernel.

History
The Linux kernel started to include binary blobs in 1996. The work to clear out the binary blobs began in 2006 with gNewSense's find-firmware and gen-kernel. This work was taken further in 2007 by the BLAG Linux distribution with deblob, and Linux-libre was born. Linux-libre was first released by the Free Software Foundation Latin America (FSFLA), and was then endorsed by the Free Software Foundation (FSF) as a valuable component for totally free Linux distributions. It became a GNU package in March 2012. Alexandre Oliva is the project maintainer.

Proprietary firmware removal

Methods
The removal process is achieved by using a script called deblob-main. This script is inspired by the one used for gNewSense. Jeff Moe made subsequent modifications to meet certain requirements for its use with the BLAG Linux and GNU distribution. There is another script called deblob-check, which is used to check whether a kernel source file, a patch or a compressed source file still contains software which is suspected of being proprietary.
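As an illustration of the kind of heuristic such a checker can apply, here is a toy Python sketch that flags source files containing long inline runs of hex byte literals, one telltale shape of firmware embedded in kernel source. This is our simplification for illustration only; it is not the actual logic of deblob-check:

import re, sys

# Flag sources containing 64 or more consecutive hex byte literals,
# a common shape for firmware blobs embedded in C source files.
HEX_RUN = re.compile(r'(?:0x[0-9a-fA-F]{2}\s*,\s*){64,}')

def suspicious(path):
    with open(path, errors="ignore") as f:
        return bool(HEX_RUN.search(f.read()))

for path in sys.argv[1:]:
    if suspicious(path):
        print(path + ": possible embedded blob")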
Benefits
Aside from the primary intended effect of running a system with only free software, removing device firmware that a user is not allowed to study or modify has both positive and negative practical consequences.

Removal of device firmware can be considered an advantage for security and stability when the firmware cannot be audited for bugs, security problems, and malicious operations such as backdoors, or when the firmware cannot be fixed by the Linux kernel maintainers themselves, even if they know of problems. It is possible for the entire system to be compromised by malicious firmware, and without the ability to perform a security audit on manufacturer-provided firmware, even an innocent bug could undermine the safety of the running system.

Side effects
The downside of removing proprietary firmware from the kernel is the loss of functionality of hardware that has no free software replacement available. This affects certain sound, video, TV tuner, and network (especially wireless) cards, as well as some other devices. Where possible, free replacement firmware is provided as a substitute, such as openfwwf for the b43, carl9170 and ath9k_htc wireless card drivers.

Availability
The source code and precompiled packages of the deblobbed Linux kernel are available directly from the distributions which use the Linux-libre scripts. Freed-ora is a subproject which prepares and maintains RPM packages based on Fedora. There are also precompiled packages for Debian and derived distributions such as Ubuntu.

Distributions

Distributions in which Linux-libre is the default kernel
Dragora GNU/Linux-Libre
dyne:bolic
GNU Guix System
Parabola GNU/Linux-libre
Considered small distributions:
libreCMC
ProteanOS (If the underlying hardware is not supported, it must be ported.)
Historical:
Hyperbola GNU/Linux-libre
Musix GNU+Linux

Distributions that compile a free Linux kernel
These distros do not use the packaged Linux-libre but instead completely remove binary blobs from the mainline Linux kernel themselves. The source is then compiled, and the resulting free Linux kernel is used by default in these systems:
Debian
PureOS
Trisquel (The Linux-libre deblob script is used during its development.)
Uruk GNU/Linux
Ututo
Historical:
BLAG
gNewSense (It was based on Debian.)
Canaima (It was based on Debian.)

Linux-libre as an alternative kernel
Distributions in which Linux is the default kernel and which offer Linux-libre as an alternative kernel:
Arch Linux
Fedora
Gentoo Linux
Mandriva-derived (PCLinuxOS, Mageia, OpenMandriva Lx, ROSA Fresh)
openSUSE Tumbleweed (via the Open Build Service)
Slackware

See also
GNU Hurd, an operating system kernel developed by GNU, which follows the microkernel paradigm
Libreboot
LibrePlanet
List of computing mascots
Open-source hardware

References

External links

2008 software
Free software programmed in C
GNU Project software
Linux kernel
Operating system kernels
PROMAL
PROMAL (PROgrammer's Microapplication Language) is a structured programming language from Systems Management Associates for MS-DOS, Commodore 64, and Apple II. PROMAL features simple syntax, no line numbers, long variable names, functions and procedures with argument passing, a real number type, arrays, strings, pointers, and a built-in I/O library. Like ABC and Python, indentation is part of the language syntax. The language uses a single-pass compiler to generate byte code that is interpreted when the program is run. Since memory was very limited on these early home computers, the compiler can compile to and from disk as well as memory. The software package for the C64 includes a full-screen editor and command shell. See also [Computer Language, Mar 1986, pp. 128–134].

Reception
Ahoy! called PROMAL for the Commodore 64 "one of the best" structured languages. It concluded: "As an introduction to structured programming languages and as an alternative to BASIC, PROMAL is well worth the time needed to learn it and the $49.95 to purchase it".

Example Code
From the PROMAL program disk:

PROGRAM SIEVE
; Sieve of Eratosthenes Benchmark
; test (BYTE magazine)
; 10 iterations, 1800 element array.
INCLUDE LIBRARY
CON SIZE=1800
WORD I
WORD J
WORD PRIME
WORD K
WORD COUNT
BYTE FLAGS[SIZE]
BEGIN
  OUTPUT "10 ITERATIONS"
  FOR J= 1 TO 10
    COUNT=0
    FILL FLAGS, SIZE, TRUE
    FOR I= 0 TO SIZE
      IF FLAGS[I]
        PRIME=I+I+3
        K=I+PRIME
        WHILE K <= SIZE
          FLAGS[K]=FALSE
          K=K+PRIME
        COUNT=COUNT+1
  OUTPUT "#C#I PRIMES", COUNT
END
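For readers unfamiliar with PROMAL, here is one pass of the benchmark transliterated to Python (our transliteration; PROMAL's FOR bounds are taken as inclusive, and the block structure follows our reconstruction of the listing's indentation):

# One pass of the PROMAL sieve benchmark above, in Python.
SIZE = 1800
flags = [True] * (SIZE + 1)      # FOR I = 0 TO SIZE is inclusive
count = 0
for i in range(SIZE + 1):
    if flags[i]:
        prime = i + i + 3        # index i stands for the odd number 2*i + 3
        for k in range(i + prime, SIZE + 1, prime):
            flags[k] = False     # cross out multiples
        count += 1
print(count, "PRIMES")           # matches OUTPUT "#C#I PRIMES", COUNT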
References

External links
PROMAL for the Commodore 64
PROMAL - Trademark Details

C programming language family
AmigaOne X1000
AmigaOne X1000 is a PowerPC-based personal computer intended as a high-end platform for AmigaOS 4. It was announced by A-Eon Technology CVBA in partnership with Hyperion Entertainment and released in 2011. Its name pays homage to the Amiga 1000 released by Commodore in 1985. It is, however, not hardware compatible with the original Commodore Amiga system.

History
A-Eon Technology is a privately funded company co-founded by Trevor Dickinson. The focus of A-Eon was on the high end, and a partnership with Hyperion Entertainment was formed to allow discussion with key AmigaOS 4 developers about what such a next-generation AmigaOS 4 computer would need. With the end of the AmigaONE line from Eyetech and the lack of success porting the OS to third-party machines, A-Eon decided to continue the AmigaONE line themselves with more up-to-date technology. One important decision made during this early phase was that the AmigaOne X1000 should be a complete system built around a bespoke motherboard with a customised case and peripherals. This contrasts with the adapted reference design strategy used by Eyetech for the original AmigaOne series.

Even before the 'wish list' was completed, hardware design company Varisys had been chosen as a partner, based on their track record both with the PowerPC architecture and with parallel computing. The decision to partner with Varisys had the consequence of bringing XMOS chips to the AmigaOne X1000, as it was the connection between XMOS and the Varisys team, dating back to earlier work on the Inmos Transputer, that led to the suggestion of including an XMOS XCore chip on the X1000 motherboard. This XCore chip is referred to by A-Eon as the 'Xena' coprocessor.

The first prototype machines were manufactured during mid-2009, and Hyperion Entertainment began the process of porting AmigaOS to the X1000 in late 2009. By mid-June 2010, the X1000 was booting AmigaOS from hard disk, and the machine made its debut at the Vintage Computer Festival at Bletchley Park on the weekend of 19 and 20 June 2010.

Release
The original intention was that the machine would be available before summer 2010, but A-Eon Technology announced at the Vintage Computer Festival that the release had been delayed. By August 2011, hardware designer and manufacturer Varisys had begun the first production run of revision 2.1 boards, destined for the AmigaOne X1000 beta test team. In January 2012, A-Eon announced that Amiga Kit would start shipping the AmigaOne X1000 to customers with AmigaOS 4.1 Update 5. It was also supplied with a license for AmigaOS 4.2, which could be downloaded once released by Hyperion Entertainment.

Specifications
The specifications from A-Eon Technology's website:
Black Pearl PC case; white case also available
ATX form factor
Dual-core PWRficient PA6T-1682M 1.8 GHz Power ISA v2.04+ CPU
Co-processor: "Xena" 500 MHz XCore XS1-L2 124 SDS
ATI Radeon R700 or AMD Radeon HD 6000 series graphics card
Audio: 7.1-channel HD audio
Memory: 4× DDR2 SDRAM slots
10× USB 2.0
1× Gigabit Ethernet
2× PCIe ×16 slots (1×16 or 2×8)
2× PCIe ×1 slots
2× PCI slots
1× Xorro slot
1× Compact Flash
2× RS-232
4× SATA revision 2.0 connectors
1× PATA connector
1× JTAG connector

Xena coprocessor
The 'Xena' coprocessor is an XMOS XCore MCU integrated on the X1000 motherboard and connected to a non-standard expansion slot, called the 'Xorro' slot, named after the Zorro bus used in the Commodore Amiga.
However, it uses a PCI Express-like connector with a custom I/O layout. The XMOS processor is connected to the local CPU bus in addition to the Xorro slot, allowing timing-critical custom hardware to be controlled by the I/O-optimized XCore without loading the main processor.

Xorro
Xorro is a new slot using an industry-standard PCIe ×8 form factor to give access to Xena's I/O. This is the route to Xena's 64 I/O lines, which are dynamically configurable as input, output, or bidirectional. Xorro allows bridging Xena to external hardware for control purposes, to internal systems, or to other XCore processors. This last point is worth further exploration: XCore is a parallel processing architecture, and more XCores can be chained together if more computing power is required (as on the XK-XMP-64 development board). Reference boards have been made with up to 256 cores, offering a theoretical 102,400 MIPS.

Reception
A-Eon was able to produce a system that did not have hardware issues (unlike previous AmigaONE boards), and to ship it in volumes that its detractors had claimed would be unlikely. It also took significantly longer to arrive than announced. The Inquirer criticized its very high price, comparing the X1000 to a golden chocolate teapot with Amiga fanatics as the target market. Amiga Future magazine also mentioned the high price, along with a lack of drivers, as the main weak points of the fastest hardware available for AmigaOS 4. There was also criticism that the Xena chip was simply a 'gimmick' that delivered little to the system, despite many optimistic forecasts of what it could be used for. The first use of the new chip was for debugging output of the same computer (removing the need for a second computer). Finally, the choice of the PA6T processor has proven controversial: it is a dual-core processor, but the second core is not used by AmigaOS. With the PA6T being an end-of-line CPU, A-Eon have announced three new boards, the X3500, X5000/20 and X5000/40, to continue the AmigaOne line using Freescale PowerPC SoC chips. The contract price for developing the new motherboard is £1.2 million.

See also
AmigaOne
PWRficient
Common Firmware Environment
XMOS

References

External links
A-Eon Technology homepage

Amiga computers
PowerPC mainboards
64-bit computers
Fuse (emulator)
The Free Unix Spectrum Emulator (Fuse) is an emulator of the 1980s ZX Spectrum home computer and its various clones for Unix, Windows and macOS. Fuse is free software, released under the GNU General Public License. There are ports of Fuse to several platforms, including the GP2X, PlayStation 3, PlayStation Portable, Wii, the Nokia N810, and Android (as the Spectacol project).

The project was started in 1999 and is still under development. It has been recognised as one of the most full-featured and accurate Spectrum emulators available for Linux, and portions of its code have been ported and adapted for use in other free software projects, such as the Sprinter emulator SPRINT and the ZX81 emulator EightyOne. Development of Fuse places high importance on accurately emulating the timings of the Spectrum to recreate such effects as multicolour graphics, and this effort has in turn resulted in previously unknown hardware behaviour becoming documented for the first time.

References

External links
GP2X Port
PSP Port
Wii Port
Maemo (Nokia N810) Port

ZX Spectrum
Free emulation software
GP2X emulation software
MacOS emulation software
MorphOS emulation software
Unix emulation software
Linux emulation software
Windows emulation software
Amiga emulation software
AmigaOS 4 software
Free software programmed in C
Linux for PlayStation 2
Linux for PlayStation 2 (or PS2 Linux) is a kit released by Sony Computer Entertainment in 2002 that allows the PlayStation 2 console to be used as a personal computer. It included a Linux-based operating system, a USB keyboard and mouse, a VGA adapter, a PS2 network adapter (Ethernet only), and a 40 GB hard disk drive (HDD). An 8 MB memory card is required; it must be formatted during installation, erasing all data previously saved on it, though afterwards the remaining space may be used for savegames. It is strongly recommended that a user of Linux for PlayStation 2 have some basic knowledge of Linux before installing and using it, due to the command-line interface for installation. The official site for the project was closed at the end of October 2009, and communities like ps2dev are no longer active.

Capabilities
The Linux kit turns the PlayStation 2 into a full-fledged computer system, but it does not allow use of the DVD-ROM drive except to read PS1 and PS2 discs, due to piracy concerns on Sony's part. Although the HDD included with the Linux kit is not compatible with PlayStation 2 games, reformatting the HDD with the utility disc provided with the retail HDD enables use with PlayStation 2 games but erases PS2 Linux; there is, however, a driver that allows PS2 Linux to operate once copied onto the APA partition created by the utility disc. The network adapter included with the kit only supports Ethernet; a driver is available to enable modem support if the retail network adapter (which includes a built-in V.90 modem) is used. The kit supports display on RGB monitors (with sync-on-green) using the VGA cable provided with the Linux kit, or on television sets with the normal cable included with the PlayStation 2 unit.

The PS2 Linux distribution is based on Kondara MNU/Linux, a Japanese distribution itself based on Red Hat Linux. PS2 Linux is similar to Red Hat Linux 6, and has most of the features one might expect in a Red Hat Linux 6 system. The stock kernel is Linux 2.2.1 (although it includes the USB drivers from Linux 2.2.18 to support the keyboard and mouse), but it can be upgraded to a newer version such as 2.2.21, 2.2.26 or 2.4.17.

Open-source applications
The Linux kit's primary purpose is amateur software development, but it can be used as one would use any other computer, although the small amount of memory in the PS2 (32 MB) limits its applications. Notable open source software that compiles on the kit includes the Mozilla Suite, XChat, and Pidgin. Lightweight applications better suited to the PS2's 32 MB of RAM include xv, Dillo, Ted, and AbiWord. The default window manager is Window Maker, but it is possible to install and use Fluxbox and FVWM. The USB ports of the console can be connected to external devices, such as printers, cameras, flash drives, and CD drives.

With PS2 Linux, users can program their own games, which will work under PS2 Linux but not on an unmodified PlayStation 2. Free open source code for games is available for download from PS2 Linux support sites. There is little difference between PS2 Linux and the Linux software used on the more expensive system ("Tool", DTL-T10000) used by professional licensed PlayStation game programmers. Some amateur-created games are submitted to competitions such as the Independent Games Festival's annual competition. It is possible for an amateur to sell games or software developed using PS2 Linux, with certain restrictions detailed in the End User License Agreement.
The amateur cannot make and sell game CDs and DVDs, but can sell the game through an online download.

Distribution
The kit stopped being officially sold in the US in 2003, after the entire allocation of NTSC kits sold out. However, it is still available through second-hand markets such as eBay. Some incorrectly speculate that it was an attempt to classify the PS2 as a computer in order to achieve exemption from certain EU taxes that applied to game consoles but not computers (it was the Yabasic included with EU units that was intended to do that). Despite this, Sony lost the case in June 2006. The kit was released in the spirit of the earlier Net Yaroze. Sony ended its support of hobbyist programmers when support for Linux on the PlayStation 3 was discontinued.

Model compatibility
The original version of the PS2 Linux kit worked only on the Japanese SCPH-10000, SCPH-15000 and SCPH-18000 PlayStation 2 models. It came with a PCMCIA interface card which had a 10/100 Ethernet port, and an external IDE hard drive enclosure (as there is no room inside the unit). This kit cannot be used with any later model PS2 (which includes all non-Japanese models), because these models removed the PCMCIA port.

Later versions of the PS2 Linux kit use an interface very similar to the HDD interface/Ethernet adapter sold later for network play (the later-released network adaptor was also usable with the kit, including its built-in 56k modem). This kit locates the hard drive inside the PS2, in the MultiBay. With this kit, only the SCPH-30000 model of PlayStation 2 is officially supported. The kit does work equally well with models newer than the SCPH-30000, with the exception that the Ethernet connection tended to freeze after a short period of use. Thus the newer SCPH-50000 PlayStation 2 model will only work correctly with PS2 Linux with an updated network adapter driver, which must be transferred to the PlayStation 2 HDD by using either an older model PlayStation 2 or a Linux PC with an IDE port. Both methods involve swapping HDDs. This is due to the inability to use USB mass storage devices with the relatively old kernel (version 2.2.1) shipped with the kit.

The slim SCPH-70000 PlayStation 2 model does not work with PS2 Linux at all, due to the lack of a hard drive interface, though a very few early models of this revision had solder pads for an IDE interface on the motherboard that could be used (this required modding the console, thereby voiding its warranty). Even so, it is possible to network boot from a PXE server.

PS2 Linux installation DVDs are region encoded, as are all other PS2 game discs. A European/PAL disc will be rejected by an NTSC PlayStation 2 game system; however, this check happens only at boot time. If the user has a mod that allows them to load a PAL disc, the PS2 Linux boot loader supports both PAL and NTSC Linux (see the documentation for the button presses), so once past the "DVD not supported" message, they can boot Linux and later start the X Window System in NTSC mode.
Unofficial support
Ever since the discontinuation of the PS2 Linux kit, and for some time before that, a large but less active community has tried, and succeeded, in running Linux on the console through other methods, most notably using the KernelLoader Linux loader developed by Mega Man since 2008. Users copy the necessary kernel files onto removable storage, or onto DVDs formatted as video DVDs (a workaround for Sony's anti-piracy measures, which restrict data DVDs), and load them through the program. Through this method it has become possible to use custom Linux distros and other Unix-like operating systems compiled for the PlayStation 2, which has enabled users to run more compatible Linux kernels with smaller footprints and programs specially designed for the console. These methods often require PS2 exploits such as Free MCBoot, which allows the end user to boot from the PlayStation 2 memory card and launch custom-made homebrew applications packaged as ELF files, or other exploits such as SwapMagic; however, these tend to void the warranty, as some require opening the PlayStation 2 console itself.

See also
Linux on the PlayStation 3
Linux for gaming

References

External links
Sony's PlayStation 2 Linux Community (archived from the original)
Open source PlayStation Linux kernel loader
PlayStation 2 active Linux community

PlayStation 2
PlayStation 2 accessories
Platform-specific Linux distributions
Discontinued Linux distributions
Game console operating systems
Linux distributions
History of computer hardware in Yugoslavia
The Socialist Federal Republic of Yugoslavia (SFRY) was a socialist country that existed in the second half of the 20th century. Being socialist meant that strict technology import rules and regulations shaped the development of computer history in the country, unlike in the Western world. However, since it was a non-aligned country, it had no ties to the Soviet Bloc either. One of the major ideas contributing to the development of any technology in the SFRY was the perceived need to be independent of foreign suppliers for spare parts, fueling domestic computer development.

Development

Early computers
In the former Yugoslavia there were 30 installed electronic computers at the end of 1962, 56 in 1966, and 95 in 1968. Having received training in European computer centres (Paris 1954 and 1955, Darmstadt 1959, Vienna 1960, Cambridge 1961 and London 1964), engineers from the BK Institute in Vinča and the Mihajlo Pupin Institute in Belgrade, led by Prof. Dr. Tihomir Aleksić, started a project at the end of the 1950s to design the first "domestic" digital computer. This became the CER line (Serbian: Cifarski Elektronski Računar, Cyrillic ЦЕР - Цифарски Електронски Рачунар - Digital Electronic Computer), starting with the model CER-10 in 1960, a computer based primarily on vacuum tubes and electronic relays.

By 1964, the CER-20 computer had been designed and completed as an "electronic bookkeeping machine", as the manufacturer recognized an increasing need in the accounting market. This special-purpose trend continued with the release of the CER-22 in 1967, intended for on-line "banking" applications. There were more CER models, such as the CER-11, CER-12, and CER-200, but little information on them is currently available.

In the late 1970s, "Ei-Niš Računarski Centar" from Niš, Serbia, started assembling H6000 mainframe computers under Honeywell license, mainly for banking businesses. The computer initially had great success, which later led to limited local parts production. The company also produced models such as the H6 and H66, and survived as late as the early 2000s under the name "Bull HN". H6 models were installed in enterprises (e.g., telecoms) for business applications and ran the GCOS operating system. They were also used in education: for example, one of the Honeywell H6 machines was installed in the local electronics engineering and trade school "Nikola Tesla" in Niš and was used for training and educational purposes until the late 1980s and the dawn of personal computers.

Imports
Eventually, the socialist government of SFRY allowed foreign computers to be imported under strict conditions. This led to the increasing dominance of foreign mainframes and a continuous reduction of relative market share for domestic products. Despite this, since interest in computer technology grew overall, systems built by the Mihajlo Pupin Institute (first the CER, then the TIM lines) and Iskra Delta (e.g. the model 800, a derivative of the PDP-11/34) continued to evolve through the 1970s and even the 1980s.

Early 1980s: Home computer era
Many companies attempted to produce microcomputers similar to 1980s home computers, such as the Ivo Lola Ribar Institute's Lola 8, the M. Pupin Institute's TIM-001, EI's Pecom 32 and 64, PEL Varaždin's Galeb and Orao, the Ivel Ultra and Ivel Z3, etc. The Jožef Stefan Institute in Ljubljana made the first 16-bit microcomputer, the PMP-11, under the leadership of Marijan Miletić, a former technical director of Iskra-Delta, in 1984.
It had an 8 MHz DEC T-11 CPU, a maximum of 64 kB of RAM, a 10 MB hard disk, an 8" diskette drive and two RS-232 ports for a VT-100 video terminal and COM. Branko Jevtić modified the RT-11 operating system, so plenty of DEC-11 applications were available. Some 50 machines were made before the IBM AT became widely available. Many factors caused these computers to fail or not even attempt to enter the home computer market: they were prohibitively expensive for individuals (especially when compared to popular foreign machines such as the ZX Spectrum and Commodore 64); a lack of entertainment and other software meant they did not appeal to the majority of contemporary computer enthusiasts; and they were not available in stores. The end result was that domestic computers were predominantly used in government institutions that were prohibited from purchasing imported equipment. Those computers that could be connected to existing mainframes and used as terminals were more successful in business environments, while others were used as educational tools in schools. Given that all medium and large enterprises in the country were government-owned, this was still a significant part of the domestic market, which explains both the unnaturally high relative success of domestic business computers and why the IBM PC/AT and compatibles made few inroads into the local business market. However, while the government tried to promote domestic home computers by introducing cost and memory size limitations on imports, many people imported foreign machines nevertheless, either illegally or by dividing a single computer into pieces that separately fit within the prescribed restrictions. The lack of proper legislation and such grey-market activity only hastened the demise of domestic home computer production. By the middle of the decade the home computer market was, much like in the rest of Europe, dominated by the Commodore 64, with the ZX Spectrum as runner-up. One domestic microcomputer model managed to stand out: the Galaksija. Created by Voja Antonić, its complete do-it-yourself diagrams and instructions were published in January 1984 in Računari u vašoj kući (Computers in Your Home), a special issue of the popular science magazine "Galaksija". Although it was initially unavailable for purchase in assembled form, more than 1,000 enthusiasts built the microcomputer, mostly for games. Many units were later produced for use in some schools. Home computers were widely popular in SFRY - so much so that software (otherwise recorded on Compact Cassette) was broadcast by radio stations (e.g. Ventilator 202, Radio Študent Ljubljana etc.). Due to a lack of regulation, copyright infringement of software was common, and unlicensed copies for sale were freely advertised in popular computer magazines of the time, such as Računari, Svet kompjutera, Moj Mikro and Revija za mikroračunala. This distribution led to essentially every home computer owner having access to hundreds, if not thousands, of commercial software titles. This would later bring both benefits and drawbacks for the economy: several student developers became computer experts because cheap, unauthorized development tools were common, but when they tried to find a market for their skills they found themselves competing with the same warez domestically. Late 1980s: PC era The second half of the 1980s saw the rising popularity of IBM AT compatibles among business users and a slow movement towards 16-bit machines such as the Amiga and Atari ST in the enthusiast market, while mainstream home computing was still largely dominated by the ubiquitous C-64.
Domestic computer hardware manufacturers produced a number of different IBM AT compatibles, such as the TIM microcomputers and the Lira, and the first domestic Unix workstation (in one of its configurations, Iskra Delta's Triglav was shipped with Microsoft's Xenix), but their success was again limited to government-controlled companies that were required to purchase only domestic or legally imported technology. Timeline
1959 Branko Souček leads a team from 1955 to 1959 to create the '256 channel analyzer' digital computer at the Ruđer Bošković Institute.
1960 Mihajlo Pupin Institute releases the first digital computer in SFRY, the CER-10.
1964 Mihajlo Pupin Institute releases the CER-20, an "electronic bookkeeping machine" model.
1966 Mihajlo Pupin Institute releases a series of minicomputers, the CER-200.
1967 Mihajlo Pupin Institute releases the CER-22, a "digital computer for on-line banking applications".
1971 Mihajlo Pupin Institute releases the hybrid computer system HRS-100 for the Academy of Sciences of the USSR, Moscow. Mihajlo Pupin Institute releases the CER-12 computer system for business data processing in ERCs. Mihajlo Pupin Institute releases the CER-203.
1979 Iskradata releases the Iskradata 1680.
1980 Ivo Lola Ribar Institute releases the industrial programmable logic controller PA512.
1983 Mihajlo Pupin Institute releases a "computer system for real-time generation of images" and the model TIM-001. Iskra Delta releases the Z80A-based Iskra Delta Partner computer. Complete build-it-yourself instructions for the Galaksija (en. Galaxy) computer are published in the Računari u vašoj kući magazine.
1984 Iskra Delta releases the Iskra Delta 800 computer, derived from the Digital PDP-11/34. The Jožef Stefan Institute releases the PMP-11 16-bit microcomputer, compatible with the DEC RT-11 OS. PEL Varaždin releases the Galeb (en. seagull) computer, later to be replaced by the Orao.
1985 Mihajlo Pupin Institute releases the "microprocessor post-office computers" series TIM-100. Mihajlo Pupin Institute releases an application development microcomputer, model TIM-001. PEL Varaždin releases the Orao (en. eagle) computer for use in schools. Galaksija Plus (an enhanced version of the Galaksija) is released. Elektronska Industrija Niš releases the Pecom 32 and Pecom 64, also for use in some schools. Ivo Lola Ribar Institute announces the official release of the Lola 8 at an exhibition in 1985.
1986 Ivo Lola Ribar Institute releases the industrial programmable logic controller LPA512.
1988 Mihajlo Pupin Institute releases the 32-bit microcomputer system TIM-600. Mihajlo Pupin Institute releases the HD64180-based TIM-011 microcomputer with an integrated green monochrome monitor, for use in many Serbian secondary schools.
See also List of computer systems from SFRY History of computer hardware in Soviet Bloc countries Notes and references SFRY Socialist Federal Republic of Yugoslavia Computer companies of Yugoslavia
Operating System (OS)
976
Canaima (operating system) Canaima GNU/Linux is a free and open-source Linux distribution based on the architecture of Debian. It was created to cover the needs of the Venezuelan government in response to presidential decree 3,390, which prioritizes the use of free and open-source technologies in the public administration. On 14 March 2011, Canaima was officially established as the default operating system for the Venezuelan public administration. The operating system has gained a strong foothold and is one of the most used Linux distributions in Venezuela, largely because of its incorporation in public schools. It is used in large-scale projects such as "Canaima Educativo", which provides schoolchildren with a basic laptop computer, nicknamed Magallanes, loaded with educational software. Use of Canaima has been presented at international congresses on open standards and, despite being a young project, it has featured at the Festival Latinoamericano de Instalación de Software Libre (FLISOL). In February 2013, DistroWatch ranked it the 185th most popular Linux distribution out of 319 over the preceding 12 months. Features Some of the major features of Canaima GNU/Linux are: easy installation; no software licensing cost; and free distribution and use. The Free Software Foundation (FSF) states that Canaima GNU/Linux is not 100% free software, because some of its components are nonfree software, in particular some firmware needed for graphics cards, sound cards, printers, etc. Canaima's creators opted to include these nonfree drivers in order to support as many of the computers used by the Venezuelan government as possible, and to facilitate the migration from a closed-source operating system to an open-source, though not fully free, one. It is expected that upcoming Canaima releases will make the nonfree drivers optional during installation, allowing a 100% free software image of the distribution to be installed if the user chooses. Included Software Canaima includes applications for training, development and system configuration. The default graphical user interface (GUI) and desktop environment is GNOME; other desktop environments, such as Xfce, are maintained by the community. Productivity: the office suite LibreOffice, with a word processor, spreadsheet and presentation program; it also includes more specialized programs such as the project management software Planner and an HTML editor. Internet: includes the Cunaguaro browser, a web browser based on Iceweasel and adapted especially for Canaima 3.0 and onwards; Canaima Curiara is a light web browser based on Cunaguaro, developed with python-webkit for specific applications on the distribution. Graphics: includes GIMP, Inkscape, the desktop publishing software Scribus and the gLabels label designer. The full list of included software can be found on the project website. Releases Canaima has released stable versions periodically in recent years. Development Cycle Canaima GNU/Linux has a development model based on Debian's, with some modifications adapting it to the specific needs of Venezuela. It uses a rolling-release development model, and the development cycle is defined as follows: 1. The socio-productive community, the National Public Administration (APN) and universities provide packages, proposals or bug fixes, gathering the community requirements for the next version of the distribution.
2. Building of own packages: packages are built with the debmake and debuild tools native to Debian. 3. Alpha version: the distribution is built and subjected to tests in which the correct interaction between the Canaima repository and the distribution is evaluated; the theming of the new version is presented, performance tests are carried out on high-, medium- and low-end machines, and the performance and management of the applications most used in the National Public Administration (APN) are evaluated. 4. Evaluation: the alpha version is evaluated by a group of workers from Venezuelan public institutions related to science and technology, and by some members of the Venezuelan free software community. In parallel, the socio-productive community packages its own software to be added to the Beta 1 version. 5. Beta 1: the previously selected packages are added directly to the system or to the repository together with the project packages; at this stage of the development cycle, the distribution is published for project users. 6. Beta 2: at this stage more packages are added and errors are corrected, fixing the bugs reported during the community review of the Beta 1 version. 7. Publication: after the evaluation and correction of the errors found, the new version is published for use by the Canaima project community. Cayapa Canaima One of the community activities that has grown up around Canaima is the Cayapa; cayapa is a Venezuelan term for cooperative work done by several people to reach a common goal. At these meetings, free software developers get together to propose upgrades and fix bugs, among other things; this kind of activity is called a bug squashing party in other projects. The 6th Cayapa was held on 14-15 May 2012 in the city of Barinas; the most recent was held on 12-14 November 2014 in the city of Mérida, Venezuela. OEMs As a distribution promoted by the Venezuelan government, Canaima has been the subject of a number of strategic agreements with other countries and hardware manufacturers: Portugal: an agreement for the manufacture of 250,000 "Magalhães" computers to be distributed to public schools. Sun Microsystems: for the certification of Canaima on this manufacturer's devices. VIT, C.A.: Venezolana de Industria Tecnológica, a mixed enterprise between the Venezuelan state and Chinese investors, which established the use of Canaima on the devices it manufactures. Lenovo: for the certification of the manufacturer's devices for use with Canaima. Siragon, C.A.: a Venezuelan manufacturer of computer equipment; under the agreement, Canaima is certified for use on its devices. Use of Canaima The most successful instances of the use and adoption of Canaima: Canaima Educativo A project initiated in 2009 by the Venezuelan Ministry of Education (Ministerio del Poder Popular para la Educación) that provides primary-school students with laptop computers, known as Canaimitas, running free software on the Canaima operating system along with educational content created by the ministry. In 2017, 6,000,000 laptops were acknowledged as having been delivered. CANTV The national telephone company, CANTV, uses the operating system to a certain extent as part of its Equipped Internet plan.
Variants There are a number of Canaima editions, maintained and recognized by community activists, that are not released at the same time as the official distribution and do not take part in the project schedule. The most significant ones are: Canaima Colibri, a Venezuelan edition with the goals of being friendly, light and functional on computers with limited resources. Canaima Comunal, an edition designed to be extended by community councils ("Consejos Comunales"), a form of community government; its main aim is to deliver an operating system for the councils' everyday work, including tools for surveys, among others. Canaima Caribay, aimed at the community media that has flourished because of government support, since the Venezuelan government sees most private media outlets as heavily biased. GeoCanaima, which contains free geomatics applications and data for carrying out various exercises and interacting with desktop applications, web servers and map generators. Canaima Forense, a user-friendly environment containing a variety of useful tools for computer forensics. See also GendBuntu Huayra GNU/Linux Inspur LiMux Nova (operating system) Ubuntu Kylin VIT, C.A. References External links Decree 3.390 VIT computers preloaded with Canaima DistroWatch popularity rankings Debian-based distributions Government of Venezuela State-sponsored Linux distributions History of computing in South America Linux distributions
Operating System (OS)
977
X86-64 x86-64 (also known as x64, x86_64, AMD64, and Intel 64) is a 64-bit version of the x86 instruction set, first announced in 1999. It introduced two new modes of operation, 64-bit mode and compatibility mode, along with a new 4-level paging mode. With 64-bit mode and the new paging mode, it supports vastly larger amounts of virtual memory and physical memory than was possible on its 32-bit predecessors, allowing programs to store larger amounts of data in memory. x86-64 also expands general-purpose registers to 64-bit, and expands the number of them from 8 (some of which had limited or fixed functionality, e.g. for stack management) to 16 (fully general), and provides numerous other enhancements. Floating-point arithmetic is supported via mandatory SSE2-like instructions, and x87/MMX style registers are generally not used (but still available even in 64-bit mode); instead, a set of 16 vector registers, 128 bits each, is used. (Each register can store one or two double-precision numbers or one to four single-precision numbers, or various integer formats.) In 64-bit mode, instructions are modified to support 64-bit operands and 64-bit addressing mode. The compatibility mode defined in the architecture allows 16- and 32-bit user applications to run unmodified, coexisting with 64-bit applications if the 64-bit operating system supports them. As the full x86 16-bit and 32-bit instruction sets remain implemented in hardware without any intervening emulation, these older executables can run with little or no performance penalty, while newer or modified applications can take advantage of new features of the processor design to achieve performance improvements. Also, a processor supporting x86-64 still powers on in real mode for full backward compatibility with the 8086, as x86 processors supporting protected mode have done since the 80286. The original specification, created by AMD and released in 2000, has been implemented by AMD, Intel, and VIA. The AMD K8 microarchitecture, in the Opteron and Athlon 64 processors, was the first to implement it. This was the first significant addition to the x86 architecture designed by a company other than Intel. Intel was forced to follow suit and introduced a modified NetBurst family which was software-compatible with AMD's specification. VIA Technologies introduced x86-64 in their VIA Isaiah architecture, with the VIA Nano. The x86-64 architecture is distinct from the Intel Itanium architecture (formerly IA-64). The architectures are not compatible on the native instruction set level, and operating systems and applications compiled for one cannot be run on the other. AMD64 History AMD64 (also variously referred to by AMD in their literature and documentation as “AMD 64-bit Technology” and “AMD x86-64 Architecture”) was created as an alternative to the radically different IA-64 architecture designed by Intel and Hewlett-Packard, which was backward-incompatible with IA-32, the 32-bit version of the x86 architecture. Originally announced in 1999 with a full specification available in August 2000, the AMD64 architecture was positioned by AMD from the beginning as an evolutionary way to add 64-bit computing capabilities to the existing x86 architecture while supporting legacy 32-bit x86 code, as opposed to Intel's approach of creating an entirely new 64-bit architecture with IA-64. The first AMD64-based processor, the Opteron, was released in April 2003.
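Whether a given processor implements these 64-bit extensions can be determined at run time with the CPUID instruction: the long-mode flag is bit 29 of the EDX value returned for extended leaf 0x80000001, on both AMD and Intel parts. The sketch below is illustrative rather than taken from any vendor's documentation; it uses the __get_cpuid helper from the <cpuid.h> header shipped with GCC and Clang, and only compiles for x86 targets:

    #include <stdio.h>
    #include <cpuid.h>   /* GCC/Clang helper around the CPUID instruction */

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Extended leaf 0x80000001 holds the extended feature flags;
           __get_cpuid returns 0 if the leaf is not supported. */
        if (!__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID extended leaf 0x80000001 not supported");
            return 1;
        }

        /* Bit 29 of EDX is the long mode (LM) flag. */
        if (edx & (1u << 29))
            puts("Processor supports x86-64 long mode");
        else
            puts("Processor is 32-bit only");
        return 0;
    }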
Implementations AMD's processors implementing the AMD64 architecture include Opteron, Athlon 64, Athlon 64 X2, Athlon 64 FX, Athlon II (followed by "X2", "X3", or "X4" to indicate the number of cores, and XLT models), Turion 64, Turion 64 X2, Sempron ("Palermo" E6 stepping and all "Manila" models), Phenom (followed by "X3" or "X4" to indicate the number of cores), Phenom II (followed by "X2", "X3", "X4" or "X6" to indicate the number of cores), FX, Fusion/APU and Ryzen/Epyc. Architectural features The primary defining characteristic of AMD64 is the availability of 64-bit general-purpose processor registers (for example, RAX), 64-bit integer arithmetic and logical operations, and 64-bit virtual addresses. The designers took the opportunity to make other improvements as well. Notable changes in the 64-bit extensions include: 64-bit integer capability All general-purpose registers (GPRs) are expanded from 32 bits to 64 bits, and all arithmetic and logical operations, memory-to-register and register-to-memory operations, etc., can operate directly on 64-bit integers. Pushes and pops on the stack default to 8-byte strides, and pointers are 8 bytes wide. Additional registers In addition to increasing the size of the general-purpose registers, the number of named general-purpose registers is increased from eight (EAX, EBX, ECX, EDX, ESI, EDI, EBP, ESP) in x86 to 16 (RAX, RBX, RCX, RDX, RSI, RDI, RBP, RSP, R8, R9, R10, R11, R12, R13, R14, R15). It is therefore possible to keep more local variables in registers rather than on the stack, and to let registers hold frequently accessed constants; arguments for small and fast subroutines may also be passed in registers to a greater extent. AMD64 still has fewer registers than many RISC instruction sets (e.g. PA-RISC, Power ISA, and MIPS have 32 GPRs; Alpha, 64-bit ARM, and SPARC have 31) or VLIW-like machines such as the IA-64 (which has 128 registers). However, an AMD64 implementation may have far more internal registers than the number of architectural registers exposed by the instruction set (see register renaming). (For example, AMD Zen cores have 168 64-bit integer and 160 128-bit vector floating-point physical internal registers.) Additional XMM (SSE) registers Similarly, the number of 128-bit XMM registers (used for Streaming SIMD instructions) is also increased from 8 to 16. The traditional x87 FPU register stack is not included in the register file size extension in 64-bit mode, compared with the XMM registers used by SSE2, which did get extended. The x87 register stack is not a simple register file, although it does allow direct access to individual registers by low-cost exchange operations. Larger virtual address space The AMD64 architecture defines a 64-bit virtual address format, of which the low-order 48 bits are used in current implementations. This allows up to 256 TiB (2^48 bytes) of virtual address space. The architecture definition allows this limit to be raised in future implementations to the full 64 bits, extending the virtual address space to 16 EiB (2^64 bytes). This is compared to just 4 GiB (2^32 bytes) for the x86. This means that very large files can be operated on by mapping the entire file into the process's address space (which is often much faster than working with file read/write calls), rather than having to map regions of the file into and out of the address space. Larger physical address space The original implementation of the AMD64 architecture implemented 40-bit physical addresses and so could address up to 1 TiB (2^40 bytes) of RAM.
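The address widths a particular chip actually implements can likewise be queried through CPUID: extended leaf 0x80000008 reports the physical-address width in bits 7:0 of EAX and the virtual (linear) width in bits 15:8. A minimal sketch along the same lines as the earlier example (again using GCC/Clang's <cpuid.h>; illustrative only):

    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Extended leaf 0x80000008: address size information. */
        if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
            return 1;

        unsigned int phys_bits = eax & 0xff;         /* EAX[7:0]  */
        unsigned int virt_bits = (eax >> 8) & 0xff;  /* EAX[15:8] */

        /* A first-generation AMD64 part reports 40/48 here; later
           parts report 48/48 or, with 5-level paging, up to 52/57. */
        printf("physical address bits: %u\n", phys_bits);
        printf("virtual address bits:  %u\n", virt_bits);
        return 0;
    }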
Current implementations of the AMD64 architecture (starting with the AMD 10h microarchitecture) extend this to 48-bit physical addresses and therefore can address up to 256 TiB (2^48 bytes) of RAM. The architecture permits extending this to 52 bits in the future (limited by the page table entry format); this would allow addressing of up to 4 PiB of RAM. For comparison, 32-bit x86 processors are limited to 64 GiB of RAM in Physical Address Extension (PAE) mode, or 4 GiB of RAM without PAE mode. Larger physical address space in legacy mode When operating in legacy mode the AMD64 architecture supports Physical Address Extension (PAE) mode, as do most current x86 processors, but AMD64 extends PAE from 36 bits to an architectural limit of 52 bits of physical address. Any implementation, therefore, allows the same physical address limit as under long mode. Instruction pointer relative data access Instructions can now reference data relative to the instruction pointer (RIP register). This makes position-independent code, as is often used in shared libraries and code loaded at run time, more efficient. SSE instructions The original AMD64 architecture adopted Intel's SSE and SSE2 as core instructions. These instruction sets provide a vector supplement to the scalar x87 FPU, for the single-precision and double-precision data types. SSE2 also offers integer vector operations, for data types ranging from 8-bit to 64-bit precision. This puts the vector capabilities of the architecture on par with those of the most advanced x86 processors of its time. These instructions can also be used in 32-bit mode. The proliferation of 64-bit processors has made these vector capabilities ubiquitous in home computers, allowing the improvement of the standards of 32-bit applications. The 32-bit edition of Windows 8, for example, requires the presence of SSE2 instructions. SSE3 instructions and later Streaming SIMD Extensions instruction sets are not standard features of the architecture. No-Execute bit The No-Execute bit or NX bit (bit 63 of the page table entry) allows the operating system to specify which pages of virtual address space can contain executable code and which cannot. An attempt to execute code from a page tagged "no execute" will result in a memory access violation, similar to an attempt to write to a read-only page. This should make it more difficult for malicious code to take control of the system via "buffer overrun" or "unchecked buffer" attacks. A similar feature has been available on x86 processors since the 80286 as an attribute of segment descriptors; however, this works only on an entire segment at a time. Segmented addressing has long been considered an obsolete mode of operation, and all current PC operating systems in effect bypass it, setting all segments to a base address of zero and (in their 32-bit implementation) a size of 4 GiB. AMD was the first x86-family vendor to implement no-execute in linear addressing mode. The feature is also available in legacy mode on AMD64 processors, and recent Intel x86 processors, when PAE is used.
These include segmented addressing (although the FS and GS segments are retained in vestigial form for use as extra-base pointers to operating system structures), the task state switch mechanism, and virtual 8086 mode. These features remain fully implemented in "legacy mode", allowing these processors to run 32-bit and 16-bit operating systems without modifications. Some instructions that proved to be rarely useful are not supported in 64-bit mode, including saving/restoring of segment registers on the stack, saving/restoring of all registers (PUSHA/POPA), decimal arithmetic, BOUND and INTO instructions, and "far" jumps and calls with immediate operands. Virtual address space details Canonical form addresses Although virtual addresses are 64 bits wide in 64-bit mode, current implementations (and all chips that are known to be in the planning stages) do not allow the entire virtual address space of 2^64 bytes (16 EiB) to be used. This would be approximately four billion times the size of the virtual address space on 32-bit machines. Most operating systems and applications will not need such a large address space for the foreseeable future, so implementing such wide virtual addresses would simply increase the complexity and cost of address translation with no real benefit. AMD, therefore, decided that, in the first implementations of the architecture, only the least significant 48 bits of a virtual address would actually be used in address translation (page table lookup). In addition, the AMD specification requires that the most significant 16 bits of any virtual address, bits 48 through 63, must be copies of bit 47 (in a manner akin to sign extension). If this requirement is not met, the processor will raise an exception. Addresses complying with this rule are referred to as "canonical form." Canonical form addresses run from 0 through 00007FFF'FFFFFFFF, and from FFFF8000'00000000 through FFFFFFFF'FFFFFFFF, for a total of 256 TiB of usable virtual address space. This is still 65,536 times larger than the virtual 4 GiB address space of 32-bit machines. This feature eases later scalability to true 64-bit addressing. Many operating systems (including, but not limited to, the Windows NT family) take the higher-addressed half of the address space (named kernel space) for themselves and leave the lower-addressed half (user space) for application code, user mode stacks, heaps, and other data regions. The "canonical address" design ensures that every AMD64 compliant implementation has, in effect, two memory halves: the lower half starts at 00000000'00000000 and "grows upwards" as more virtual address bits become available, while the higher half is "docked" to the top of the address space and grows downwards. Also, enforcing the "canonical form" of addresses by checking the unused address bits prevents their use by the operating system in tagged pointers as flags, privilege markers, etc., as such use could become problematic when the architecture is extended to implement more virtual address bits. The first versions of Windows for x64 did not even use the full 256 TiB; they were restricted to just 8 TiB of user space and 8 TiB of kernel space. Windows did not support the entire 48-bit address space until Windows 8.1, which was released in October 2013. Page table structure The 64-bit addressing mode ("long mode") is a superset of Physical Address Extensions (PAE); because of this, page sizes may be 4 KiB (2^12 bytes) or 2 MiB (2^21 bytes). Long mode also supports page sizes of 1 GiB (2^30 bytes).
Rather than the three-level page table system used by systems in PAE mode, systems running in long mode use four levels of page table: PAE's Page-Directory Pointer Table is extended from four entries to 512, and an additional Page-Map Level 4 (PML4) Table is added, containing 512 entries in 48-bit implementations. A full mapping hierarchy of 4 KiB pages for the whole 48-bit space would take a bit more than 512 GiB of memory (about 0.195% of the 256 TiB virtual space). Intel has implemented a scheme with a 5-level page table, which allows Intel 64 processors to support a 57-bit virtual address space. Further extensions may allow a full 64-bit virtual address space and physical memory by expanding the page table entry size to 128 bits, and reduce page walks in the 5-level hierarchy by using a larger 64 KiB page allocation size that still supports 4 KiB page operations for backward compatibility. Operating system limits The operating system can also limit the virtual address space. Details, where applicable, are given in the "Operating system compatibility and characteristics" section. Physical address space details Current AMD64 processors support a physical address space of up to 2^48 bytes of RAM, or 256 TiB. However, there were no known x86-64 motherboards that supported 256 TiB of RAM. The operating system may place additional limits on the amount of RAM that is usable or supported. Details on this point are given in the "Operating system compatibility and characteristics" section of this article. Operating modes The architecture has two primary modes of operation: long mode and legacy mode. Long mode Long mode is the architecture's intended primary mode of operation; it is a combination of the processor's native 64-bit mode and a combined 32-bit and 16-bit compatibility mode. It is used by 64-bit operating systems. Under a 64-bit operating system, 64-bit programs run under 64-bit mode, and 32-bit and 16-bit protected mode applications (that do not need to use either real mode or virtual 8086 mode in order to execute at any time) run under compatibility mode. Real-mode programs and programs that use virtual 8086 mode at any time cannot be run in long mode unless those modes are emulated in software. However, such programs may be started from an operating system running in long mode on processors supporting VT-x or AMD-V by creating a virtual processor running in the desired mode. Since the basic instruction set is the same, there is almost no performance penalty for executing protected mode x86 code. This is unlike Intel's IA-64, where differences in the underlying instruction set mean that running 32-bit code must be done either in emulation of x86 (making the process slower) or with a dedicated x86 coprocessor. However, on the x86-64 platform, many x86 applications could benefit from a 64-bit recompile, due to the additional registers in 64-bit code and guaranteed SSE2-based FPU support, which a compiler can use for optimization. However, applications that regularly handle integers wider than 32 bits, such as cryptographic algorithms, will need a rewrite of the code handling the huge integers in order to take advantage of the 64-bit registers. Legacy mode Legacy mode is the mode that the processor is in when it is not in long mode. In this mode, the processor acts like an older x86 processor, and only 16-bit and 32-bit code can be executed. Legacy mode allows for a maximum of 32-bit virtual addressing, which limits the virtual address space to 4 GiB.
64-bit programs cannot be run from legacy mode. Protected mode Protected mode is made into a submode of legacy mode. It is the submode that 32-bit operating systems and 16-bit protected mode operating systems operate in when running on an x86-64 CPU. Real mode Real mode is the initial mode of operation when the processor is initialized, and is a submode of legacy mode. It is backwards compatible with the original Intel 8086 and Intel 8088 processors. Real mode is primarily used today by operating system bootloaders, which are required by the architecture to configure virtual memory details before transitioning to higher modes. This mode is also used by any operating system that needs to communicate with the system firmware with a traditional BIOS-style interface. Intel 64 Intel 64 is Intel's implementation of x86-64, used and implemented in various processors made by Intel. History Historically, AMD has developed and produced processors with instruction sets patterned after Intel's original designs, but with x86-64, roles were reversed: Intel found itself in the position of adopting the ISA that AMD created as an extension to Intel's own x86 processor line. Intel's project was originally codenamed Yamhill (after the Yamhill River in Oregon's Willamette Valley). After several years of denying its existence, Intel announced at the February 2004 IDF that the project was indeed underway. Intel's chief executive at the time, Craig Barrett, admitted that this was one of their worst-kept secrets. Intel's name for this instruction set has changed several times. The name used at the IDF was CT (presumably for Clackamas Technology, another codename from an Oregon river); within weeks they began referring to it as IA-32e (for IA-32 extensions) and in March 2004 unveiled the "official" name EM64T (Extended Memory 64 Technology). In late 2006 Intel began instead using the name Intel 64 for its implementation, paralleling AMD's use of the name AMD64. The first processor to implement Intel 64 was the multi-socket processor Xeon code-named Nocona in June 2004. In contrast, the initial Prescott chips (February 2004) did not enable this feature. Intel subsequently began selling Intel 64-enabled Pentium 4s using the E0 revision of the Prescott core, sold on the OEM market as the Pentium 4, model F. The E0 revision also adds eXecute Disable (XD) (Intel's name for the NX bit) to Intel 64, and was included in the then-current Xeon code-named Irwindale. Intel's official launch of Intel 64 (under the name EM64T at that time) in mainstream desktop processors was the N0 stepping Prescott-2M. The first Intel mobile processor implementing Intel 64 is the Merom version of the Core 2 processor, which was released on July 27, 2006. None of Intel's earlier notebook CPUs (Core Duo, Pentium M, Celeron M, Mobile Pentium 4) implement Intel 64. Implementations Intel's processors implementing the Intel 64 architecture include the Pentium 4 F-series/5x1 series, 506, and 516, Celeron D models 3x1, 3x6, 355, 347, 352, 360, and 365 and all later Celerons, all models of Xeon since "Nocona", all models of Pentium Dual-Core processors since "Merom-2M", the Atom 230, 330, D410, D425, D510, D525, N450, N455, N470, N475, N550, N570, N2600 and N2800, all versions of the Pentium D, Pentium Extreme Edition, Core 2, Core i9, Core i7, Core i5, and Core i3 processors, and the Xeon Phi 7200 series processors.
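With three vendors shipping the same instruction set, low-level software that must account for the implementation differences described later in this article usually begins by identifying the vendor, which CPUID leaf 0 reports as a 12-character string assembled from EBX, EDX and ECX in that order: "AuthenticAMD", "GenuineIntel" or, for VIA, "CentaurHauls". A minimal sketch (illustrative, using the same <cpuid.h> helper as the earlier examples):

    #include <stdio.h>
    #include <string.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13];

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;

        /* The vendor string is laid out in EBX, EDX, ECX order. */
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);
        vendor[12] = '\0';

        printf("CPU vendor: %s\n", vendor);  /* e.g. "AuthenticAMD" */
        return 0;
    }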
VIA's x86-64 implementation VIA Technologies introduced their first implementation of the x86-64 architecture in 2008 after five years of development by its CPU division, Centaur Technology. Codenamed "Isaiah", the 64-bit architecture was unveiled on January 24, 2008, and launched on May 29 under the VIA Nano brand name. The processor supports a number of VIA-specific x86 extensions designed to boost efficiency in low-power appliances. It is expected that the Isaiah architecture will be twice as fast in integer performance and four times as fast in floating-point performance as the previous-generation VIA Esther at an equivalent clock speed. Power consumption is also expected to be on par with the previous-generation VIA CPUs, with thermal design power ranging from 5 W to 25 W. Being a completely new design, the Isaiah architecture was built with support for features like the x86-64 instruction set and x86 virtualization which were unavailable on its predecessors, the VIA C7 line, while retaining their encryption extensions. Microarchitecture levels In 2020, through a cross-vendor collaboration, a few microarchitecture levels were defined: x86-64-v2, x86-64-v3 and x86-64-v4. These levels define specific features that can be targeted by programmers to provide compile-time optimizations. Roughly, the baseline level corresponds to the original AMD64 feature set (CMOV, MMX, SSE, SSE2, etc.); x86-64-v2 adds CMPXCHG16B, LAHF/SAHF, POPCNT, SSE3, SSSE3, SSE4.1 and SSE4.2; x86-64-v3 adds AVX, AVX2, BMI1, BMI2, F16C, FMA, LZCNT, MOVBE and XSAVE; and x86-64-v4 adds AVX-512F, AVX-512BW, AVX-512CD, AVX-512DQ and AVX-512VL. All levels include the features found in the previous levels. Instruction set extensions not concerned with general-purpose computation, including AES-NI and RDRAND, are excluded from the level requirements. Differences between AMD64 and Intel 64 Although nearly identical, there are some differences between the two instruction sets in the semantics of a few seldom-used machine instructions (or situations), which are mainly used for system programming. Compilers generally produce executables (i.e. machine code) that avoid any differences, at least for ordinary application programs. This is therefore of interest mainly to developers of compilers, operating systems and similar, which must deal with individual and special system instructions. Recent implementations Intel 64's BSF and BSR instructions act differently than AMD64's when the source is zero and the operand size is 32 bits. The processor sets the zero flag and leaves the upper 32 bits of the destination undefined. Intel documents the destination register as having an undefined value in this case, but in practice silicon implements the same behaviour as AMD (destination unmodified); the possibility that the upper 32 bits are not preserved has been ruled out only for Core 2 and Skylake, not for all Intel microarchitectures such as the 64-bit Pentium 4 or the low-power Atom. AMD64 requires a different microcode update format and control MSRs (model-specific registers), while Intel 64 implements microcode update unchanged from their 32-bit-only processors. Intel 64 lacks some MSRs that are considered architectural in AMD64. These include SYSCFG, TOP_MEM, and TOP_MEM2. Intel 64 allows SYSCALL/SYSRET only in 64-bit mode (not in compatibility mode), and allows SYSENTER/SYSEXIT in both modes. AMD64 lacks SYSENTER/SYSEXIT in both sub-modes of long mode. In 64-bit mode, near branches with the 66H (operand size override) prefix behave differently. Intel 64 ignores this prefix: the instruction has a 32-bit sign-extended offset, and the instruction pointer is not truncated. AMD64 uses a 16-bit offset field in the instruction, and clears the top 48 bits of the instruction pointer.
AMD processors raise a floating-point Invalid Exception when performing an FLD or FSTP of an 80-bit signalling NaN, while Intel processors do not. Intel 64 lacks the ability to save and restore a reduced (and thus faster) version of the floating-point state (involving the FXSAVE and FXRSTOR instructions). AMD processors ever since Opteron Rev. E and Athlon 64 Rev. D have reintroduced limited support for segmentation, via the Long Mode Segment Limit Enable (LMSLE) bit, to ease virtualization of 64-bit guests. When returning to a non-canonical address using SYSRET, AMD64 processors execute the general protection fault handler in privilege level 3, while on Intel 64 processors it is executed in privilege level 0. Older implementations Early AMD64 processors (typically on Socket 939 and 940) lacked the CMPXCHG16B instruction, which is an extension of the CMPXCHG8B instruction present on most post-80486 processors. Similar to CMPXCHG8B, CMPXCHG16B allows for atomic operations on octa-words (128-bit values). This is useful for parallel algorithms that use compare and swap on data larger than the size of a pointer, common in lock-free and wait-free algorithms. Without CMPXCHG16B one must use workarounds, such as a critical section or alternative lock-free approaches. Its absence also prevents 64-bit Windows prior to Windows 8.1 from having a user-mode address space larger than 8 TiB. The 64-bit version of Windows 8.1 requires the instruction. Early AMD64 and Intel 64 CPUs lacked LAHF and SAHF instructions in 64-bit mode. AMD introduced these instructions (also in 64-bit mode) with their Athlon 64, Opteron and Turion 64 revision D processors in March 2005, while Intel introduced the instructions with the Pentium 4 G1 stepping in December 2005. The 64-bit version of Windows 8.1 requires this feature. Early Intel CPUs with Intel 64 also lack the NX bit of the AMD64 architecture. This feature is required by all versions of Windows 8.x. Early Intel 64 implementations (Prescott and Cedar Mill) only allowed access to 64 GiB of physical memory, while original AMD64 implementations allowed access to 1 TiB of physical memory. Recent AMD64 implementations provide 256 TiB of physical address space (and AMD plans an expansion to 4 PiB), while some Intel 64 implementations could address up to 64 TiB. Physical memory capacities of this size are appropriate for large-scale applications (such as large databases) and high-performance computing (centrally oriented applications and scientific computing). Adoption In supercomputers tracked by TOP500, the appearance of 64-bit extensions for the x86 architecture enabled 64-bit x86 processors by AMD and Intel to replace most RISC processor architectures previously used in such systems (including PA-RISC, SPARC, Alpha and others), as well as 32-bit x86, even though Intel itself initially tried unsuccessfully to replace x86 with a new incompatible 64-bit architecture in the Itanium processor. As of 2020, a Fujitsu A64FX-based supercomputer called Fugaku is number one. The first ARM-based supercomputer appeared on the list in 2018 and, in recent years, non-CPU architecture co-processors (GPGPU) have also played a big role in performance. Intel's Xeon Phi "Knights Corner" coprocessors, which implement a subset of x86-64 with some vector extensions, are also used, along with x86-64 processors, in the Tianhe-2 supercomputer. Operating system compatibility and characteristics The following operating systems and releases support the x86-64 architecture in long mode.
BSD DragonFly BSD Preliminary infrastructure work was started in February 2004 for an x86-64 port. This development later stalled. Development started again during July 2007 and continued during Google Summer of Code 2008 and SoC 2009. The first official release to contain x86-64 support was version 2.4. FreeBSD FreeBSD first added x86-64 support under the name "amd64" as an experimental architecture in 5.1-RELEASE in June 2003. It was included as a standard distribution architecture as of 5.2-RELEASE in January 2004. Since then, FreeBSD has designated it as a Tier 1 platform. The 6.0-RELEASE version cleaned up some quirks with running x86 executables under amd64, and most drivers work just as they do on the x86 architecture. Work is currently being done to integrate more fully the x86 application binary interface (ABI), in the same manner as the Linux 32-bit ABI compatibility currently works. NetBSD x86-64 architecture support was first committed to the NetBSD source tree on June 19, 2001. As of NetBSD 2.0, released on December 9, 2004, NetBSD/amd64 is a fully integrated and supported port. 32-bit code is still supported in 64-bit mode, with a netbsd-32 kernel compatibility layer for 32-bit syscalls. The NX bit is used to provide non-executable stack and heap with per-page granularity (segment granularity being used on 32-bit x86). OpenBSD OpenBSD has supported AMD64 since OpenBSD 3.5, released on May 1, 2004. Complete in-tree implementation of AMD64 support was achieved prior to the hardware's initial release because AMD had loaned several machines for the project's hackathon that year. OpenBSD developers have taken to the platform because of its support for the NX bit, which allowed for an easy implementation of the W^X feature. The code for the AMD64 port of OpenBSD also runs on Intel 64 processors, which implement a clone of the AMD64 extensions, but since Intel left out the page table NX bit in early Intel 64 processors, there is no W^X capability on those Intel CPUs; later Intel 64 processors added the NX bit under the name "XD bit". Symmetric multiprocessing (SMP) works on OpenBSD's AMD64 port, starting with release 3.6 on November 1, 2004. DOS It is possible to enter long mode under DOS without a DOS extender, but the user must return to real mode in order to call BIOS or DOS interrupts. It may also be possible to enter long mode with a DOS extender similar to DOS/4GW, though a more complex one, since x86-64 lacks virtual 8086 mode. DOS itself is not aware of that, and no benefits should be expected unless running DOS in an emulation with an adequate virtualization driver backend, for example for the mass storage interface. Linux Linux was the first operating system kernel to run the x86-64 architecture in long mode, starting with the 2.4 version in 2001 (preceding the hardware's availability). Linux also provides backward compatibility for running 32-bit executables. This permits programs to be recompiled for long mode gradually while the use of existing 32-bit programs is retained. Several Linux distributions currently ship with x86-64-native kernels and userlands. Some, such as Arch Linux, SUSE, Mandriva, and Debian, allow users to install a set of 32-bit components and libraries when installing off a 64-bit DVD, thus allowing most existing 32-bit applications to run alongside the 64-bit OS. Other distributions, such as Fedora, Slackware and Ubuntu, are available in one version compiled for a 32-bit architecture and another compiled for a 64-bit architecture.
Fedora and Red Hat Enterprise Linux allow concurrent installation of all userland components in both 32- and 64-bit versions on a 64-bit system. The x32 ABI (application binary interface), introduced in Linux 3.4, allows programs compiled for it to run in the 64-bit mode of x86-64 while only using 32-bit pointers and data fields. Though this limits the program to a virtual address space of 4 GiB, it also decreases the memory footprint of the program and in some cases can allow it to run faster. 64-bit Linux allows up to 128 TiB of virtual address space for individual processes, and can address approximately 64 TiB of physical memory, subject to processor and system limitations. macOS Mac OS X 10.4.7 and higher versions of Mac OS X 10.4 run 64-bit command-line tools using the POSIX and math libraries on 64-bit Intel-based machines, just as all versions of Mac OS X 10.4 and 10.5 run them on 64-bit PowerPC machines. No other libraries or frameworks work with 64-bit applications in Mac OS X 10.4. The kernel, and all kernel extensions, are 32-bit only. Mac OS X 10.5 supports 64-bit GUI applications using Cocoa, Quartz, OpenGL, and X11 on 64-bit Intel-based machines, as well as on 64-bit PowerPC machines. All non-GUI libraries and frameworks also support 64-bit applications on those platforms. The kernel, and all kernel extensions, are 32-bit only. Mac OS X 10.6 is the first version of macOS that supports a 64-bit kernel. However, not all 64-bit computers can run the 64-bit kernel, and not all 64-bit computers that can run the 64-bit kernel will do so by default. The 64-bit kernel, like the 32-bit kernel, supports 32-bit applications; both kernels also support 64-bit applications. 32-bit applications have a virtual address space limit of 4 GiB under either kernel. The 64-bit kernel does not support 32-bit kernel extensions, and the 32-bit kernel does not support 64-bit kernel extensions. OS X 10.8 includes only the 64-bit kernel, but continues to support 32-bit applications; it does not support 32-bit kernel extensions, however. macOS 10.15 includes only the 64-bit kernel and no longer supports 32-bit applications. This removal of support has presented a problem for WineHQ (and the commercial version CrossOver), as it still needs to be able to run 32-bit Windows applications. The solution, termed wine32on64, was to add thunks that bring the CPU in and out of 32-bit compatibility mode in the nominally 64-bit application. macOS uses the universal binary format to package 32- and 64-bit versions of application and library code into a single file; the most appropriate version is automatically selected at load time. In Mac OS X 10.6, the universal binary format is also used for the kernel and for those kernel extensions that support both 32-bit and 64-bit kernels. Solaris Solaris 10 and later releases support the x86-64 architecture. For Solaris 10, just as with the SPARC architecture, there is only one operating system image, which contains a 32-bit kernel and a 64-bit kernel; this is labeled as the "x64/x86" DVD-ROM image. The default behavior is to boot a 64-bit kernel, allowing both 64-bit and existing or new 32-bit executables to be run. A 32-bit kernel can also be manually selected, in which case only 32-bit executables will run. The isainfo command can be used to determine if a system is running a 64-bit kernel. For Solaris 11, only the 64-bit kernel is provided. However, the 64-bit kernel supports both 32- and 64-bit executables, libraries, and system calls.
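The 64-bit Unix-like systems above generally use the LP64 data model, in which long and pointers are 64 bits wide, whereas 64-bit Windows, described next, uses LLP64, in which long remains 32 bits and only long long and pointers grow to 64 bits. A short portable check of which model is in effect (illustrative; its output depends on the platform it is compiled for):

    #include <stdio.h>

    int main(void)
    {
        /* LP64  (64-bit Linux, BSD, Solaris, macOS): 4 / 8 / 8 / 8
           LLP64 (64-bit Windows):                    4 / 4 / 8 / 8 */
        printf("int:       %zu bytes\n", sizeof(int));
        printf("long:      %zu bytes\n", sizeof(long));
        printf("long long: %zu bytes\n", sizeof(long long));
        printf("pointer:   %zu bytes\n", sizeof(void *));
        return 0;
    }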
Windows x64 editions of Microsoft Windows client and server—Windows XP Professional x64 Edition and Windows Server 2003 x64 Edition—were released in March 2005. Internally they are actually the same build (5.2.3790.1830 SP1), as they share the same source base and operating system binaries, so even system updates are released in unified packages, much in the manner of Windows 2000 Professional and Server editions for x86. Windows Vista, which also has many different editions, was released in January 2007. Windows 7 was released in July 2009. Windows Server 2008 R2 was sold in only x64 and Itanium editions; later versions of Windows Server only offer an x64 edition. Versions of Windows for x64 prior to Windows 8.1 and Windows Server 2012 R2 offer the following: 8 TiB of virtual address space per process, accessible from both user mode and kernel mode, referred to as the user mode address space. An x64 program can use all of this, subject to backing store limits on the system, and provided it is linked with the "large address aware" option. This is a 4096-fold increase over the default 2 GiB user-mode virtual address space offered by 32-bit Windows. 8 TiB of kernel mode virtual address space for the operating system. As with the user mode address space, this is a 4096-fold increase over 32-bit Windows versions. The increased space primarily benefits the file system cache and kernel mode "heaps" (non-paged pool and paged pool). Windows only uses a total of 16 TiB out of the 256 TiB implemented by the processors because early AMD64 processors lacked a CMPXCHG16B instruction. Under Windows 8.1 and Windows Server 2012 R2, both user mode and kernel mode virtual address spaces have been extended to 128 TiB. These versions of Windows will not install on processors that lack the CMPXCHG16B instruction. The following additional characteristics apply to all x64 versions of Windows: Ability to run existing 32-bit applications (.exe programs) and dynamic link libraries (.dlls) using WoW64 if WoW64 is supported on that version. Furthermore, a 32-bit program, if it was linked with the "large address aware" option, can use up to 4 GiB of virtual address space in 64-bit Windows, instead of the default 2 GiB (optional 3 GiB with /3GB boot option and "large address aware" link option) offered by 32-bit Windows. Unlike the use of the /3GB boot option on x86, this does not reduce the kernel mode virtual address space available to the operating system. 32-bit applications can, therefore, benefit from running on x64 Windows even if they are not recompiled for x86-64. Both 32- and 64-bit applications, if not linked with "large address aware," are limited to 2 GiB of virtual address space. Ability to use up to 128 GiB (Windows XP/Vista), 192 GiB (Windows 7), 512 GiB (Windows 8), 1 TiB (Windows Server 2003), 2 TiB (Windows Server 2008/Windows 10), 4 TiB (Windows Server 2012), or 24 TiB (Windows Server 2016/2019) of physical random access memory (RAM). LLP64 data model: "int" and "long" types are 32 bits wide, long long is 64 bits, while pointers and types derived from pointers are 64 bits wide. Kernel mode device drivers must be 64-bit versions; there is no way to run 32-bit kernel mode executables within the 64-bit operating system. User mode device drivers can be either 32-bit or 64-bit. 16-bit Windows (Win16) and DOS applications will not run on x86-64 versions of Windows due to the removal of the virtual DOS machine subsystem (NTVDM), which relied upon the ability to use virtual 8086 mode.
Virtual 8086 mode cannot be entered while running in long mode. Full implementation of the NX (No Execute) page protection feature. This is also implemented on recent 32-bit versions of Windows when they are started in PAE mode. Instead of the FS segment descriptor on x86 versions of the Windows NT family, the GS segment descriptor is used to point to two operating system defined structures: the Thread Information Block (NT_TIB) in user mode and the Processor Control Region (KPCR) in kernel mode. Thus, for example, in user mode GS:0 is the address of the first member of the Thread Information Block. Maintaining this convention made the x86-64 port easier, but required AMD to retain the function of the FS and GS segments in long mode – even though segmented addressing per se is not really used by any modern operating system. Early reports claimed that the operating system scheduler would not save and restore the x87 FPU machine state across thread context switches. Observed behavior shows that this is not the case: the x87 state is saved and restored, except for kernel mode-only threads (a limitation that exists in the 32-bit version as well). The most recent documentation available from Microsoft states that the x87/MMX/3DNow! instructions may be used in long mode, but that they are deprecated and may cause compatibility problems in the future. Some components like Jet Database Engine and Data Access Objects will not be ported to 64-bit architectures such as x86-64 and IA-64. Microsoft Visual Studio can compile native applications to target either the x86-64 architecture, which can run only on 64-bit Microsoft Windows, or the IA-32 architecture, which can run as a 32-bit application on 32-bit Microsoft Windows or 64-bit Microsoft Windows in WoW64 emulation mode. Managed applications can be compiled either in IA-32, x86-64 or AnyCPU modes. Software created in the first two modes behaves like its IA-32 or x86-64 native code counterparts respectively; when using the AnyCPU mode, however, applications in 32-bit versions of Microsoft Windows run as 32-bit applications, while they run as 64-bit applications in 64-bit editions of Microsoft Windows. Video game consoles Both PlayStation 4 and Xbox One and their variants incorporate AMD x86-64 processors, based on the Jaguar microarchitecture. Firmware and games are written in x86-64 code; no legacy x86 code is involved. Their next generations, the PlayStation 5 and the Xbox Series X and Series S respectively, also incorporate AMD x86-64 processors, based on the Zen 2 microarchitecture. Industry naming conventions Since AMD64 and Intel 64 are substantially similar, many software and hardware products use one vendor-neutral term to indicate their compatibility with both implementations. AMD's original designation for this processor architecture, "x86-64", is still sometimes used for this purpose, as is the variant "x86_64". Other companies, such as Microsoft and Sun Microsystems/Oracle Corporation, use the contraction "x64" in marketing material. The term IA-64 refers to the Itanium processor, and should not be confused with x86-64, as it is a completely different instruction set. Many operating systems and products, especially those that introduced x86-64 support prior to Intel's entry into the market, use the term "AMD64" or "amd64" to refer to both AMD64 and Intel 64. amd64 Most BSD systems such as FreeBSD, MidnightBSD, NetBSD and OpenBSD refer to both AMD64 and Intel 64 under the architecture name "amd64".
Some Linux distributions, such as Debian, Ubuntu and Gentoo Linux, refer to both AMD64 and Intel 64 under the architecture name "amd64". Microsoft Windows's x64 versions use the AMD64 moniker internally to designate various components which use or are compatible with this architecture. For example, the environment variable PROCESSOR_ARCHITECTURE is assigned the value "AMD64" as opposed to "x86" in 32-bit versions, and the system directory on a Windows x64 Edition installation CD-ROM is named "AMD64", in contrast to "i386" in 32-bit versions. Sun's Solaris's isalist command identifies both AMD64- and Intel 64-based systems as "amd64". Java Development Kit (JDK): the name "amd64" is used in directory names containing x86-64 files. x86_64 The Linux kernel and the GNU Compiler Collection refer to the 64-bit architecture as "x86_64". Some Linux distributions, such as Fedora, openSUSE, Arch Linux and Gentoo Linux, refer to this 64-bit architecture as "x86_64". Apple macOS refers to the 64-bit architecture as "x86-64" or "x86_64", as seen in the Terminal command arch and in their developer documentation. Breaking with most other BSD systems, DragonFly BSD refers to the 64-bit architecture as "x86_64". Haiku refers to the 64-bit architecture as "x86_64". Licensing x86-64/AMD64 was solely developed by AMD. AMD holds patents on techniques used in AMD64; those patents must be licensed from AMD in order to implement AMD64. Intel entered into a cross-licensing agreement with AMD, licensing to AMD their patents on existing x86 techniques, and licensing from AMD their patents on techniques used in x86-64. In 2009, AMD and Intel settled several lawsuits and cross-licensing disagreements, extending their cross-licensing agreements. See also AMD Generic Encapsulated Software Architecture (AGESA) IA-32 x86 Notes References External links AMD Developer Guides, Manuals & ISA Documents x86-64: Extending the x86 architecture to 64-bits – technical talk by the architect of AMD64 (video archive), and second talk by the same speaker (video archive) AMD's "Enhanced Virus Protection" Intel tweaks EM64T for full AMD64 compatibility Analyst: Intel Reverse-Engineered AMD64 Early report of differences between Intel IA32e and AMD64 Porting to 64-bit GNU/Linux Systems, by Andreas Jaeger from GCC Summit 2003. An excellent paper explaining almost all practical aspects for a transition from 32-bit to 64-bit. Intel 64 Architecture Intel Software Network: "64 bits" TurboIRC.COM tutorials, including examples of how to enter protected and long mode the raw way from DOS Seven Steps of Migrating a Program to a 64-bit System Memory Limits for Windows Releases Computer-related introductions in 2003 X86 architecture 64-bit computers Advanced Micro Devices technologies
Telesoftware The term telesoftware was coined by W.J.G. Overington, who first proposed the idea; it literally means "software at a distance". It usually refers to the transmission of programs for a microprocessor or home computer via broadcast teletext, though teletext was simply a convenient way to implement what had previously been devised as a theoretical broadcasting concept. The aim was to produce local interactivity without the need for a return information link to a central computer. The invention arose as a spin-off from research on function generators for a hybrid computer system for use in simulation of heat transfer in food preservation, and thus from outside the broadcasting research establishments. Software bytes are presented to a terminal as pairs of standard teletext characters, thus utilizing an existing and well-proven broadcasting system.
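The exact character mappings used by the broadcast services are not detailed here, but the principle of carrying one software byte as two displayable characters can be sketched as follows (the hexadecimal mapping is purely illustrative; real telesoftware formats defined their own encodings and added framing and error checking):

```c
#include <stdio.h>

/* Illustrative only: encode each software byte as two printable
   characters (here, hexadecimal digits), so a teletext page can carry
   arbitrary binary programs using its ordinary character set. */
static void encode_byte(unsigned char byte, char out[2])
{
    static const char digits[] = "0123456789ABCDEF";
    out[0] = digits[byte >> 4];    /* high nibble */
    out[1] = digits[byte & 0x0F];  /* low nibble  */
}

int main(void)
{
    const unsigned char program[] = { 0x3E, 0x41, 0xC9 }; /* example bytes */
    for (size_t i = 0; i < sizeof program; i++) {
        char pair[2];
        encode_byte(program[i], pair);
        printf("%c%c ", pair[0], pair[1]);
    }
    putchar('\n');
    return 0;
}
```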
History Telesoftware was pioneered in the UK during the 1970s and 1980s, and a paper on the subject was presented by R.H. Vivian (IBA) and W.J.G. Overington at the 1978 International Broadcasting Convention. The world's first test broadcast took place on ITV Oracle in February 1977, though there was no equipment available to use the software at that time. The broadcast simply produced a display of the encoded software, for a Signetics 2650 microprocessor, on a teletext television. However, the fact that the broadcast took place gave the concept practical credibility as something that was realistically possible for the future. At the 1978 International Broadcasting Convention a demonstration of telesoftware working from a live feed of ITV Oracle teletext, carried within the ITV signal, was presented on an exhibition stand by Mr Hedger. At one stage the ITV signal was routed via a communication satellite as part of a television demonstration, and the opportunity was used to test telesoftware over the satellite-routed signal; it worked well. Also, a display maquette with the title Telesoftware Tennis had been broadcast live for a few minutes on ITV Oracle in November or December 1976. Although that was just during a discussion of the future possibilities for telesoftware, the development in the 21st century of retrieving teletext pages from super-VHS recordings means that if anyone was recording the ITV television broadcast on super-VHS videotape at that time, that maquette page could potentially be recovered from the tape by teletext archaeologists, as could the broadcasts from 1977 mentioned above and the broadcasts made in 1978 at the time of the International Broadcasting Convention. This technique has already been used to recover and archive telesoftware broadcasts made in the 1980s by the BBC. During that time, software was broadcast at various times on all of the (then) four terrestrial TV channels. Telesoftware and tutorials were available on Ceefax (the BBC teletext service) for the BBC Micro via its teletext adapter between 1983 and 1989, and each item was generally transmitted for a period of one week. The BBC Ceefax Telesoftware service was managed by Jeremy Brayshaw. Most of the telesoftware programming tutorials were written by Gordon Horsington and they, as well as most of the software, are still available from the online telesoftware archives (see the external links below). Downloading could take place from Friday evening to the following Thursday evening. As the updating took place on a Friday, it was advised not to attempt to download software between 9am and 7pm on Fridays. Other channels provided telesoftware for several other computers via a range of adapters and set-top boxes. The same delivery system was also used to deliver satellite weather images from the Meteosat satellite for download. Although none of the early telesoftware initiatives survived, many of the techniques are now at the heart of the latest digital television systems. Various archives of BBC Ceefax Telesoftware are preserved on the internet. See also Multimedia Home Platform References BBC computer literacy projects Computer-related introductions in 1983
System X (telephony) System X is the digital switching system installed in telephone exchanges throughout the United Kingdom from 1980 onwards. History Development System X was developed by the Post Office (later to become British Telecom), GEC, Plessey, and Standard Telephones and Cables (STC), and was first shown in public in 1979 at the Telecom 79 exhibition in Geneva, Switzerland. In 1982, STC withdrew from System X and, in 1988, the telecommunications divisions of GEC and Plessey merged to form GPT, with Plessey subsequently being bought out by GEC and Siemens. In the late 1990s, GEC acquired Siemens' 40% stake in GPT and, in 1999, the parent company of GPT, GEC, renamed itself Marconi. When Marconi was sold to Ericsson in January 2006, Telent plc retained System X and continues to support and develop it as part of its UK services business. Implementation The first System X unit entered public service in September 1980; installed in Baynard House, London, it was a 'tandem junction unit' which switched telephone calls among some 40 local exchanges. The first local digital exchange started operation in 1981 in Woodbridge, Suffolk (near BT's research HQ at Martlesham Heath). The last electromechanical trunk exchange (in Thurso, Scotland) was closed in July 1990, completing the UK trunk network's transition to purely digital operation and making it the first national telephone system to achieve this. The last electromechanical local exchanges, Crawford, Crawfordjohn and Elvanfoot, all in Scotland, were changed over to digital on 23 June 1995, and the last electronic analogue exchanges, Selby, Yorkshire, and Leigh on Sea, Essex, were changed to digital on 11 March 1998. In addition to the UK, System X was installed in the Channel Islands, and several systems were installed in other countries, although it never achieved significant export sales. Small exchanges: UXD5 Separately from System X, BT developed the UXD5 ("unit exchange digital"), a small digital exchange which was cost-effective for small and remote communities. It was developed by BT at Martlesham Heath and based on the Monarch PABX; the first example was put into service at Glenkindie, Scotland, in 1979, the year before the first System X. Several hundred of these exchanges were manufactured by Plessey and installed in rural areas, largely in Scotland and Wales. The UXD5 was included as part of the portfolio when System X was marketed to other countries. System X units System X covers three main types of telephone switching equipment, installed throughout the United Kingdom. Concentrators are usually kept in local telephone exchanges but can be housed remotely in less populated areas. DLEs and DMSUs operate in major towns and cities and provide call routing functions. The BT network architecture designated exchanges as DLEs, DMSUs, DJSUs and so on, but other operators configured their exchanges differently depending on their network architecture. With the focus of the design on reliability, the general architectural principle of System X hardware is that all core functionality is duplicated across two 'sides' (side 0 and side 1). Either side of a functional resource can be the 'worker', with the other being an in-service 'standby'. Resources continually monitor themselves; should a fault be detected, the associated side marks itself as 'faulty' and the other side takes the load instantaneously.
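A minimal sketch of this duplicated worker/standby principle (the data structure and names here are hypothetical, for illustration only):

```c
#include <stdbool.h>

/* One duplicated resource: side 0 and side 1, one of which is the
   in-service 'worker' while the other is an in-service 'standby'. */
struct resource {
    bool faulty[2];
    int  worker;                      /* 0 or 1 */
};

/* Called by a side's own monitoring when it detects a fault. If the
   worker side has failed and the standby is healthy, the load moves
   over immediately. */
static void report_fault(struct resource *r, int side)
{
    r->faulty[side] = true;
    if (r->worker == side && !r->faulty[1 - side])
        r->worker = 1 - side;         /* standby takes the load */
}
```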
This resilient configuration allows hardware changes to fix faults or perform upgrades without interruption to service. Some critical hardware, such as switchplanes and waveform generators, is triplicated and works on an 'any 2 out of 3' basis. The CPUs in an R2PU processing cluster are quadruplicated to retain 75% performance capability with one out of service, instead of the 50% that would remain if they were simply duplicated. The System X processing system was the first multi-processor cluster in the world [requires confirmation]. Line cards providing customer line ports, and the 2 Mbit/s E1 terminations on the switch, have no 'second side' redundancy, although a customer can have multiple lines, or an interconnect multiple E1s, to provide resilience. Concentrator unit The concentrator unit consists of four main sub-systems: line modules, the digital concentrator switch, digital line termination (DLT) units and the control unit. Its purpose is to convert speech from analogue signals to digital format and concentrate the traffic for onward transmission to the digital local exchange (DLE). It also receives dialled information from the subscriber and passes this to the exchange processors so that the call can be routed to its destination. In normal circumstances, it does not switch signals between subscriber lines, but it has limited capacity to do this if the connection to the exchange switch is lost. Each analogue line module unit converts analogue signals from a maximum of 64 subscriber lines in the access network to the 64 kbit/s digital binary signals used in the core network. This is done by sampling the incoming signal at a rate of 8 kS/s and coding each sample into an 8-bit word using pulse-code modulation (PCM) techniques (the bit-rate arithmetic behind these figures is set out below). The line module also strips out any signalling information from the subscriber line, e.g., dialled digits, and passes this to the control unit. Up to 32 line modules are connected to a digital concentrator switch unit using 2 Mbit/s paths, giving each concentrator a capacity of up to 2048 subscriber lines. The digital concentrator switch multiplexes the signals from the line modules using time-division multiplexing and concentrates the signals onto up to 480 time slots carried on E1s to the exchange switch via the digital line termination units. The other two time slots on each E1 are used for synchronisation and signalling; these are timeslots 0 and 16 respectively. Depending on the hardware used, concentrators support the following line types: analogue lines (either single or multiple line groups), ISDN2 (basic rate ISDN) and ISDN30 (primary rate ISDN). ISDN can run either the UK-specific DASS2 or the ETSI (Euro) protocols. Subject to certain restrictions a concentrator can run any mix of line types; this allows operators to balance business ISDN users with residential users to give a better service to both and greater efficiency for the operator. Concentrator units can either stand alone as remote concentrators or be co-located with the exchange core (switch and processors). Digital local exchange The Digital Local Exchange (DLE) hosts a number of concentrators and routes calls to different DLEs or DMSUs depending on the destination of the call. The heart of the DLE is the Digital Switching Subsystem (DSS), which consists of Time Switches and a Space Switch. Incoming traffic on the 30-channel PCM highways from the concentrator units is connected to the Time Switches.
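These 30-channel PCM highways are standard 2 Mbit/s (E1) systems, and the figures quoted above fit together arithmetically:

\[ 8000\ \text{samples/s} \times 8\ \text{bits/sample} = 64\ \text{kbit/s per channel} \]
\[ 32\ \text{timeslots} \times 64\ \text{kbit/s} = 2048\ \text{kbit/s per path} \]

Of the 32 timeslots on each path, 30 carry traffic, while timeslots 0 and 16 carry synchronisation and signalling respectively.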
The purpose of the Time Switches is to take any individual incoming Time Slot and connect it to an outgoing Time Slot, thereby performing a switching and routing function (a minimal sketch of this operation is given below). To allow access to a large range of outgoing routes, individual Time Switches are connected to each other by a Space Switch. The Time Slot inter-connections are held in Switch Maps, which are updated by software running on the Processor Utility Subsystem (PUS). The nature of the Time Switch–Space Switch architecture is such that the system is very unlikely to be affected by a faulty time or space switch unless many faults are present. The switch is 'non-blocking'. Digital main switching unit The Digital Main Switching Unit (DMSU) deals with calls that have been routed by DLEs or another DMSU and is a 'trunk / transit switch', i.e. it does not host any concentrators. As with DLEs, DMSUs are made up of a Digital Switching Subsystem and a Processor Utility Subsystem, amongst other things. In the British PSTN network, each DMSU is connected to every other DMSU in the country, enabling almost congestion-proof connectivity for calls through the network. In inner London, specialised versions of the DMSU exist, known as DJSUs; they are practically identical in terms of hardware, both being fully equipped switches, but the DJSU carries inter-London traffic only. The DMSU network in London has been gradually phased out and moved onto more modern "NGS" switches over the years, as demand for PSTN phone lines has decreased and BT has sought to reclaim some of its floor space. The NGS switch referred to is a version of Ericsson's AXE10 product line, phased in between the late 1990s and early 2000s. It is common to find multiple exchanges (switches) within the same exchange building in large UK cities: DLEs for the directly connected customers and a DMSU to provide the links to the rest of the UK. Processor utility subsystem The Processor Utility Subsystem (PUS) controls the switching operations and is the brain of the DLE or DMSU. It hosts the call processing, billing, switching and maintenance application software, amongst other software subsystems. The PUS is divided into up to eight 'clusters' depending on the amount of telephony traffic dealt with by the exchange. Each of the first four clusters of processors contains four central processing units (CPUs), the main memory stores (STRs) and two types of backing store (primary (RAM) and secondary (hard disk)) memory. The PUS was coded in a version of the CORAL66 programming language known as PO CORAL (Post Office CORAL), later known as BTCORAL. The original processor that went into service at Baynard House, London, was known as the MK2 BL processor. It was replaced in 1980 by the POPUS1 (Post Office Processor Utility Subsystem). POPUS1 processors were later installed in Lancaster House in Liverpool and also in Cambridge. Later, these too were replaced with a much smaller system known as R2PU, or Release 2 Processor Utility; this was the four-CPU-per-cluster, up-to-eight-cluster system described above. Over time, as the system was developed, additional "CCP / Performance 3" clusters were added (clusters 5, 6, 7 and 8) using more modern hardware, akin to late-1990s computer technology, while the original processing clusters 0 to 3 were upgraded with, for example, larger stores (more RAM).
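As referenced above, a time switch can be sketched as a buffer written in time-slot order and read out through a switch map (the structure and names here are illustrative, not System X's actual implementation):

```c
#define SLOTS 32   /* time slots per 2 Mbit/s PCM frame */

/* One frame's worth of incoming samples is buffered, then read out
   in a different order according to the switch map; this is what
   connects an incoming time slot to an outgoing one. */
struct time_switch {
    unsigned char buffer[SLOTS];   /* samples stored in arrival order */
    int map[SLOTS];                /* map[out_slot] = in_slot */
};

static void write_frame(struct time_switch *ts, const unsigned char in[SLOTS])
{
    for (int slot = 0; slot < SLOTS; slot++)
        ts->buffer[slot] = in[slot];
}

static void read_frame(const struct time_switch *ts, unsigned char out[SLOTS])
{
    for (int slot = 0; slot < SLOTS; slot++)
        out[slot] = ts->buffer[ts->map[slot]];  /* switch-map lookup */
}
```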
The processor system had many advanced features which help explain why these exchanges are still in use today: self fault detection and recovery, battery-backed RAM, mirrored disk storage, automatic replacement of a failed memory unit, and the ability to trial new software and, if necessary, roll back to the previous version. In recent times, the hard disks on the CCP clusters have been replaced with solid-state drives to improve reliability. In modern times, all System X switches show a maximum of 12 processing clusters; 0–3 are the four-CPU System X-based clusters and the remaining eight positions can be filled with CCP clusters, which deal with all traffic handling. Whilst the status quo for a large System X switch is to have four main and four CCP clusters, there are one or two switches that have four main and six CCP clusters. The CCP clusters are limited to call handling only; there was the potential for the exchange software to be re-written to accept the CCP clusters throughout, but this was scrapped as too costly a way to replace a system that was already working well. Should a CCP cluster fail, System X will automatically re-allocate its share of the call handling to another CCP cluster; if no CCP clusters are available, the exchange's main clusters will begin to take over the work of call handling as well as running the exchange. In terms of structure, the System X processor is a "one master, many slaves" configuration: cluster 0 is referred to as the base cluster and all other clusters are effectively dependent on it. If a slave cluster is lost, call handling for any routes or concentrators dependent on it is also lost; however, if the base cluster is lost then the entire exchange ceases to function. This is a very rare occurrence, as System X is designed to isolate problematic hardware and raise a fault report. During normal operation, the highest level of disruption likely to occur is a base cluster restart: all exchange functions are lost for 2–5 minutes while the base cluster and its slaves come back online, but afterwards the exchange will continue to function with the defective hardware isolated. The exchange can and will restart ('rollback') individual processes if it detects problems with them. If that does not work, a cluster restart can be performed. Should the base cluster or switch be irrecoverable via restarts, the latest archive configuration can be manually reloaded using the restoration procedure. This can take hours to bring everything fully back into service, as the switch has to reload all its semi-permanent paths and the concentrators have to download their configurations. Post-2020, exchange software is being modified to reduce the restoration time significantly. During normal operation, the exchange's processing clusters sit at between 5% and 15% usage, with the exception of the base cluster, which usually sits at between 15% and 25%, spiking as high as 45%; this is because the base cluster handles far more operations and processes than any other cluster on the switch. Editions of System X System X has gone through two major editions, Mark 1 and Mark 2, referring to the switch matrix used. The Mark 1 Digital Subscriber Switch (DSS) was the first to be introduced. It is a time-space-time switch with a theoretical maximum matrix of 96x96 Time Switches. In practice, the maximum size of the switch is a 64x64 Time Switch matrix.
Each time switch is duplicated into two security planes, 0 and 1. This allows for error checking between the planes and multiple routing options if faults are found. Every timeswitch on a single plane can be out of service with full function of the switch maintained; however, if one timeswitch on plane 0 is out and another on plane 1 is out, then links between the two are lost. Similarly, if a timeswitch has both plane 0 and plane 1 out, then that timeswitch is isolated. Each plane of the timeswitch occupies one shelf in a three-shelf group: the lower shelf is plane 0, the upper shelf is plane 1 and the middle shelf is occupied by up to 32 DLTs (Digital Line Terminations). The DLT is a 2048 kbit/s 32-channel PCM link in and out of the exchange. The space switch is a more complicated entity, but is given a name ranging from AA to CC (or BB within general use), a plane of 0 or 1 and, due to the way it is laid out, an even or odd segment, designated by another 0 or 1. The name of a space switch in software can therefore look like this: SSW H'BA-0-1. The space switch is the entity that provides the logical cross-connection of traffic across the switch, and the time switches are dependent on it. When working on a space switch it is imperative to make sure the rest of the switch is healthy as, due to its layout, powering off either the odd or even segment of a space switch will "kill" all of its dependent time switches for that plane. Mark 1 DSS is controlled by a triplicated set of Connection Control Units (CCUs), which run on a 2-out-of-3 majority basis for error checking, and is monitored constantly by a duplicated Alarm Monitoring Unit (AMU), which reports faults back to the DSS Handler process for appropriate action to be taken. The CCU and AMU also play a part in diagnostic testing of Mark 1 DSS. A Mark 1 System X unit is built in suites, each 8 racks in length, and there can be 15 or more suites. Considerations of space, power demand and cooling demand led to the development of the Mark 2. Mark 2 DSS ("DSS2") is the later revision, which continues to use the same processor system as Mark 1 but makes substantial and much-needed revisions to both the physical size of the switch and the way the switch functions. It is an optical-fibre-based time-space-time-space-time switching matrix, connecting a maximum of 2048 2 Mbit/s PCM systems, much like Mark 1; however, the hardware is much more compact. The four-rack group of the Mk1 CCU and AMU is gone, replaced by a single connection control rack comprising the Outer Switch Modules (OSMs), Central Switch Modules (CSMs) and the relevant switch/processor interface hardware. The Timeswitch shelves are replaced with Digital Line Terminator Group (DLTG) shelves, which each contain two DLTGs, comprising 16 Double Digital Line Termination boards (DDLTs) and two Line Communication Multiplexors (LCMs), one for each security plane. The LCMs are connected by optical fibre over a 40 Mbit/s link to the OSMs. In total, there are 64 DLTGs in a fully sized Mk2 DSS unit, which is analogous to the 64 Time Switches of the Mk1 DSS unit. The Mk2 DSS unit is much smaller than the Mk1, and as a result consumes less power and generates less heat. It is also possible to interface directly with SDH transmission over fibre at 40 Mbit/s, thus reducing the amount of 2 Mbit/s DDF and SDH tributary usage. Theoretically, a transit switch (DMSU) could interface purely with the SDH over fibre, with no DDF at all.
Further to this, due to the completely revised switch design and layout, the Mk2 switch manages to be somewhat faster than the Mk1 (although the actual difference is negligible in practice). It is also far more reliable: having far fewer discrete components in each of its sections means there is much less to go wrong, and when something does go wrong it is usually a matter of replacing the card tied to the software entity that has failed, rather than needing to run diagnostics to determine possible locations for the point of failure, as is the case with Mk1 DSS. Message Transmission Subsystem A System X exchange's processors communicate with its concentrators and other exchanges using the Message Transmission Subsystem (MTS). MTS links are 'nailed up' between nodes by re-purposing individual 64 kbit/s digital speech channels across the switch into permanent paths for the signalling messages to route over. Messaging to and from concentrators uses proprietary messaging; messaging between exchanges uses C7/SS7 signalling, with both UK-specific and ETSI variant protocols supported. It was also possible to use channel-associated signalling, but as the UK's and Europe's exchanges went digital in the same era this was hardly used. Replacement system Many of the System X exchanges installed during the 1980s continue in service into the 2020s. System X was scheduled for replacement with next-generation softswitch equipment as part of the BT 21st Century Network (21CN) programme. Some other users of System X – in particular Jersey Telecom and Kingston Communications – replaced their circuit-switched System X equipment with Marconi XCD5000 softswitches (which were intended as the NGN replacement for System X) and Access Hub multiservice access nodes. However, the omission of Marconi from BT's 21CN supplier list, the lack of a suitable replacement softswitch to match System X's reliability, and the shift in focus away from telephony to broadband all led to much of the System X estate being maintained. Later software versions allow more concentrators to be connected to the core of the exchange, and thus BT is rationalising its System X estate by re-parenting concentrators from old exchanges with Mk1 DSS onto newer exchanges with Mk2 DSS, often converting DMSUs into CTLEs (combined trunk and local exchanges). The need for these CTLEs to host large numbers of concentrators (90+) resulted in unacceptably long restoration times. As such, the exchange software has been heavily re-written to speed up restoration. Previously, the exchange would bring resources into service in a fairly random fashion, whereas the new software concentrates on getting service back as fast as possible: reduced downloads, faster downloads, bringing subsystems and concentrators back up single-sided, and so on. Closing the subsequently redundant Mk1 exchanges enables a saving in floor space, power and cooling costs, with some buildings given up entirely. See also System Y AXE telephone exchange TXE PRX References BT Group Plessey Telephone exchange equipment Telecommunications-related introductions in 1980 Telecommunications in the United Kingdom General Electric Company
CP/M-86 CP/M-86 was a version of the CP/M operating system that Digital Research (DR) made for the Intel 8086 and Intel 8088. The system commands are the same as in CP/M-80. Executable files used the relocatable .CMD file format. Digital Research also produced a multi-user multitasking operating system compatible with CP/M-86, MP/M-86, which later evolved into Concurrent CP/M-86. When an emulator was added to provide PC DOS compatibility, the system was renamed Concurrent DOS, which later became Multiuser DOS, of which REAL/32 is the latest incarnation. The FlexOS, DOS Plus, and DR DOS families of operating systems started as derivations of Concurrent DOS as well. History Digital Research's CP/M-86 was originally announced for release in November 1979, but was delayed repeatedly. When IBM contacted other companies to obtain components for the IBM PC, the as-yet unreleased CP/M-86 was its first choice for an operating system, because CP/M had the most applications at the time. Negotiations between Digital Research and IBM quickly deteriorated over IBM's non-disclosure agreement and its insistence on a one-time fee rather than DRI's usual royalty licensing plan. After discussions with Microsoft, IBM decided to use 86-DOS (QDOS), a CP/M-like operating system that Microsoft had bought from Seattle Computer Products and renamed MS-DOS. Microsoft adapted it for the PC and licensed it to IBM; it was sold by IBM under the name PC DOS. After learning about the deal, Digital Research founder Gary Kildall threatened to sue IBM for infringing DRI's intellectual property, and IBM agreed to offer CP/M-86 as an alternative operating system on the PC to settle the claim. Most of the BIOS drivers for CP/M-86 for the IBM PC were written by Andy Johnson-Laird. The IBM PC was announced on 12 August 1981, and the first machines began shipping in October the same year, ahead of schedule. CP/M-86 was one of three operating systems available from IBM, alongside PC DOS and the UCSD p-System. Digital Research's adaptation of CP/M-86 for the IBM PC was released six months after PC DOS, in spring 1982, and porting applications from CP/M-80 to either operating system was about equally difficult. In November 1981, Digital Research also released a version for the proprietary IBM Displaywriter. On some dual-processor 8-bit/16-bit computers, special versions of CP/M-86 could natively run both CP/M-86 and CP/M-80 applications. A version for the DEC Rainbow was named CP/M-86/80, whereas the version for the was named CP/M 8-16 (see also: MP/M 8-16). The version of CP/M-86 for the 8085/8088-based Zenith Z-100 supported running programs for both processors as well. When PC clones came about, Microsoft licensed MS-DOS to other companies as well. Experts found that the two operating systems were technically comparable, with CP/M-86 having better memory management but DOS being faster. BYTE speculated that Microsoft reserving multitasking for Xenix "appears to leave a big opening" for Concurrent CP/M-86. On the IBM PC, however, at per copy for IBM's version, CP/M-86 sold poorly compared to PC DOS; one survey found that 96.3% of IBM PCs were ordered with DOS, compared to 3.4% with CP/M-86 or Concurrent CP/M-86. In mid-1982 Lifeboat Associates, perhaps the largest CP/M software vendor, announced its support for DOS over CP/M-86 on the IBM PC.
BYTE warned that IBM, Microsoft, and Lifeboat's support for DOS "poses a serious threat to" CP/M-86, and Jerry Pournelle stated in the magazine that "it is clear that Digital Research made some terrible mistakes in the marketing". By early 1983 DRI began selling CP/M-86 1.1 to end users for . Advertisements called CP/M-86 a "terrific value", with "instant access to the largest collection of applications software in existence … hundreds of proven, professional software programs for every business and education need"; it also included Graphics System Extension (GSX), formerly . In May 1983 the company announced that it would offer DOS versions of all of its languages and utilities. It stated that "obviously, PC DOS has made great market penetration on the IBM PC; we have to admit that", but claimed that "the fact that CP/M-86 has not done as well as DRI had hoped has nothing to do with our decision". By early 1984 DRI was giving free copies of Concurrent CP/M-86 to those who purchased two CP/M-86 applications as a limited-time offer, and advertisements stated that the applications were booters, which did not require loading CP/M-86 first. In January 1984, DRI also announced Kanji CP/M-86, a Japanese version of CP/M-86, for nine Japanese companies including Mitsubishi Electric Corporation, Sanyo Electric Co. Ltd. and Sord Computer Corp. In December 1984 Fujitsu announced a number of FM-16-based machines using Kanji CP/M-86. CP/M-86 and DOS had very similar functionality, but were not compatible, because the system calls for the same functions and the program file formats were different; two versions of the same software had to be produced and marketed to run under both operating systems. The command interface again had similar functionality but different syntax: where CP/M-86 (and CP/M) copied file SOURCE to TARGET with the command PIP TARGET=SOURCE, DOS used COPY SOURCE TARGET. Initially, MS-DOS and CP/M-86 also ran on computers not necessarily hardware-compatible with the IBM PC, such as the Apricot and Sirius, the intention being that software would be independent of hardware by making standardised operating system calls to a version of the operating system custom-tailored to the particular hardware. However, writers of software which required fast performance accessed the IBM PC hardware directly instead of going through the operating system, resulting in PC-specific software which performed better than other MS-DOS and CP/M-86 versions; for example, games would display quickly by writing to video memory directly instead of suffering the delay of a call to the operating system, which would then write to a hardware-dependent memory location. Non-PC-compatible computers were soon replaced by models with hardware which behaved identically to the PC's. A consequence of the universal adoption of the detailed PC architecture was that no more than 640 kilobytes of memory were supported; early machines running MS-DOS and CP/M-86 did not suffer from this restriction, and some could make use of nearly one megabyte of RAM. Reception PC Magazine wrote that CP/M-86 "in several ways seems better fitted to the PC" than DOS; however, for those who did not plan to program in assembly language, because it cost six times more, "CP/M seems a less compelling purchase". It stated that CP/M-86 was strong in areas where DOS was weak, and vice versa, and that the level of application support for each operating system would be most important, although CP/M-86's lack of a run-time version for applications was a weakness.
Versions A given version of CP/M-86 has two version numbers: one applies to the whole system and is usually displayed at startup; the other applies to the BDOS kernel. Versions known to exist include: All known Personal CP/M-86 versions contain references to CP/M-86 Plus, suggesting that they are derived from the CP/M-86 Plus codebase. A number of 16-bit CP/M-86 derivatives existed in the former Eastern Bloc under the names SCP1700, CP/K, and K8918-OS. They were produced by the East German VEB Robotron Dresden and Berlin. Legacy From 1997, Caldera permitted the redistribution and modification of all original Digital Research files related to the CP/M family, including source code, through Tim Olmstead's "The Unofficial CP/M Web site". After Olmstead's death on 12 September 2001, the free distribution license was refreshed and expanded by Lineo, who had meanwhile become the owner of those Digital Research assets, on 19 October 2001. See also History of computing hardware (1960s–present) SpeedStart CP/M-86 DOS Plus Notes References Further reading External links The Unofficial CP/M Website, which has a licence from the copyright holder to distribute original Digital Research software. The comp.os.cpm FAQ Intel iPDS-100 Using CP/M-Video CP/M variants IBM PC compatibles Microcomputer software Digital Research operating systems Discontinued operating systems Floppy disk-based operating systems Free software operating systems X86 operating systems 1981 software
Hardware virtualization Hardware virtualization is the virtualization of computers as complete hardware platforms, certain logical abstractions of their componentry, or only the functionality required to run various operating systems. Virtualization hides the physical characteristics of a computing platform from the users, presenting instead an abstract computing platform. At its origins, the software that controlled virtualization was called a "control program", but the terms "hypervisor" and "virtual machine monitor" became preferred over time. Concept The term "virtualization" was coined in the 1960s to refer to a virtual machine (sometimes called a "pseudo machine"), a term which itself dates from the experimental IBM M44/44X system. The creation and management of virtual machines has more recently been called "platform virtualization" or "server virtualization". Platform virtualization is performed on a given hardware platform by host software (a control program), which creates a simulated computer environment, a virtual machine (VM), for its guest software. The guest software is not limited to user applications; many hosts allow the execution of complete operating systems. The guest software executes as if it were running directly on the physical hardware, with several notable caveats. Access to physical system resources (such as network access, the display, keyboard, and disk storage) is generally managed at a more restrictive level than access to the host processor and system memory. Guests are often restricted from accessing specific peripheral devices, or may be limited to a subset of the device's native capabilities, depending on the hardware access policy implemented by the virtualization host. Virtualization often exacts performance penalties, both in the resources required to run the hypervisor and in reduced performance of the virtual machine compared to running natively on the physical machine. Reasons for virtualization In the case of server consolidation, many small physical servers are replaced by one larger physical server to decrease the need for more (costly) hardware resources such as CPUs and hard drives. Although hardware is consolidated in virtual environments, typically OSs are not. Instead, each OS running on a physical server is converted to a distinct OS running inside a virtual machine. The large server can thereby "host" many such "guest" virtual machines. This is known as Physical-to-Virtual (P2V) transformation. In addition to reducing equipment and labor costs associated with equipment maintenance, consolidating servers can also have the added benefit of reducing energy consumption and the environmental footprint of the technology. For example, a typical server runs at 425 W, and VMware estimates a hardware reduction ratio of up to 15:1. A virtual machine (VM) can be more easily controlled and inspected from a remote site than a physical machine, and the configuration of a VM is more flexible. This is very useful in kernel development and for teaching operating system courses, including running legacy operating systems that do not support modern hardware. A new virtual machine can be provisioned as required without the need for an up-front hardware purchase. A virtual machine can easily be relocated from one physical machine to another as needed. For example, a salesperson going to a customer can copy a virtual machine with the demonstration software to their laptop, without the need to transport the physical computer.
Likewise, an error inside a virtual machine does not harm the host system, so there is no risk of the OS crashing on the laptop. Because of this ease of relocation, virtual machines can be readily used in disaster recovery scenarios. However, when multiple VMs are concurrently running on the same physical host, each VM may exhibit varying and unstable performance which depends heavily on the workload imposed on the system by other VMs. This issue can be addressed by appropriate installation techniques for temporal isolation among virtual machines. There are several approaches to platform virtualization. Examples of virtualization use cases: Running one or more applications that are not supported by the host OS: a virtual machine running the required guest OS could permit the desired applications to run, without altering the host OS. Evaluating an alternate operating system: the new OS could be run within a VM, without altering the host OS. Server virtualization: multiple virtual servers could be run on a single physical server, in order to more fully utilize the hardware resources of the physical server. Duplicating specific environments: a virtual machine could, depending on the virtualization software used, be duplicated and installed on multiple hosts, or restored to a previously backed-up system state. Creating a protected environment: if a guest OS running on a VM becomes damaged in a way that is not cost-effective to repair, such as may occur when studying malware or installing badly behaved software, the VM may simply be discarded without harm to the host system, and a clean copy used upon rebooting the guest. Full virtualization In full virtualization, the virtual machine simulates enough hardware to allow an unmodified "guest" OS designed for the same instruction set to be run in isolation. This approach was pioneered in 1966 with the IBM CP-40 and CP-67, predecessors of the VM family. Hardware-assisted virtualization In hardware-assisted virtualization, the hardware provides architectural support that facilitates building a virtual machine monitor and allows guest OSs to be run in isolation. Hardware-assisted virtualization was first introduced on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating system. In 2005 and 2006, Intel and AMD provided additional hardware to support virtualization. Sun Microsystems (now Oracle Corporation) added similar features in their UltraSPARC T-Series processors in 2005. In 2006, first-generation 32- and 64-bit x86 hardware support was found to rarely offer performance advantages over software virtualization. Paravirtualization In paravirtualization, the virtual machine does not necessarily simulate hardware, but instead (or in addition) offers a special API that can only be used by modifying the "guest" OS. For this to be possible, the "guest" OS's source code must be available. If the source code is available, it is sufficient to replace sensitive instructions with calls to VMM APIs (e.g. replacing "cli" with "vm_handle_cli()"), then re-compile the OS and use the new binaries. This system call to the hypervisor is called a "hypercall" in TRANGO and Xen; it is implemented via a DIAG ("diagnose") hardware instruction in IBM's CMS under VM (which was the origin of the term hypervisor).
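A minimal sketch of this source-level substitution, using the hypercall name from the example above (the surrounding structure is illustrative, not any particular hypervisor's actual interface):

```c
/* Native guest kernel: executes the privileged instruction directly.
   Under a hypervisor this would trap or silently misbehave. */
static inline void irq_disable_native(void)
{
    __asm__ volatile ("cli");
}

/* Placeholder for the hypervisor's entry point; in a real system this
   would invoke the VMM's hypercall mechanism (trap, call gate, DIAG
   instruction, etc.). */
void vm_handle_cli(void) { /* ask the VMM to disable interrupts */ }

/* Paravirtualized guest kernel: the sensitive instruction is replaced
   at source level with an explicit call into the VMM's API, after
   which the kernel is recompiled. */
static inline void irq_disable_paravirt(void)
{
    vm_handle_cli();
}
```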
Operating-system-level virtualization In operating-system-level virtualization, a physical server is virtualized at the operating system level, enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest" operating system environments share the same running instance of the operating system as the host system. Thus, the same operating system kernel is also used to implement the "guest" environments, and applications running in a given "guest" environment view it as a stand-alone system. Hardware virtualization disaster recovery A disaster recovery (DR) plan is often considered good practice for a hardware virtualization platform. DR of a virtualization environment can ensure a high rate of availability during a wide range of situations that disrupt normal business operations. In situations where continued operation of hardware virtualization platforms is important, a disaster recovery plan can ensure that hardware performance and maintenance requirements are met. A hardware virtualization disaster recovery plan involves both hardware and software protection by various methods, including those described below. Tape backup for software data long-term archival needs This common method can be used to store data offsite, but data recovery can be a difficult and lengthy process. Tape backup data is only as good as the latest copy stored, and tape backup methods require a backup device and ongoing storage material. Whole-file and application replication This method requires control software and storage capacity for replication of application and data files, typically on the same site. The data is replicated to a different disk partition or a separate disk device; replication can be a scheduled activity for most servers and is implemented more often for database-type applications. Hardware and software redundancy This method ensures the highest level of disaster recovery protection for a hardware virtualization solution by providing duplicate hardware and software replication in two distinct geographic areas. See also Application virtualization Comparison of platform virtualization software Desktop virtualization Dynamic infrastructure Hardware emulation Hyperjacking Instruction set simulator Popek and Goldberg virtualization requirements Physicalization Thin provisioning Virtual appliance Virtualization for aggregation Workspace virtualization References External links An Introduction to Virtualization, by Amit Singh Xen and the Art of Virtualization, ACM, 2003, by a group of authors Linux Virtualization Software
Open system (systems theory) An open system is a system that has external interactions. Such interactions can take the form of information, energy, or material transfers into or out of the system boundary, depending on the discipline which defines the concept. An open system is contrasted with the concept of an isolated system, which exchanges neither energy, matter, nor information with its environment. An open system is also known as a flow system. The concept of an open system was formalized within a framework that enabled one to interrelate the theory of the organism, thermodynamics, and evolutionary theory. This concept was expanded upon with the advent of information theory and subsequently systems theory. Today the concept has applications in the natural and social sciences. In the natural sciences an open system is one whose border is permeable to both energy and mass. By contrast, a closed system is permeable to energy but not to matter. The definition of an open system assumes that there are supplies of energy that cannot be depleted; in practice, this energy is supplied from some source in the surrounding environment, which can be treated as infinite for the purposes of study. One type of open system is the radiant energy system, which receives its energy from solar radiation – an energy source that can be regarded as inexhaustible for all practical purposes. Social sciences In the social sciences an open system is a process that exchanges material, energy, people, capital and information with its environment. The French-Greek philosopher Kostas Axelos argued that seeing the "world system" as inherently open (though unified) would solve many of the problems in the social sciences, including that of praxis (the relation of knowledge to practice), so that various social scientific disciplines would work together rather than create monopolies whereby the world appears only sociological, political, historical, or psychological. Axelos argues that theorizing a closed system contributes to making it closed, and is thus a conservative approach. The Althusserian concept of overdetermination (drawing on Sigmund Freud) posits that there are always multiple causes in every event. David Harvey uses this to argue that when systems such as capitalism enter a phase of crisis, the crisis can unfold through any of a number of elements, such as gender roles, the relation to nature and the environment, or crises in accumulation. Looking at the crisis in accumulation, Harvey argues that phenomena such as foreign direct investment, privatization of state-owned resources, and accumulation by dispossession act as necessary outlets when capital has over-accumulated in private hands and cannot circulate effectively in the marketplace. He cites the forcible displacement of Mexican and Indian peasants since the 1970s and the Asian and South-East Asian financial crisis of 1997–8, involving "hedge fund raiding" of national currencies, as examples of this. Structural functionalists such as Talcott Parsons and neofunctionalists such as Niklas Luhmann have incorporated systems theory to describe society and its components. The sociology of religion finds both open and closed systems within the field of religion.
Thermodynamics See the book Systems engineering See also Business process Complex system Dynamical system Glossary of systems theory Ludwig von Bertalanffy Maximum power principle Non-equilibrium thermodynamics Open system (computing) Open System Environment Reference Model Openness Open and Closed Systems in Social Science Phantom loop Thermodynamic system References Further reading Khalil, E.L. (1995). Nonlinear thermodynamics and social science modeling: fad cycles, cultural development and identificational slips. The American Journal of Economics and Sociology, Vol. 54, Issue 4, pp. 423–438. Weber, B.H. (1989). Ethical Implications Of The Interface Of Natural And Artificial Systems. Delicate Balance: Technics, Culture and Consequences: Conference Proceedings for the Institute of Electrical and Electronic Engineers. External links OPEN SYSTEM, Principia Cybernetica Web, 2007. Cybernetics Thermodynamic systems System
Tailored Access Operations The Office of Tailored Access Operations (TAO), now Computer Network Operations, structured as S32, is a cyber-warfare intelligence-gathering unit of the National Security Agency (NSA). It has been active since at least 1998, possibly 1997, but was not named or structured as TAO until "the last days of 2000", according to General Michael Hayden. TAO identifies, monitors, infiltrates, and gathers intelligence on computer systems being used by entities foreign to the United States. History TAO is reportedly "the largest and arguably the most important component of the NSA's huge Signals Intelligence Directorate (SID), consisting of more than 1,000 military and civilian computer hackers, intelligence analysts, targeting specialists, computer hardware and software designers, and electrical engineers". Snowden leak A document leaked by former NSA contractor Edward Snowden describing the unit's work says TAO has software templates allowing it to break into commonly used hardware, including "routers, switches, and firewalls from multiple product vendor lines". TAO engineers prefer to tap networks rather than isolated computers, because there are typically many devices on a single network. Organization TAO's headquarters are termed the Remote Operations Center (ROC) and are based at the NSA headquarters at Fort Meade, Maryland. TAO has also expanded to NSA Hawaii (Wahiawa, Oahu), NSA Georgia (Fort Gordon, Georgia), NSA Texas (Joint Base San Antonio, Texas), and NSA Colorado (Buckley Space Force Base, Denver).
S321 – Remote Operations Center (ROC): in the Remote Operations Center, 600 employees gather information from around the world.
S323 – Data Network Technologies Branch (DNT): develops automated spyware.
S3231 – Access Division (ACD)
S3232 – Cyber Networks Technology Division (CNT)
S3233 –
S3234 – Computer Technology Division (CTD)
S3235 – Network Technology Division (NTD)
Telecommunications Network Technologies Branch (TNT): improves network and computer hacking methods.
Mission Infrastructure Technologies Branch: operates the software provided above.
S328 – Access Technologies Operations Branch (ATO): reportedly includes personnel seconded by the CIA and the FBI, who perform what are described as "off-net operations", meaning they arrange for CIA agents to surreptitiously plant eavesdropping devices on computers and telecommunications systems overseas so that TAO's hackers may remotely access them from Fort Meade. Specially equipped submarines, currently the USS Jimmy Carter, are used to wiretap fibre-optic cables around the globe.
S3283 – Expeditionary Access Operations (EAO)
S3285 – Persistence Division
Virtual locations Details on a program titled QUANTUMSQUIRREL indicate the NSA's ability to masquerade as any routable IPv4 or IPv6 host. This enables an NSA computer to generate a false geographical location and false personal identification credentials when accessing the Internet utilizing QUANTUMSQUIRREL. NSA ANT catalog The NSA ANT catalog is a 50-page classified document listing technology available to the United States National Security Agency (NSA) Tailored Access Operations (TAO) unit from the Advanced Network Technology (ANT) Division to aid in cyber surveillance. Most devices are described as already operational and available to US nationals and members of the Five Eyes alliance.
According to Der Spiegel, which released the catalog to the public on December 30, 2013, "The list reads like a mail-order catalog, one from which other NSA employees can order technologies from the ANT division for tapping their targets' data." The document was created in 2008. Security researcher Jacob Appelbaum gave a speech at the Chaos Communications Congress in Hamburg, Germany, in which he detailed techniques that the simultaneously published Der Spiegel article he coauthored disclosed from the catalog. QUANTUM attacks The TAO has developed an attack suite it calls QUANTUM. It relies on a compromised router that duplicates internet traffic, typically HTTP requests, so that they go both to the intended target and to an NSA site (indirectly). The NSA site runs FOXACID software, which sends back exploits that load in the background in the target web browser before the intended destination has had a chance to respond (it is unclear whether the compromised router facilitates this race on the return trip). Prior to the development of this technology, FOXACID relied on spear-phishing attacks, which the NSA referred to as spam. If the browser is exploitable, further permanent "implants" (rootkits etc.) are deployed in the target computer, e.g. OLYMPUSFIRE for Windows, which give complete remote access to the infected machine. This type of attack belongs to the man-in-the-middle attack family, though more specifically it is called a man-on-the-side attack. It is difficult to pull off without controlling some of the Internet backbone. There are numerous services that FOXACID can exploit this way. The names of some FOXACID modules are given below:
alibabaForumUser
doubleclickID
rocketmail
hi5
HotmailID
LinkedIn
mailruid
msnMailToken64
qq
Facebook
simbarid
Twitter
Yahoo
Gmail
YouTube
Through collaboration with the British Government Communications Headquarters (GCHQ) (MUSCULAR), Google services, including Gmail, could be attacked too. Finding machines that are exploitable and worth attacking is done using analytic databases such as XKeyscore. A specific method of finding vulnerable machines is interception of Windows Error Reporting traffic, which is logged into XKeyscore. QUANTUM attacks launched from NSA sites can be too slow for some combinations of targets and services, as they essentially try to exploit a race condition: the NSA server is trying to beat the legitimate server with its response. As of mid-2011, the NSA was prototyping a capability codenamed QFIRE, which involved embedding its exploit-dispensing servers in virtual machines (running on VMware ESX) hosted closer to the target, in the so-called Special Collection Sites (SCS) network worldwide. The goal of QFIRE was to lower the latency of the spoofed response, thus increasing the probability of success. COMMENDEER is used to commandeer (i.e. compromise) untargeted computer systems. The software is used as a part of QUANTUMNATION, which also includes the software vulnerability scanner VALIDATOR. The tool was first described at the 2014 Chaos Communication Congress by Jacob Appelbaum, who characterized it as tyrannical. QUANTUMCOOKIE is a more complex form of attack which can be used against Tor users. Known targets and collaborations Suspected and confirmed targets of the Tailored Access Operations unit include national and international entities like China, OPEC, and Mexico's Secretariat of Public Security.
The group has also targeted global communication networks via SEA-ME-WE 4, an optical-fibre submarine communications cable system that carries telecommunications between Singapore, Malaysia, Thailand, Bangladesh, India, Sri Lanka, Pakistan, the United Arab Emirates, Saudi Arabia, Sudan, Egypt, Italy, Tunisia, Algeria and France. Additionally, Försvarets radioanstalt (FRA) in Sweden gives access to fibre-optic links for QUANTUM cooperation. TAO's QUANTUM INSERT technology was passed to UK services, particularly to GCHQ's MyNOC, which used it to target Belgacom and GPRS roaming exchange (GRX) providers like Comfone, Syniverse, and Starhome. Belgacom, which provides services to the European Commission, the European Parliament and the European Council, discovered the attack. In concert with the CIA and FBI, TAO is used to intercept laptops purchased online, divert them to secret warehouses where spyware and hardware are installed, and send them on to customers. TAO has also targeted the internet browsers Tor and Firefox. According to a 2013 article in Foreign Policy, TAO has become "increasingly accomplished at its mission, thanks in part to the high-level cooperation it secretly receives from the 'big three' American telecom companies (AT&T, Verizon and Sprint), most of the large US-based Internet service providers, and many of the top computer security software manufacturers and consulting companies." A 2012 TAO budget document claims that these companies, at TAO's behest, "insert vulnerabilities into commercial encryption systems, IT systems, networks and endpoint communications devices used by targets". A number of US companies, including Cisco and Dell, have subsequently made public statements denying that they insert such back doors into their products. Microsoft provides advance warning to the NSA of vulnerabilities it knows about, before fixes or information about these vulnerabilities is available to the public; this enables TAO to execute so-called zero-day attacks. A Microsoft official who declined to be identified in the press confirmed that this is indeed the case, but said that Microsoft cannot be held responsible for how the NSA uses this advance information. Leadership Since 2013, the head of TAO has been Rob Joyce, an employee of more than 25 years who previously worked in the NSA's Information Assurance Directorate (IAD). In January 2016, Joyce made a rare public appearance when he gave a presentation at Usenix's Enigma conference. See also Advanced persistent threat Cyberwarfare in the United States Equation Group Magic Lantern (software) MiniPanzer and MegaPanzer PLA Unit 61398 Stuxnet Syrian Electronic Army WARRIOR PRIDE References External links Inside TAO: Documents Reveal Top NSA Hacking Unit NSA 'hacking unit' infiltrates computers around the world – report NSA Tailored Access Operations https://www.wired.com/threatlevel/2013/09/nsa-router-hacking/ https://www.nytimes.com/2014/01/15/us/nsa-effort-pries-open-computers-not-connected-to-internet.html Getting the 'Ungettable' Intelligence: An Interview with TAO's Teresa Shea Computer surveillance Cyberwarfare in the United States Hacker groups Intelligence agency programmes revealed by Edward Snowden National Security Agency American advanced persistent threat groups
Operating System (OS)
984
Tolapai Tolapai is the code name of Intel's embedded system on a chip (SoC) which combines a Pentium M (Dothan) processor core, DDR2 memory controllers, input/output (I/O) controllers, and a QuickAssist integrated accelerator unit for security functions. Overview The Tolapai embedded processor has 148 million transistors on a 90 nm process technology, uses a 1088-ball FCBGA with a 1.092 mm pitch, and comes in a 37.5 mm × 37.5 mm package. It is also Intel's first integrated x86 processor, chipset and memory controller since 1994's 80386EX. Intel EP80579 integrated processor for embedded computing: CPU: Pentium M clocked between 600 MHz and 1.2 GHz Cache: 256 KB Package: 1088-ball flip-chip BGA Memory: DDR2 from 400 to 800 MHz; MCH supports DIMM or memory down with optional 32-/64-bit and ECC configurations Bus: One local expansion bus for general control or expanded peripheral connections PCI Express: PCIe root complex interface in 1×8, 2×4, or 2×1 configurations Storage: 2× SATA (Gen1 or Gen2) interfaces Networking: 3× 10/100/1000 Ethernet MACs supporting reduced gigabit media-independent interface (RGMII) or reduced media-independent interface (RMII), and Management Data Input/Output (MDIO) USB: 2× Universal Serial Bus (1.1 or 2.0) interfaces GPIO: 36× General Purpose Input/Output ports CAN: 2× Controller Area Network (CAN bus) 2.0b interfaces High Speed Serial (HSS): 3× ports for T1/E1 or FXS/FXO connections Serial: 1× synchronous serial port (SSP) Universal asynchronous receiver-transmitters (UARTs): 2× 16550-compatible SMB: 2× System Management Bus (SMBus) interfaces LPC: 1× Low Pin Count (LPC 1.1) interface SPI: 1× Serial Peripheral Interface Bus (SPI) boot interface RTC: Integrated real-time clock (RTC) support EDMA: Enhanced DMA (EDMA) engine with low-latency memory transfers; supports multiple peer-to-peer configurations Operating temperature: 0 to 70 degrees C (most models); −40 to 85 degrees C (some models) List of Intel 80579 processors All models support: MMX, Streaming SIMD Extensions (SSE), SSE2, SSE3, XD bit (an NX bit implementation) Die size: Steppings: B1 See also Atom (system on chip) Intel Quark References External links Intel Introduces Future VPN Solution: Tolapai Ars Technica: Intel Confirms details of Tolapai, a SoC embedded processor Cnet News: Live from Hot Chips 19: Session 7, Networking Intel QuickAssist Acceleration Technology for Embedded Systems Embedded Intel EP80579 Integrated Processor Intel products Intel x86 microprocessors
Operating System (OS)
985
Dyne:bolic dyne:bolic GNU/Linux is a Live CD/DVD distribution based on the Linux kernel. It is shaped by the needs of media activists, artists and creators, aiming to be a practical tool for multimedia production that delivers a large assortment of applications. It allows manipulation and broadcast of both sound and video, with tools to record, edit, encode, and stream. In addition to multimedia-specific programs, dyne:bolic also provides word processors and common desktop computing tools. Termed "Rastasoft" by its author, it is based entirely on free software, and as such is recommended and endorsed by the GNU Project. dyne:bolic is created by volunteers, led by its author and maintainer Jaromil, who also included multimedia tools such as MusE, HasciiCam, and FreeJ in the distribution. Live CD/DVD dyne:bolic is intended to be used as a Live CD/DVD. It does not require installation to a hard drive, and attempts to recognize most devices and peripherals (sound, video, TV, etc.) automatically. It is designed to work with older and slower computers, with its kernel optimized for low latency and performance, making the distribution suitable for audio and video production and turning PCs into full media stations. For that reason, the included software is sometimes not the newest version available. Modules dyne:bolic can be extended by downloading extra modules such as development tools or common software like OpenOffice.org. These are SquashFS files placed in the directory of a dock (see below) or on a burnt CD, and are automatically integrated at boot. System requirements Basic system requirements for versions 1.x and 2.x were relatively low: a PC with a Pentium or AMD K5 (i586) class CPU, 64 MB of RAM and an IDE CD-ROM drive was sufficient. Some versions of dyne:bolic 1.x were ported by co-developer Smilzo to be used on the Xbox game console; multiple Xbox installations could be clustered. Console installation and clustering are currently not supported by version 2.x and up. Version 3.0, codenamed MUNIR, has higher system requirements than former releases and is the first that comes as a DVD image. A Pentium II or AMD K6-2 class processor, 256 MB of RAM and an IDE/SATA DVD-ROM drive are recommended. Again, a hard disk is not needed. It was released on September 8, 2011. Installation The user copies the dyne system directory from the CD/DVD to a suitably formatted partition or drive (this is called Docking). This file system will be recognised and booted by the CD or DVD. There is an option to install a GNU GRUB bootloader or edit an existing one. Booting from floppy disk is supported, too. User settings can be saved to disk (or a USB flash drive) in a writable image file containing a filesystem (described as Nesting), which can also be encrypted for better privacy. Release history dyne:bolic 3.x Version 3.0, currently dyne:bolic 3.0 Beta 4, uses Linux kernel 3.0.1 and is a DVD-ROM image of 1.65 GB. GNOME 2 is used as the desktop interface, Grub2 as the boot loader. dyne:bolic 2.x This version, the latest being 2.5.2, uses Linux kernel 2.6 and is a CD-ROM image. Xfce is used as the desktop interface. dyne:bolic 1.x This version used Linux kernel 2.4 on a CD-ROM image, and brought the ability to create "nests" and "docks" on a hard disk or USB key.
Features present in dyne:bolic 1.x but dropped later were: openMosix - clustering software; the ability to boot on the Xbox game console; WindowMaker - a fast X window manager with a small memory footprint; the CIA World Factbook - as a local copy. See also Comparison of Linux distributions GNU/Linux naming controversy List of Linux distributions based on Debian Musix GNU+Linux – another free distribution for multimedia enthusiasts References External links Free audio software Operating system distributions bootable from read-only media Linux media creation distributions Free software only Linux distributions 2005 software Linux distributions without systemd Linux distributions
Operating System (OS)
986
Select (Unix) select is a system call and application programming interface (API) in Unix-like and POSIX-compliant operating systems for examining the status of file descriptors of open input/output channels. The select system call is similar to the poll facility introduced in UNIX System V and later operating systems. However, in the face of the c10k problem, both select and poll have been superseded by the likes of kqueue, epoll, /dev/poll and I/O completion ports. One common use of select outside of its stated use of waiting on filehandles is to implement a portable sub-second sleep. This can be achieved by passing NULL for all three fd_set arguments, and the duration of the desired sleep as the timeout argument (a minimal sketch of this idiom appears at the end of this article). In the C programming language, the select system call is declared in the header file sys/select.h or unistd.h, and has the following syntax: int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *errorfds, struct timeval *timeout); fd_set type arguments may be manipulated with four utility macros: FD_SET(), FD_CLR(), FD_ZERO(), and FD_ISSET(). Select returns the total number of bits set in readfds, writefds and errorfds, or zero if the timeout expired, and -1 on error. The sets of file descriptors used in select are of finite size, depending on the operating system. The newer system call poll provides a more flexible solution. Example
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <sys/select.h>
#include <fcntl.h>
#include <unistd.h>
#include <err.h>
#include <errno.h>

#define PORT "9421"

/* function prototypes */
void die(const char*);

int main(int argc, char **argv)
{
    int sockfd, new, maxfd, on = 1, nready, i;
    struct addrinfo *res0, *res, hints;
    char buffer[BUFSIZ];
    fd_set master, readfds;
    int error;
    ssize_t nbytes;

    (void)memset(&hints, '\0', sizeof(struct addrinfo));
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_protocol = IPPROTO_TCP;
    hints.ai_flags = AI_PASSIVE;

    if (0 != (error = getaddrinfo(NULL, PORT, &hints, &res0)))
        errx(EXIT_FAILURE, "%s", gai_strerror(error));

    /* try each candidate address until socket(), setsockopt() and bind() all succeed */
    for (res = res0; res; res = res->ai_next) {
        if (-1 == (sockfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol))) {
            perror("socket()");
            continue;
        }
        if (-1 == (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, (char*)&on, sizeof(int)))) {
            perror("setsockopt()");
            continue;
        }
        if (-1 == (bind(sockfd, res->ai_addr, res->ai_addrlen))) {
            perror("bind()");
            continue;
        }
        break;
    }
    if (NULL == res) /* every candidate address failed */
        exit(EXIT_FAILURE);
    freeaddrinfo(res0);

    if (-1 == (listen(sockfd, 32)))
        die("listen()");
    /* F_SETFL (not F_SETFD) sets the file status flags, putting the socket into non-blocking mode */
    if (-1 == (fcntl(sockfd, F_SETFL, O_NONBLOCK)))
        die("fcntl()");

    FD_ZERO(&master);
    FD_ZERO(&readfds);
    FD_SET(sockfd, &master);
    maxfd = sockfd;

    while (1) {
        memcpy(&readfds, &master, sizeof(master));
        (void)printf("running select()\n");
        if (-1 == (nready = select(maxfd + 1, &readfds, NULL, NULL, NULL)))
            die("select()");
        (void)printf("Number of ready descriptors: %d\n", nready);
        for (i = 0; i <= maxfd && nready > 0; i++) {
            if (FD_ISSET(i, &readfds)) {
                nready--;
                if (i == sockfd) {
                    (void)printf("Trying to accept() new connection(s)\n");
                    if (-1 == (new = accept(sockfd, NULL, NULL))) {
                        if (EWOULDBLOCK != errno)
                            die("accept()");
                        break;
                    } else {
                        if (-1 == (fcntl(new, F_SETFL, O_NONBLOCK)))
                            die("fcntl()");
                        FD_SET(new, &master);
                        if (maxfd < new)
                            maxfd = new;
                    }
                } else {
                    (void)printf("recv() data from one of the descriptors\n");
                    /* leave room for the terminating '\0' added below */
                    nbytes = recv(i, buffer, sizeof(buffer) - 1, 0);
                    if (nbytes <= 0) {
                        if (EWOULDBLOCK != errno)
                            die("recv()");
                        break;
                    }
                    buffer[nbytes] = '\0';
                    printf("%s", buffer);
                    (void)printf("%zi bytes received.\n", nbytes);
                    close(i);
                    FD_CLR(i, &master);
                }
            }
        }
    }
    return 0;
}

void die(const char *msg)
{
    perror(msg);
    exit(EXIT_FAILURE);
}
See also Berkeley sockets Polling epoll kqueue Input/output completion port (IOCP) References External links C POSIX library Events (computing) System calls Articles with example C code
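As noted earlier, select can also serve as a portable sub-second sleep. The following is a minimal sketch of that idiom (our own illustration, not part of the article's example):
#include <stdio.h>
#include <sys/select.h>

int main(void)
{
    struct timeval timeout;
    timeout.tv_sec = 0;       /* whole seconds */
    timeout.tv_usec = 250000; /* 250,000 microseconds = 250 ms */
    /* With no descriptor sets to watch, select() simply blocks
       until the timeout expires. */
    if (-1 == select(0, NULL, NULL, NULL, &timeout))
        perror("select()");
    (void)printf("slept roughly 250 ms\n");
    return 0;
}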
Operating System (OS)
987
IBM Office/36 Office/36 was a suite of applications marketed by IBM from 1983 to 2000 for the IBM System/36 family of midrange computers. IBM announced its System/36 Office Automation (OA) strategy in 1985. Office/36 could be purchased in its entirety or piecemeal. Components of Office/36 included: IDDU/36, the Interactive Data Definition Utility. Query/36, the Query utility. DisplayWrite/36, a word processing program. Personal Services/36, a calendaring system and an office messaging utility. Query/36 was not quite the same as SQL, but it had some similarities, especially the ability to very rapidly create a displayed recordset from a disk file. Note that SQL, also an IBM development, had not been standardized prior to 1986. DisplayWrite/36, in the same category as Microsoft Word, had online dictionaries, definition capabilities and spell-checking; unlike the standard S/36 products, it would straighten spillover text and scroll in real time. Considerable changes were required to the S/36 design to support Office/36 functionality, not the least of which was the capability to manage new container objects called "folders" and produce multiple extents to them on demand. Q/36 and DW/36 typically exceeded the 64K program limit of the S/36, both in editing and printing, so using Office products could heavily impact other applications. DW/36 allowed use of bold, underline, and other display formatting characteristics in real time. References Business software Office 36 Email systems Discontinued software
Operating System (OS)
988
ISO/IEC 15504 ISO/IEC 15504 Information technology – Process assessment, also termed Software Process Improvement and Capability Determination (SPICE), is a set of technical standards documents for the computer software development process and related business management functions. It is one of the joint International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) standards, developed by the ISO and IEC joint subcommittee ISO/IEC JTC 1/SC 7. ISO/IEC 15504 was initially derived from the process lifecycle standard ISO/IEC 12207 and from maturity models like Bootstrap, Trillium and the Capability Maturity Model (CMM). ISO/IEC 15504 has been superseded by ISO/IEC 33001:2015 Information technology – Process assessment – Concepts and terminology as of March 2015. Overview ISO/IEC 15504 is the reference model for the maturity models (consisting of capability levels, which in turn consist of process attributes and, further, generic practices) against which assessors can place the evidence they collect during their assessment, so that they can give an overall determination of the organization's capabilities for delivering products (software, systems, and IT services). History A working group was formed in 1993 to draft the international standard and used the acronym SPICE. SPICE initially stood for Software Process Improvement and Capability Evaluation, but in consideration of French concerns over the meaning of evaluation, SPICE was renamed Software Process Improvement and Capability Determination. SPICE is still used as the name of the standard's user group and as the title of the annual conference. The first SPICE conference was held in Limerick, Ireland, in 2000; SPICE 2003 was hosted by ESA in the Netherlands, SPICE 2004 was hosted in Portugal, SPICE 2005 in Austria, SPICE 2006 in Luxembourg, SPICE 2007 in South Korea, SPICE 2008 in Nuremberg, Germany and SPICE 2009 in Helsinki, Finland. The first versions of the standard focused exclusively on software development processes. This was expanded to cover all related processes in a software business, for example project management, configuration management, quality assurance, and so on. The list of processes covered grew to cover six areas: organizational, management, engineering, acquisition supply, support, and operations. In a major revision to the draft standard in 2004, the process reference model was removed and is now related to the ISO/IEC 12207 (Software Lifecycle Processes). The issued standard now specifies the measurement framework and can use different process reference models. There are five general and industry models in use. Part 5 specifies software process assessment and part 6 specifies system process assessment. The latest work in the ISO standards working group includes creation of a maturity model, which is planned to become ISO/IEC 15504 part 7. The standard The Technical Report (TR) document for ISO/IEC TR 15504 was divided into 9 parts. The initial International Standard was recreated in 5 parts; this was proposed by Japan when the TRs were published in 1997. The International Standard (IS) version of ISO/IEC 15504 now comprises 6 parts. The 7th part is currently in an advanced Final Draft Standard form and work has started on part 8. Part 1 of ISO/IEC TR 15504 explains the concepts and gives an overview of the framework. Reference model ISO/IEC 15504 contains a reference model.
The reference model defines a process dimension and a capability dimension. The process dimension in the reference model is not the subject of part 2 of ISO/IEC 15504, but part 2 refers to external process lifecycle standards including ISO/IEC 12207 and ISO/IEC 15288. The standard defines means to verify conformity of reference models. Processes The process dimension defines processes divided into the five process categories of: customer-supplier engineering supporting management organization With new parts being published, the process categories will expand, particularly for IT service process categories and enterprise process categories. Capability levels and process attributes For each process, ISO/IEC 15504 defines a capability level on the following scale: 0 Incomplete, 1 Performed, 2 Managed, 3 Established, 4 Predictable, and 5 Optimizing. The capability of processes is measured using process attributes. The international standard defines nine process attributes: 1.1 Process performance 2.1 Performance management 2.2 Work product management 3.1 Process definition 3.2 Process deployment 4.1 Process measurement 4.2 Process control 5.1 Process innovation 5.2 Process optimization Each process attribute consists of one or more generic practices, which are further elaborated into practice indicators to aid assessment performance. Rating scale of process attributes Each process attribute is assessed on a four-point (N-P-L-F) rating scale: Not achieved (0–15%) Partially achieved (>15–50%) Largely achieved (>50–85%) Fully achieved (>85–100%). (An illustrative mapping of these bands to code appears below, after the description of the assessment process.) The rating is based upon evidence collected against the practice indicators, which demonstrate fulfillment of the process attribute. Assessments ISO/IEC 15504 provides a guide for performing an assessment. This includes: the assessment process the model for the assessment any tools used in the assessment Assessment process Performing assessments is the subject of parts 2 and 3 of ISO/IEC 15504. Part 2 is the normative part and part 3 gives guidance on fulfilling the requirements in part 2. One of the requirements is to use a conformant assessment method for the assessment process. The actual method is not specified in the standard, although the standard places requirements on the method, method developers and assessors using the method. The standard provides general guidance to assessors, and this must be supplemented by undergoing formal training and detailed guidance during initial assessments. The assessment process can be generalized as the following steps: initiate an assessment (assessment sponsor) select assessor and assessment team plan the assessment, including processes and organizational unit to be assessed (lead assessor and assessment team) pre-assessment briefing data collection data validation process rating reporting the assessment result An assessor can collect data on a process by various means, including interviews with persons performing the process, collecting documents and quality records, and collecting statistical process data. The assessor validates this data to ensure it is accurate and completely covers the assessment scope. The assessor assesses this data (using expert judgment) against a process's base practices and the capability dimension's generic practices in the process rating step. Process rating requires some exercise of expert judgment on the part of the assessor, which is why there are requirements on assessor qualifications and competency.
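As promised above, here is a small illustrative mapping of the N-P-L-F percentage bands. The sketch is ours; the function and its cutoff encoding are not defined by the standard:
#include <stdio.h>

/* Map an achievement percentage (0-100) to the ISO/IEC 15504
   N-P-L-F rating, following the bands quoted above. Illustrative only. */
static const char *rating(double percent)
{
    if (percent <= 15.0) return "N (Not achieved)";
    if (percent <= 50.0) return "P (Partially achieved)";
    if (percent <= 85.0) return "L (Largely achieved)";
    return "F (Fully achieved)";
}

int main(void)
{
    const double samples[] = { 10.0, 40.0, 70.0, 95.0 };
    for (int i = 0; i < 4; i++)
        printf("%5.1f%% -> %s\n", samples[i], rating(samples[i]));
    return 0;
}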
The process rating is then presented as a preliminary finding to the sponsor (and preferably also to the persons assessed) to ensure that they agree that the assessment is accurate. In a few cases, there may be feedback requiring further assessment before a final process rating is made. Assessment model The process assessment model (PAM) is the detailed model used for an actual assessment. This is an elaboration of the process reference model (PRM) provided by the process lifecycle standards. The process assessment model (PAM) in part 5 is based on the process reference model (PRM) for software: ISO/IEC 12207. The process assessment model in part 6 is based on the process reference model for systems: ISO/IEC 15288. The standard allows other models to be used instead, if they meet ISO/IEC 15504's criteria, which include a defined community of interest and meeting the requirements for content (i.e. process purpose, process outcomes and assessment indicators). Tools used in the assessment There exist several assessment tools; the simplest are paper-based. In general, they are laid out to incorporate the assessment model indicators, including the base practice indicators and generic practice indicators. Assessors write down the assessment results and notes supporting the assessment judgment. There are a limited number of computer-based tools that present the indicators and allow users to enter the assessment judgment and notes in formatted screens, as well as automating the collation of assessment results (i.e. the process attribute ratings) and the creation of reports. Assessor qualifications and competency For a successful assessment, the assessor must have a suitable level of the relevant skills and experience. These skills include: personal qualities such as communication skills. relevant education, training and experience. specific skills for particular categories, e.g. management skills for the management category. ISO/IEC 15504-related training and experience in process capability assessments. The competency of assessors is the subject of part 3 of ISO/IEC 15504. In summary, the ISO/IEC 15504-specific training and experience for assessors comprise: completion of a 5-day lead assessor training course performing at least one assessment successfully under supervision of a competent lead assessor performing at least one assessment successfully as a lead assessor under the supervision of a competent lead assessor. The competent lead assessor defines when the assessment is successfully performed. There exist schemes for certifying assessors and guiding lead assessors in making this judgement. Uses ISO/IEC 15504 can be used in two contexts: Process improvement, and Capability determination (= evaluation of a supplier's process capability). Process improvement ISO/IEC 15504 can be used to perform process improvement within a technology organization. Process improvement is always difficult, and initiatives often fail, so it is important to understand the initial baseline level (process capability level) and to assess the situation after an improvement project. ISO/IEC 15504 provides a standard for assessing the organization's capacity to deliver at each of these stages. In particular, the reference framework of ISO/IEC 15504 provides a structure for defining objectives, which facilitates specific programs to achieve these objectives. Process improvement is the subject of part 4 of ISO/IEC 15504.
It specifies requirements for improvement programmes and provides guidance on planning and executing improvements, including a description of an eight-step improvement programme. Following this improvement programme is not mandatory, and several alternative improvement programmes exist. Capability determination An organization considering outsourcing software development needs to have a good understanding of the capability of potential suppliers to deliver. ISO/IEC 15504 (Part 4) can also be used to inform supplier selection decisions. ISO/IEC 15504 provides a framework for assessing proposed suppliers, as assessed either by the organization itself or by an independent assessor. The organization can determine a target capability for suppliers, based on the organization's needs, and then assess suppliers against a set of target process profiles that specify this target capability. Part 4 of ISO/IEC 15504 specifies the high-level requirements, and an initiative has been started to create an extended part of the standard covering target process profiles. Target process profiles are particularly important in contexts where the organization (for example, a government department) is required to accept the cheapest qualifying vendor. This also enables suppliers to identify gaps between their current capability and the level required by a potential customer, and to undertake improvement to achieve the contract requirements (i.e. become qualified). Work on extending the value of capability determination includes a method called Practical Process Profiles, which uses risk as the determining factor in setting target process profiles. Combining risk and processes promotes improvement with active risk reduction, hence reducing the likelihood of problems occurring. Acceptance of ISO/IEC 15504 ISO/IEC 15504 has been successful in several respects: ISO/IEC 15504 is available through National Standards Bodies. It has the support of the international community. Over 4,000 assessments have been performed to date. Major sectors such as automotive, space and medical systems are leading the pace with industry-relevant variants. Domain-specific models like Automotive SPICE and SPICE 4 SPACE can be derived from it. There have been many international initiatives to support take-up, such as SPICE for small and very small entities. On the other hand, ISO/IEC 15504 may not be as popular as CMMI for the following reasons: ISO/IEC 15504 is not available as a free download, but must be purchased from the ISO. (Automotive SPICE, on the other hand, can be freely downloaded from the link supplied below.) CMM, and later CMMI, were originally available as free downloads from the SEI website. However, beginning with CMMI v2.0, a license must now be purchased from the SEI. The CMM, and later CMMI, were originally sponsored by the US Department of Defense (DoD). Now, however, the DoD no longer funds CMMI or mandates its use. The CMM was created first, and reached critical 'market' share before ISO 15504 became available. The CMM has subsequently been replaced by the CMMI, which incorporates many of the ideas of ISO/IEC 15504, but also retains the benefits of the CMM. Like the CMM, ISO/IEC 15504 was created in a development context, making it difficult to apply in a service management context. But work has started to develop an ISO/IEC 20000-based process reference model (ISO/IEC 20000-4) that can serve as a basis for a process assessment model. This is planned to become part 8 of the standard (ISO/IEC 15504-8).
In addition, there are methods available that adapt its use to various contexts. See also ISO/IEC JTC 1/SC 7 Further reading Cass, A. et al. “SPiCE in Action - Experiences in Tailoring and Extension.” Proceedings. 28th Euromicro Conference. IEEE Comput. Soc, 2003. Print. Eito-Brun, Ricardo. “Comparing SPiCE for Space (S4S) and CMMI-DEV: Identifying Sources of Risk from Improvement Models.” Communications in Computer and Information Science. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. 84–94. Print. International Conference on Software Process Improvement and Capability Determination (2011-2018) Mesquida, Antoni Lluís, Antònia Mas, and Esperança Amengual. “An ISO/IEC 15504 Security Extension.” Communications in Computer and Information Science. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. 64–72. Print. Schlager, Christian et al. “Hardware SPICE Extension for Automotive SPICE 3.1.” Communications in Computer and Information Science. Cham: Springer International Publishing, 2018. 480–491. Print. External links ISO/IEC 33001:2015 - Information technology — Process assessment — Concepts and terminology VDA QMC Homepage for Automotive SPICE References Software engineering standards Software development process 15504
Operating System (OS)
989
System Manager (HP LX) The HP LX System Manager is the application manager and GUI for HP LX-series palmtop computers. Overview The App Manager page is made up of 2 rows of 8 icons, with an additional shorter row on the next page down by default. (More applications can be added as the user wishes.) The menu bar options can be opened (on an HP 200LX) using the Menu key or the Alt key. These include task management, booting out of the GUI into DOS, and opening help for the palmtop. Flaws One of the major flaws in the System Manager is its limited icon space in the Application Manager: only 32 icons can be placed there. Some default icons can be deleted to free space, but others are undeletable. Another item of interest, which some people have referred to as a flaw, is that the built-in HEXCALC application is missing from the System Manager by default. To add the program to the list, it is necessary to manually add an entry with the following fields: Name: He&x Calc Path: D:\BIN\HEXCALC.EXM. See also HP 200LX References Microcomputers History of software HP palmtops
Operating System (OS)
990
Amiga Fast File System The Amiga Fast File System (abbreviated AFFS, or historically more commonly as FFS) is a file system used on the Amiga personal computer. The previous Amiga filesystem was never given a specific name and was originally known simply as "DOS" or AmigaDOS. Upon the release of FFS, the original filesystem became known as the Amiga Old File System (OFS). OFS, which was primarily designed for use with floppy disks, had been proving too slow to keep up with the hard drives of the era. FFS was designed as a full replacement for the original Amiga filesystem. FFS differs from its predecessor mainly in the removal of redundant information. Data blocks contain nothing but data, allowing the filesystem to manage the transfer of large chunks of data directly from the host adapter to the final destination. Characteristics OFS was the predecessor to FFS. Before FFS was released, AmigaOS had a single filesystem simply called AmigaDOS: this used 24 bytes per sector for redundancy data, providing for the reconstruction of structural data on less reliable media such as floppy disks. When higher-speed media (i.e. hard disks) became more available to the Amiga, this redundant data posed a bottleneck, as all data needed to be realigned before being passed to the application. The redundancy was removed with FFS, and data read in from the media is passed to the application directly. The previous filesystem, AmigaDOS, was renamed OFS, Old File System, to differentiate between it and FFS. FFS was backward-compatible and could access devices formatted with OFS. Given these advantages, FFS was rapidly adopted as the most common filesystem used by almost all Amiga users, although OFS continued to be widely used on floppy disks from third-party software vendors. (This was purely for compatibility with pre-AmigaOS 2 systems in games and applications that did not actually require AmigaOS 2+, as machines running earlier versions of the OS without FFS in the ROM could not boot from these floppies, although they could still read them if they had FFS installed.) Amiga FFS is simple and efficient, and when introduced was more than adequate, with many advantages compared to the file systems of other platforms. However, as OFS had done before it, it aged; as drives became larger and the number of files on them increased, its use as a day-to-day filesystem became more problematic: maintenance grew difficult and its general performance became uncompetitive. Despite this, it is still used on AmigaOS systems and is shipped with both MorphOS and AmigaOS 4. By the last Commodore release of AmigaOS, 3.1, FFS was still the only filesystem shipped as standard with the Amiga, but it was already showing its age as technology advanced. FFS (and OFS) stores a "bitmap" of the filesystem in a single sector. On write, this is first marked as invalid, then the write is completed, then the bitmap is updated and marked as valid. If a write operation is interrupted by a crash or disk removal, this allows the 'disk-validator' program to undo the damage. This resembled a very simple form of filesystem journaling. To allow the disk to be used again with an invalidated OFS or FFS filesystem, the entire disk has to be completely scanned and the bitmap rebuilt, but only the data being modified during the write would be lost. During this scanning the disk cannot be written to (except by the disk-validator as it performs its function), and read access is very slow.
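The invalidate-write-revalidate ordering described above can be sketched in C. This is purely an illustration of the sequence, with invented names; it is not the actual FFS on-disk format:
#include <stdbool.h>

/* Illustrative sketch of the OFS/FFS bitmap validity protocol. */
struct volume {
    bool bitmap_valid; /* stands in for the on-disk "bitmap valid" flag */
};

static void write_with_validity(struct volume *v)
{
    v->bitmap_valid = false; /* 1. mark the bitmap invalid on disk */
    /* 2. write the data blocks and update the bitmap here; a crash
          in this window leaves the volume flagged invalid, forcing
          the disk-validator to rescan and rebuild the bitmap */
    v->bitmap_valid = true;  /* 3. mark the bitmap valid again */
}

int main(void)
{
    struct volume v = { true };
    write_with_validity(&v);
    return 0;
}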
AmigaOS originally included a disk-validator on every bootable disk, which was prone to being replaced by viruses to allow themselves to spread (for example the "Saddam Hussein" virus). Later it became part of the ROM from Kickstart 2.x onwards, protecting it from malicious replacement. The disk-validator attempted to repair the bitmap on an invalidated drive by write-protecting the drive and scanning it; this could take a long time and made it very slow to access the disk until it was finished, especially on slower media. As hard drives got larger and contained more files, the validation process could take many hours. In addition, files and directories could feasibly be lost (often without the user being notified or even aware) during the process if their data hashes were corrupted. In some cases the validator could fail and leave the disk in a non-validated state, requiring the user to use a third-party disk tool like DiskSalv to make the volume writable again, or simply to save the files by copying them to a fresh partition, a very slow process. FFS was also originally limited to 32-bit addressing and therefore to about 4 GB drives, or at least the first 4 GB of a larger drive. Attempting to use FFS partitions beyond this limit caused serious data corruption all through the drive. FFS belatedly got some third-party 64-bit patches and then official (but non-Commodore) updates to allow it to circumvent these limitations. The latter were supplied with AmigaOS 3.5 and 3.9, from Haage & Partner. The former were often supplied with third-party disk controllers, such as those from Phase5, where the ability to use large-capacity disks was a selling point. The two systems were not mutually compatible. In terms of support tools, although Commodore itself shipped only an application called DiskDoctor (and later removed it from AmigaOS disks), FFS had a small selection of third-party tools, most notably DiskSalv, to maintain the file system and repair and validate it, undelete files, or reverse "quick formats" (filesystem initializations). An OFS or FFS volume had to be locked during defragmentation or conversion to a different FFS mode to prevent corruption, and this made it inaccessible to everything but the tool performing the operation. Most of these tools were not updated when FFS became capable of 64-bit addressing and could only operate on partitions smaller than 4 GB; they could not read partitions bigger than 4 GB, and would generally corrupt partitions "beyond" the 4 GB boundary. When the hard drives in use by Amiga users were reaching 4 GB in size, this became a problem. For all of these reasons, FFS was often replaced by users in the mid-1990s with more up-to-date alternatives such as the Smart File System (SFS) and the Professional File System (PFS), which did not have these limitations and were considered safer, faster and more efficient. SFS in particular continued to be developed and is now as close to a generic AmigaOS filesystem as FFS. History FFS was introduced with version 1.3 of AmigaOS in 1988, which replaced both the Kickstart ROM (or Kickstart floppy for A1000s) and the Workbench floppy with updated software. It carried the version number v34, like the rest of the AmigaOS 1.3 components. Kickstart 1.3 provided autobooting support so that the machine could now be booted from hard disk or a reset-proof RAM disk ("RAD:"), whereas earlier Kickstart releases could only be booted from floppy disk.
Workbench 1.3 provided the FFS filesystem device driver on disk, which could be copied into the Rigid Disk Block (RDB) on hard disks. Compliant block devices would then load and install the filesystem driver before filesystems were mounted, thus making it possible to use loadable filesystems on hard disks. Kickstart 1.2 could boot Workbench 1.3 from floppy (and vice versa), but it needed both Kickstart and Workbench 1.3 to autoboot FFS-formatted hard disks. FFS support was merged into the ROM-based filesystem from Kickstart 2.0 onwards, and so it was no longer necessary to install FFS in the RDB. The ability to load filesystems from the RDB still remained available in case one wished to fix ROM bugs, get new FFS features, or use a third-party filesystem. Floppies are unpartitioned devices without an RDB and also do not use the autobooting mechanism, so they were only bootable if the disk's dostype was one the ROM-based filesystem understood. As a result, FFS-formatted floppies were not bootable until the release of Kickstart 2.0, and mounting them under Workbench 1.3 involved some ugly unsupported hacks. Similarly, "Directory Cache" variants were not bootable or supported until Kickstart 3.0. The various FFS flavours did not have any compatibility problems with Amiga software, even programs that were considered "system-unfriendly". Software would either use the system calls and thus work with any filesystem, or be "trackloaders" and not use a filesystem at all. FFS operated in several modes, defined by "dostypes". AmigaOS filesystems are identified by a four-character descriptor which is specified either in the RDB or in a mountlist or dosdriver; alternatively (as was the case with trackdisk-like devices such as floppy disks), the disk itself could be formatted in any dostype specified. FFS dostypes were as follows (an illustrative decoder for these values appears at the end of this article): DOS\0: The original Amiga filesystem (OFS). This was left in for compatibility purposes, and the majority of floppy disks shipped by software companies or as magazine coverdisks used this dostype so that they would boot on pre-2.x machines like the Amiga 500. It also meant that users with existing OFS-formatted drives could read them once they had installed FFS in the RDB. DOS\1: The new filesystem, FFS. The first, disk-based releases of FFS did not have any additional modes. AmigaOS 2.04 made FFS (now v37) part of the Kickstart ROM and introduced new modes for handling international characters in filenames, and for an on-disk directory cache. Each new mode was available with both OFS and FFS dostypes. This odd system was for parity: OFS modes apart from DOS\0 were almost never used but were available nonetheless. (Although OFS, they were still not compatible with Amiga systems without FFS.) The four new dostypes introduced with v37 of FFS were: DOS\2: "International" (OFS-INTL) mode, which allows OFS to handle filenames with "international characters", i.e. those not found in English (Latin character set), such as ä and ê. DOS\3: International mode for FFS (FFS-INTL). This was the most commonly used FFS mode. (All higher dostypes have international mode always enabled.) DOS\4: "Directory Cache" (OFS-DC) mode, which enabled a primitive cache by creating dedicated directory lists instead of having to pick up the linked directory/file entries that lie scattered over the disk. A certain (small) amount of disk space is allocated to store the directory data. The DirCache option improved directory reading speed drastically, but creating, deleting and renaming files became slower.
It did not increase the speed of reading individual files. It became a popular choice on Amiga hard drives, but according to Olaf Barthel, author of FFS2, the dircache modes were probably better suited to floppy disks than to hard drives, where they would cause an overall degradation in performance compared to the lack of a dircache. Despite this it was rarely used on floppies, particularly because the cache ate precious space, and because the limited space prevented a large number of files from being cached in the first place. The dircache mode lacks a "garbage collection" mechanism, which means that partly filled cache blocks are never consolidated and will keep taking up space. DOS\5: Directory caching with FFS (FFS-DC). Neither dircache mode was backwards compatible with earlier versions of FFS. Version 40.1 was the last version of FFS released by Commodore, and came with AmigaOS 3.1, both on the OS disks and in the ROM. After this, several unofficial patches appeared which allowed its use beyond the first 2 GB of a hard disk using a 64-bit addressing system called TrackDisk64 or TD64 (although the 2 GB limit on file size and the 127 GB limit on partition sizes remained, as these were limitations of the AmigaOS dos.library and all then-current Amiga software); these patches carried the version number v44. The version of FFS that came with AmigaOS 3.5 and 3.9 was v45 and differed in that it used a different 64-bit addressing system, New Style Device or NSD. More recently (from 2003), MorphOS and AmigaOS 4 have introduced support for a slightly updated "FFS2" by Olaf Barthel (FFS v46 and v50 respectively). This is compatible with the older FFS. It is PowerPC-native, and introduced two more dostypes (which cannot be read by the older FFS): DOS\6: "Long Filename" (OFS-LNFS). This allowed files to have a longer filename (up to 107 characters) than the usual Amiga limit of 31 characters. DOS\7: Long filenames for FFS (FFS-LNFS). There were no directory-caching modes available for the LNFS dostypes, and International Mode was always enabled. Despite the ability to use long filenames, by this time FFS compared very poorly to the other filesystems available on its host platforms. Apart from these extra dostypes, there is little or no functional difference between FFS and FFS2 (although some older unspecified bugs may have been dealt with), and it should still not be used except for legacy purposes. Disk validation is still necessary in FFS2 (and may still result in data loss), just as it was on FFS, despite early beliefs to the contrary. In September 2018, Hyperion Entertainment released Amiga OS 3.1.4 from the Amiga OS 3.1 source. It included an updated FastFileSystem V46 in the Kickstart ROM. The V46 FFS natively supported the APIs for TD_64, NSD, and/or the classic 32-bit TD_ storage calls. This let Amiga OS v3.x use and boot from large media (>4 GB) natively, and support >2 GB partition sizes. In July 2019, an additional file-based update to FFS was included in the 3.1.4.1 update. In May 2021, the updated Amiga OS 3.2 was released and provided a matching ROM-based V47 FFS update which gained a few minor features and fixes. Other implementations There are few other implementations able to read FFS filesystems; doing so would normally require an Amiga emulator and a copy of the operating system ROMs.
Most notably, support for affs (the Amiga Fast File System) can be compiled into Linux kernels, offering full read, write and format support for FFS and OFS partitions of all dostypes except DOS\6 and DOS\7 (which are probably incredibly rare). On the Amiga, the freeware application xfs could, among many filesystems, read and write devices formatted in OFS or FFS, and was probably the sole Amiga filesystem handler apart from FFS/FFS2 itself able to do so. It did not support DOS\6 or DOS\7, which it predates, or the formatting of devices. See also List of file systems OFS Professional File System Smart File System References External links The ADFlib Page and precisely ADF File specs The ADF specs, in LHA format from Aminet Disk file systems Amiga AmigaOS MorphOS File systems supported by the Linux kernel
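The dostype numbering described in this article follows a simple pattern: even values are OFS variants and odd values FFS variants. A small decoder (our own sketch, not part of any Amiga API) makes the table explicit:
#include <stdio.h>

/* Decode the low byte of an Amiga 'DOS\n' dostype (0-7) into the
   flavour names used in this article. Purely illustrative. */
static const char *flavour[] = {
    "OFS", "FFS",           /* DOS\0, DOS\1 */
    "OFS-INTL", "FFS-INTL", /* DOS\2, DOS\3 */
    "OFS-DC", "FFS-DC",     /* DOS\4, DOS\5 */
    "OFS-LNFS", "FFS-LNFS"  /* DOS\6, DOS\7 (FFS2 only) */
};

int main(void)
{
    for (unsigned n = 0; n < 8; n++)
        printf("DOS\\%u: %s (%s)\n", n, flavour[n],
               (n & 1) ? "Fast File System" : "Old File System");
    return 0;
}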
Operating System (OS)
991
X86 assembly language x86 assembly language is a family of backward-compatible assembly languages, which provide some level of compatibility all the way back to the Intel 8008, introduced in April 1972. x86 assembly languages are used to produce object code for the x86 class of processors. Like all assembly languages, it uses short mnemonics to represent the fundamental instructions that the CPU in a computer can understand and follow. Compilers sometimes produce assembly code as an intermediate step when translating a high-level program into machine code. Regarded as a programming language, assembly coding is machine-specific and low-level. Assembly languages are more typically used for detailed and time-critical applications such as small real-time embedded systems or operating system kernels and device drivers. Mnemonics and opcodes Each x86 assembly instruction is represented by a mnemonic which, often combined with one or more operands, translates to one or more bytes called an opcode; the NOP instruction translates to 0x90, for instance, and the HLT instruction translates to 0xF4. There are potential opcodes with no documented mnemonic which different processors may interpret differently, making a program that uses them behave inconsistently or even generate an exception on some processors. These opcodes often turn up in code-writing competitions as a way to make the code smaller, faster, more elegant, or simply to show off the author's prowess. Syntax x86 assembly language has two main syntax branches: Intel syntax and AT&T syntax. Intel syntax is dominant in the DOS and Windows world, and AT&T syntax is dominant in the Unix world, since Unix was created at AT&T Bell Labs. The main differences between the two: in Intel syntax the destination operand is written first, while in AT&T syntax the source comes first; AT&T syntax prefixes register names with % and immediate values with $, and encodes the operand size as a suffix on the mnemonic (b, w, l, q), whereas Intel syntax uses unadorned names and size keywords such as byte ptr; and memory operands are written [base + index*scale + displacement] in Intel syntax versus displacement(base, index, scale) in AT&T syntax. Many x86 assemblers use Intel syntax, including NASM, FASM, MASM, TASM, and YASM. GAS, which originally used AT&T syntax, has supported both syntaxes since version 2.10 via the .intel_syntax directive. A quirk in the AT&T syntax for x86 is that x87 operands are reversed, an inherited bug from the original AT&T assembler. The AT&T syntax is nearly universal to all other architectures, with the same operand order; it was originally a syntax for PDP-11 assembly. The Intel syntax is specific to the x86 architecture, and is the one used in the x86 platform's documentation. Registers x86 processors have a collection of registers available to be used as stores for binary data. Collectively the data and address registers are called the general registers. Each register has a special purpose in addition to what they can all do: AX multiply/divide, string load & store BX base register for memory access (e.g. with MOV) CX count for string operations & shifts DX port address for IN and OUT SP points to top of the stack BP points to base of the stack frame SI points to a source in stream operations DI points to a destination in stream operations Along with the general registers there are additionally: the IP instruction pointer; FLAGS; the segment registers (CS, DS, ES, FS, GS, SS), which determine where a 64k segment starts (no FS & GS in the 80286 & earlier); and extra extension registers (MMX, 3DNow!, SSE, etc.) (Pentium & later only). The IP register points to the memory offset of the next instruction in the code segment (it points to the first byte of the instruction). The IP register cannot be accessed by the programmer directly. The x86 registers can be manipulated with the MOV instruction.
For example, in Intel syntax:
mov ax, 1234h ; copies the value 1234hex (4660d) into register AX
mov bx, ax    ; copies the value of the AX register into the BX register
Segmented addressing The x86 architecture in real and virtual 8086 mode uses a process known as segmentation to address memory, not the flat memory model used in many other environments. Segmentation involves composing a memory address from two parts, a segment and an offset; the segment points to the beginning of a 64 KB (64×2^10 bytes) group of addresses, and the offset determines how far from this beginning address the desired address is. In segmented addressing, two registers are required for a complete memory address: one to hold the segment, the other to hold the offset. In order to translate back into a flat address, the segment value is shifted four bits left (equivalent to multiplication by 2^4 or 16) and then added to the offset to form the full address, which allows breaking the 64k barrier through clever choice of addresses, though it makes programming considerably more complex. In real mode, for example, if DS contains the hexadecimal number 0xDEAD and DX contains the number 0xCAFE, together they point to the memory address 0xDEAD * 0x10 + 0xCAFE = 0xEB5CE (a short C sketch of this calculation appears after the list of execution modes below). Therefore, the CPU can address up to 1,048,576 bytes (1 MB) in real mode. By combining segment and offset values we obtain a 20-bit address. The original IBM PC restricted programs to 640 KB, but an expanded memory specification was used to implement a bank-switching scheme that fell out of use when later operating systems, such as Windows, used the larger address ranges of newer processors and implemented their own virtual memory schemes. Protected mode, starting with the Intel 80286, was utilized by OS/2. Several shortcomings, such as the inability to access the BIOS and the inability to switch back to real mode without resetting the processor, prevented widespread usage. The 80286 was also still limited to addressing memory in 16-bit segments, meaning only 2^16 bytes (64 kilobytes) could be accessed at a time. To access the extended functionality of the 80286, the operating system would set the processor into protected mode, enabling 24-bit addressing and thus 2^24 bytes of memory (16 megabytes). In protected mode, the segment selector can be broken down into three parts: a 13-bit index, a Table Indicator bit that determines whether the entry is in the GDT or LDT, and a 2-bit Requested Privilege Level; see x86 memory segmentation. When referring to an address with a segment and an offset, the notation segment:offset is used, so in the above example the flat address 0xEB5CE can be written as 0xDEAD:0xCAFE or as a segment and offset register pair: DS:DX. There are some special combinations of segment registers and general registers that point to important addresses: CS:IP (CS is Code Segment, IP is Instruction Pointer) points to the address where the processor will fetch the next byte of code. SS:SP (SS is Stack Segment, SP is Stack Pointer) points to the address of the top of the stack, i.e. the most recently pushed byte. DS:SI (DS is Data Segment, SI is Source Index) is often used to point to string data that is about to be copied to ES:DI. ES:DI (ES is Extra Segment, DI is Destination Index) is typically used to point to the destination for a string copy, as mentioned above. The Intel 80386 featured three operating modes: real mode, protected mode and virtual mode.
The protected mode which debuted in the 80286 was extended to allow the 80386 to address up to 4 GB of memory, and the all-new virtual 8086 mode (VM86) made it possible to run one or more real-mode programs in a protected environment that largely emulated real mode, though some programs were not compatible (typically as a result of memory addressing tricks or use of unspecified op-codes). The 32-bit flat memory model of the 80386's extended protected mode may have been the most important feature change for the x86 processor family until AMD released x86-64 in 2003, as it helped drive large-scale adoption of Windows 3.1 (which relied on protected mode), since Windows could now run many applications at once, including DOS applications, by using virtual memory and simple multitasking. Execution modes The x86 processors support five modes of operation for x86 code: Real Mode, Protected Mode, Long Mode, Virtual 86 Mode, and System Management Mode, in which some instructions are available and others are not. A 16-bit subset of instructions is available on the 16-bit x86 processors, which are the 8086, 8088, 80186, 80188, and 80286. These instructions are available in real mode on all x86 processors, and in 16-bit protected mode (80286 onwards) additional instructions relating to protected mode are available. On the 80386 and later, 32-bit instructions (including later extensions) are also available in all modes, including real mode; on these CPUs, V86 mode and 32-bit protected mode are added, with additional instructions provided in these modes to manage their features. SMM, with some of its own special instructions, is available on some Intel i386SL, i486 and later CPUs. Finally, in long mode (AMD Opteron onwards), 64-bit instructions, and more registers, are also available. The instruction set is similar in each mode, but memory addressing and word size vary, requiring different programming strategies. The modes in which x86 code can be executed are: Real mode (16-bit) 20-bit segmented memory address space (meaning that only 1 MB of memory can be addressed; actually, slightly more), direct software access to peripheral hardware, and no concept of memory protection or multitasking at the hardware level. Computers that use BIOS start up in this mode. Protected mode (16-bit and 32-bit) Expands addressable physical memory to 16 MB and addressable virtual memory to 1 GB. Provides privilege levels and protected memory, which prevents programs from corrupting one another. 16-bit protected mode (used during the end of the DOS era) used a complex, multi-segmented memory model. 32-bit protected mode uses a simple, flat memory model. Long mode (64-bit) Mostly an extension of the 32-bit (protected mode) instruction set, but unlike the 16-to-32-bit transition, many instructions were dropped in the 64-bit mode. Pioneered by AMD. Virtual 8086 mode (16-bit) A special hybrid operating mode that allows real-mode programs and operating systems to run while under the control of a protected-mode supervisor operating system. System Management Mode (16-bit) Handles system-wide functions like power management, system hardware control, and proprietary OEM-designed code. It is intended for use only by system firmware. All normal execution, including the operating system, is suspended. An alternate software system (which usually resides in the computer's firmware, or a hardware-assisted debugger) is then executed with high privileges.
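Returning to the segmented-addressing arithmetic described earlier, here is a minimal C sketch (our own illustration) that reproduces the worked 0xDEAD:0xCAFE example:
#include <stdio.h>
#include <stdint.h>

/* Compute a 20-bit real-mode linear address from a segment:offset
   pair: segment shifted left four bits (multiplied by 16) plus offset. */
static uint32_t linear_address(uint16_t segment, uint16_t offset)
{
    return ((uint32_t)segment << 4) + offset;
}

int main(void)
{
    /* Reproduces the worked example above: 0xDEAD:0xCAFE -> 0xEB5CE */
    printf("0x%05X\n", (unsigned)linear_address(0xDEAD, 0xCAFE));
    return 0;
}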
Switching modes The processor runs in real mode immediately after power-on, so an operating system kernel, or another program, must explicitly switch to a different mode if it wishes to run in anything but real mode. Switching modes is accomplished by modifying certain bits of the processor's control registers after some preparation, and some additional setup may be required after the switch. Examples With a computer running legacy BIOS, the BIOS and the boot loader are running in real mode; then the 64-bit operating system kernel checks the CPU, switches it into long mode, and starts new kernel-mode threads running 64-bit code. With a computer running UEFI, the UEFI firmware (except the CSM and legacy Option ROMs), the UEFI boot loader and the UEFI operating system kernel all run in long mode. Instruction types In general, the features of the modern x86 instruction set are: A compact encoding Variable length and alignment-independent (encoded as little endian, as is all data in the x86 architecture) Mainly one-address and two-address instructions, that is to say, the first operand is also the destination. Memory operands as both source and destination are supported (frequently used to read/write stack elements addressed using small immediate offsets). Both general and implicit register usage; although all seven (counting ebp) general registers in 32-bit mode, and all fifteen (counting rbp) general registers in 64-bit mode, can be freely used as accumulators or for addressing, most of them are also implicitly used by certain (more or less) special instructions; affected registers must therefore be temporarily preserved (normally stacked) if active during such instruction sequences. Produces conditional flags implicitly through most integer ALU instructions. Supports various addressing modes including immediate, offset, and scaled index, but not PC-relative, except for jumps (introduced as an improvement in the x86-64 architecture). Includes floating-point instructions that operate on a stack of registers. Contains special support for atomic read-modify-write instructions (xchg, cmpxchg/cmpxchg8b, xadd, and integer instructions which combine with the lock prefix) SIMD instructions (instructions which perform the same operation in parallel on many operands encoded in adjacent cells of wider registers). Stack instructions The x86 architecture has hardware support for an execution stack mechanism. Instructions such as push, pop, call and ret are used with the properly set-up stack to pass parameters, to allocate space for local data, and to save and restore call-return points. The ret size variant of the instruction is very useful for implementing space-efficient (and fast) calling conventions where the callee is responsible for reclaiming stack space occupied by parameters. When setting up a stack frame to hold local data of a recursive procedure there are several choices; the high-level enter instruction (introduced with the 80186) takes a procedure-nesting-depth argument as well as a local-size argument, and may be faster than more explicit manipulation of the registers (such as push bp; mov bp, sp; sub sp, size).
Whether it is faster or slower depends on the particular x86-processor implementation as well as the calling convention used by the compiler, programmer or particular program code; most x86 code is intended to run on x86 processors from several manufacturers and on different technological generations of processors, which implies highly varying microarchitectures and microcode solutions as well as varying gate- and transistor-level design choices. The full range of addressing modes (including immediate and base+offset), even for instructions such as push and pop, makes direct usage of the stack for integer, floating-point and address data simple, and also keeps the ABI specifications and mechanisms relatively simple compared to some RISC architectures (which require more explicit call-stack details). Integer ALU instructions x86 assembly has the standard mathematical operations add, sub, mul and idiv; the logical operators and, or, xor, plus negation (neg); arithmetic and logical bit shifts, sal/sar and shl/shr; rotates with and without carry, rcl/rcr and rol/ror; and a complement of BCD arithmetic instructions, aaa, aad, daa and others. Floating-point instructions x86 assembly language includes instructions for a stack-based floating-point unit (FPU). The FPU was an optional separate coprocessor for the 8086 through the 80386, was an on-chip option for the 80486 series, and has been a standard feature in every Intel x86 CPU since the Pentium. The FPU instructions include addition, subtraction, negation, multiplication, division, remainder, square roots, integer truncation, fraction truncation, and scale by power of two. The operations also include conversion instructions, which can load or store a value from memory in any of the following formats: binary-coded decimal, 32-bit integer, 64-bit integer, 32-bit floating-point, 64-bit floating-point or 80-bit floating-point (upon loading, the value is converted to the currently used floating-point mode). x86 also includes a number of transcendental functions, including sine, cosine, tangent, arctangent, exponentiation with base 2, and logarithms to bases 2, 10, or e. The stack-register-to-stack-register format of the instructions is usually fop st, st(n) or fop st(n), st, where st is equivalent to st(0), and st(n) is one of the 8 stack registers (st(0), st(1), ..., st(7)). Like the integers, the first operand is both the first source operand and the destination operand. fsubr and fdivr should be singled out as first swapping the source operands before performing the subtraction or division. The addition, subtraction, multiplication, division, store and comparison instructions include instruction modes that pop the top of the stack after their operation is complete. So, for example, faddp st(1), st performs the calculation st(1) = st(1) + st(0), then removes st(0) from the top of the stack, thus making what was the result in st(1) the top of the stack in st(0). SIMD instructions Modern x86 CPUs contain SIMD instructions, which largely perform the same operation in parallel on many values encoded in a wide SIMD register. Various instruction technologies support different operations on different register sets, but taken as a complete whole (from MMX to SSE4.2) they include general computations on integer or floating-point arithmetic (addition, subtraction, multiplication, shift, minimization, maximization, comparison, division or square root).
So for example, paddw mm0, mm1 performs 4 parallel 16-bit (indicated by the w) integer adds (indicated by the padd) of mm0 values to mm1 and stores the result in mm0. Streaming SIMD Extensions or SSE also includes a floating point mode in which only the very first value of the registers is actually modified (expanded in SSE2). Some other unusual instructions have been added including a sum of absolute differences (used for motion estimation in video compression, such as is done in MPEG) and a 16-bit multiply accumulation instruction (useful for software-based alpha-blending and digital filtering). SSE (since SSE3) and 3DNow! extensions include addition and subtraction instructions for treating paired floating point values like complex numbers. These instruction sets also include numerous fixed sub-word instructions for shuffling, inserting and extracting the values around within the registers. In addition there are instructions for moving data between the integer registers and XMM (used in SSE)/FPU (used in MMX) registers. Memory instructions The x86 processor also includes complex addressing modes for addressing memory with an immediate offset, a register, a register with an offset, a scaled register with or without an offset, and a register with an optional offset and another scaled register. So for example, one can encode mov eax, [Table + ebx + esi*4] as a single instruction which loads 32 bits of data from the address computed as (Table + ebx + esi * 4) offset from the ds selector, and stores it to the eax register. In general x86 processors can load and use memory matched to the size of any register it is operating on. (The SIMD instructions also include half-load instructions.) The x86 instruction set includes string load, store, move, scan and compare instructions (lods, stos, movs, scas and cmps) which perform each operation to a specified size (b for 8-bit byte, w for 16-bit word, d for 32-bit double word) then increments/decrements (depending on DF, direction flag) the implicit address register (si for lods, di for stos and scas, and both for movs and cmps). For the load, store and scan operations, the implicit target/source/comparison register is in the al, ax or eax register (depending on size). The implicit segment registers used are ds for si and es for di. The cx or ecx register is used as a decrementing counter, and the operation stops when the counter reaches zero or (for scans and comparisons) when inequality is detected. The stack is implemented with an implicitly decrementing (push) and incrementing (pop) stack pointer. In 16-bit mode, this implicit stack pointer is addressed as SS:[SP], in 32-bit mode it is SS:[ESP], and in 64-bit mode it is [RSP]. The stack pointer actually points to the last value that was stored, under the assumption that its size will match the operating mode of the processor (i.e., 16, 32, or 64 bits) to match the default width of the push/pop/call/ret instructions. Also included are the instructions enter and leave which reserve and remove data from the top of the stack while setting up a stack frame pointer in bp/ebp/rbp. However, direct setting, or addition and subtraction to the sp/esp/rsp register is also supported, so the enter/leave instructions are generally unnecessary. 
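Both the complex addressing modes and the string instructions described above can be illustrated with a short sketch in NASM-style 32-bit assembly; the labels table, src and dst are illustrative only:

section .data
table:  dd 10, 20, 30, 40
src:    dd 1, 2, 3, 4
dst:    dd 0, 0, 0, 0
section .text
example:
    mov ebx, 0                      ; base index
    mov esi, 2                      ; scaled index
    mov eax, [table + ebx + esi*4]  ; base + index*scale: loads 30 into eax
    mov esi, src                    ; implicit source pointer for movs
    mov edi, dst                    ; implicit destination pointer for movs
    mov ecx, 4                      ; ecx = number of double words to copy
    cld                             ; clear DF so esi and edi increment
    rep movsd                       ; copy ecx dwords from [esi] to [edi]
    ret

As for enter and leave, their effect is easiest to see next to the explicit register manipulation they replace.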
This code in the beginning of a function: push ebp ; save calling function's stack frame (ebp) mov ebp, esp ; make a new stack frame on top of our caller's stack sub esp, 4 ; allocate 4 bytes of stack space for this function's local variables ...is functionally equivalent to just: enter 4, 0 Other instructions for manipulating the stack include pushf/popf for storing and retrieving the (E)FLAGS register. The pusha/popa instructions will store and retrieve the entire integer register state to and from the stack. Values for a SIMD load or store are assumed to be packed in adjacent positions for the SIMD register and will align them in sequential little-endian order. Some SSE load and store instructions require 16-byte alignment to function properly. The SIMD instruction sets also include "prefetch" instructions which perform the load but do not target any register, used for cache loading. The SSE instruction sets also include non-temporal store instructions which will perform stores straight to memory without performing a cache allocate if the destination is not already cached (otherwise it will behave like a regular store.) Most generic integer and floating point (but no SIMD) instructions can use one parameter as a complex address as the second source parameter. Integer instructions can also accept one memory parameter as a destination operand. Program flow The x86 assembly has an unconditional jump operation, jmp, which can take an immediate address, a register or an indirect address as a parameter (note that most RISC processors only support a link register or short immediate displacement for jumping). Also supported are several conditional jumps, including jz (jump on zero), jnz (jump on non-zero), jg (jump on greater than, signed), jl (jump on less than, signed), ja (jump on above/greater than, unsigned), jb (jump on below/less than, unsigned). These conditional operations are based on the state of specific bits in the (E)FLAGS register. Many arithmetic and logic operations set, clear or complement these flags depending on their result. The comparison cmp (compare) and test instructions set the flags as if they had performed a subtraction or a bitwise AND operation, respectively, without altering the values of the operands. There are also instructions such as clc (clear carry flag) and cmc (complement carry flag) which work on the flags directly. Floating point comparisons are performed via fcom or ficom instructions which eventually have to be converted to integer flags. Each jump operation has three different forms, depending on the size of the operand. A short jump uses an 8-bit signed operand, which is a relative offset from the current instruction. A near jump is similar to a short jump but uses a 16-bit signed operand (in real or protected mode) or a 32-bit signed operand (in 32-bit protected mode only). A far jump is one that uses the full segment base:offset value as an absolute address. There are also indirect and indexed forms of each of these. In addition to the simple jump operations, there are the call (call a subroutine) and ret (return from subroutine) instructions. Before transferring control to the subroutine, call pushes the segment offset address of the instruction following the call onto the stack; ret pops this value off the stack, and jumps to it, effectively returning the flow of control to that part of the program. In the case of a far call, the segment base is pushed following the offset; far ret pops the offset and then the segment base to return. 
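As a minimal sketch of how these pieces combine (NASM syntax, 32-bit; the labels are illustrative only), the following routine uses cmp to set the flags, a signed conditional jump to loop, and a near call/ret pair:

sum_to_ten:
    xor eax, eax        ; eax = running total
    xor ecx, ecx        ; ecx = loop counter
.next:
    call add_counter    ; push the return offset and jump to the subroutine
    inc ecx             ; advance the counter
    cmp ecx, 10         ; set the flags as if computing ecx - 10
    jl .next            ; signed less-than: keep looping while ecx < 10
    ret                 ; on return, eax = 0 + 1 + ... + 9 = 45
add_counter:
    add eax, ecx        ; an ALU instruction; it also updates the flags
    ret                 ; pop the return offset pushed by call

The call here is a near call, pushing only the return offset; a far call, as described above, would push the segment base as well.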
There are also two similar instructions, int (interrupt), which saves the current (E)FLAGS register value on the stack, then performs a far call, except that instead of an address, it uses an interrupt vector, an index into a table of interrupt handler addresses. Typically, the interrupt handler saves all other CPU registers it uses, unless they are used to return the result of an operation to the calling program (in software called interrupts). The matching return from interrupt instruction is iret, which restores the flags after returning. Soft Interrupts of the type described above are used by some operating systems for system calls, and can also be used in debugging hard interrupt handlers. Hard interrupts are triggered by external hardware events, and must preserve all register values as the state of the currently executing program is unknown. In Protected Mode, interrupts may be set up by the OS to trigger a task switch, which will automatically save all registers of the active task. Examples "Hello world!" program for DOS in MASM style assembly Using interrupt 21h for output – other samples use libc's printf to print to stdout. .model small .stack 100h .data msg db 'Hello world!$' .code start: mov ah, 09h ; Display the message lea dx, msg int 21h mov ax, 4C00h ; Terminate the executable int 21h end start "Hello world!" program for Windows in MASM style assembly ; requires /coff switch on 6.15 and earlier versions .386 .model small,c .stack 1000h .data msg db "Hello world!",0 .code includelib libcmt.lib includelib libvcruntime.lib includelib libucrt.lib includelib legacy_stdio_definitions.lib extrn printf:near extrn exit:near public main main proc push offset msg call printf push 0 call exit main endp end "Hello world!" program for Windows in NASM style assembly ; Image base = 0x00400000 %define RVA(x) (x-0x00400000) section .text push dword hello call dword [printf] push byte +0 call dword [exit] ret section .data hello db "Hello world!" section .idata dd RVA(msvcrt_LookupTable) dd -1 dd 0 dd RVA(msvcrt_string) dd RVA(msvcrt_imports) times 5 dd 0 ; ends the descriptor table msvcrt_string dd "msvcrt.dll", 0 msvcrt_LookupTable: dd RVA(msvcrt_printf) dd RVA(msvcrt_exit) dd 0 msvcrt_imports: printf dd RVA(msvcrt_printf) exit dd RVA(msvcrt_exit) dd 0 msvcrt_printf: dw 1 dw "printf", 0 msvcrt_exit: dw 2 dw "exit", 0 dd 0 "Hello world!" program for Linux in NASM style assembly ; ; This program runs in 32-bit protected mode. ; build: nasm -f elf -F stabs name.asm ; link: ld -o name name.o ; ; In 64-bit long mode you can use 64-bit registers (e.g. rax instead of eax, rbx instead of ebx, etc.) ; Also change "-f elf " for "-f elf64" in build command. 
; section .data ; section for initialized data str: db 'Hello world!', 0Ah ; message string with new-line char at the end (10 decimal) str_len: equ $ - str ; calcs length of string (bytes) by subtracting the str's start address ; from this address ($ symbol) section .text ; this is the code section global _start ; _start is the entry point and needs global scope to be 'seen' by the ; linker --equivalent to main() in C/C++ _start: ; definition of _start procedure begins here mov eax, 4 ; specify the sys_write function code (from OS vector table) mov ebx, 1 ; specify file descriptor stdout --in gnu/linux, everything's treated as a file, ; even hardware devices mov ecx, str ; move start _address_ of string message to ecx register mov edx, str_len ; move length of message (in bytes) int 80h ; interrupt kernel to perform the system call we just set up - ; in gnu/linux services are requested through the kernel mov eax, 1 ; specify sys_exit function code (from OS vector table) mov ebx, 0 ; specify return code for OS (zero tells OS everything went fine) int 80h ; interrupt kernel to perform system call (to exit) "Hello world!" program for Linux in NASM style assembly using the C standard library ; ; This program runs in 32-bit protected mode. ; gcc links the standard-C library by default ; build: nasm -f elf -F stabs name.asm ; link: gcc -o name name.o ; ; In 64-bit long mode you can use 64-bit registers (e.g. rax instead of eax, rbx instead of ebx, etc..) ; Also change "-f elf " for "-f elf64" in build command. ; global main ;main must be defined as it being compiled against the C-Standard Library extern printf ;declares use of external symbol as printf is declared in a different object-module. ;Linker resolves this symbol later. segment .data ;section for initialized data string db 'Hello world!', 0Ah, 0h ;message string with new-line char (10 decimal) and the NULL terminator ;string now refers to the starting address at which 'Hello, World' is stored. segment .text main: push string ;push the address of first character of string onto stack. This will be argument to printf call printf ;calls printf add esp, 4 ;advances stack-pointer by 4 flushing out the pushed string argument ret ;return "Hello world!" program for 64-bit mode Linux in NASM style assembly ; build: nasm -f elf64 -F dwarf hello.asm ; link: ld -o hello hello.o DEFAULT REL ; use RIP-relative addressing modes by default, so [foo] = [rel foo] SECTION .rodata ; read-only data can go in the .rodata section on GNU/Linux, like .rdata on Windows Hello: db "Hello world!",10 ; 10 = `\n`. len_Hello: equ $-Hello ; get NASM to calculate the length as an assemble-time constant ;; write() takes a length so a 0-terminated C-style string isn't needed. It would be for puts SECTION .text global _start _start: mov eax, 1 ; __NR_write syscall number from Linux asm/unistd_64.h (x86_64) mov edi, 1 ; int fd = STDOUT_FILENO lea rsi, [rel Hello] ; x86-64 uses RIP-relative LEA to put static addresses into regs mov rdx, len_Hello ; size_t count = len_Hello syscall ; write(1, Hello, len_Hello); call into the kernel to actually do the system call ;; return value in RAX. RCX and R11 are also overwritten by syscall mov eax, 60 ; __NR_exit call number (x86_64) xor edi, edi ; status = 0 (exit normally) syscall ; _exit(0) Running it under strace verifies that no extra system calls are made in the process. The printf version would make many more system calls to initialize libc and do dynamic linking. 
But this is a static executable because we linked using ld without -pie or any shared libraries; the only instructions that run in user-space are the ones you provide. $ strace ./hello > /dev/null # without a redirect, your program's stdout is mixed strace's logging on stderr. Which is normally fine execve("./hello", ["./hello"], 0x7ffc8b0b3570 /* 51 vars */) = 0 write(1, "Hello world!\n", 13) = 13 exit(0) = ? +++ exited with 0 +++ Using the flags register Flags are heavily used for comparisons in the x86 architecture. When a comparison is made between two data, the CPU sets the relevant flag or flags. Following this, conditional jump instructions can be used to check the flags and branch to code that should run, e.g.: cmp eax, ebx jne do_something ; ... do_something: ; do something here Flags are also used in the x86 architecture to turn on and off certain features or execution modes. For example, to disable all maskable interrupts, you can use the instruction: cli The flags register can also be directly accessed. The low 8 bits of the flag register can be loaded into ah using the lahf instruction. The entire flags register can also be moved on and off the stack using the instructions pushf, popf, int (including into) and iret. Using the instruction pointer register The instruction pointer is called ip in 16-bit mode, eip in 32-bit mode, and rip in 64-bit mode. The instruction pointer register points to the memory address which the processor will next attempt to execute; it cannot be directly accessed in 16-bit or 32-bit mode, but a sequence like the following can be written to put the address of next_line into eax: call next_line next_line: pop eax This sequence of instructions generates position-independent code because call takes an instruction-pointer-relative immediate operand describing the offset in bytes of the target instruction from the next instruction (in this case 0). Writing to the instruction pointer is simple — a jmp instruction sets the instruction pointer to the target address, so, for example, a sequence like the following will put the contents of eax into eip: jmp eax In 64-bit mode, instructions can reference data relative to the instruction pointer, so there is less need to copy the value of the instruction pointer to another register. See also Assembly language X86 instruction listings X86 architecture CPU design List of assemblers Self-modifying code DOS References Further reading Manuals Intel 64 and IA-32 Software Developer Manuals AMD64 Architecture Programmer's Manual (Volume 1-5) Books Assembly languages X86 architecture
STOS BASIC STOS BASIC is a dialect of the BASIC programming language for the Atari ST personal computer. It was designed for creating games, but the set of high-level graphics and sound commands it offers is suitable for developing multimedia software without knowledge of the internals of the Atari ST. STOS BASIC was developed by Jawx (François Lionet and Constantin Sotiropoulos) and published by Mandarin Software (now known as Europress Software). History Although the first version of STOS to be released in the UK (version 2.3) was released in late 1988 by Mandarin Software, a version had been released earlier in France. Version 2.3 was bundled with three complete games (Orbit, Zoltar and Bullet Train), and many accessories and utilities (such as sprite and music editors). STOS was initially implemented as a BASIC interpreter; a compiler was soon released that enabled the user to compile a STOS BASIC program into an executable file that ran much faster than interpreted code. In order to be compatible with the compiler, STOS needed to be upgraded to version 2.4 (which came with the compiler). STOS 2.4 also fixed a few bugs and had faster floating-point mathematics code, but the floating-point numbers had a smaller range. STOS 2.5 was released to make STOS run on Atari STEs with TOS 1.06 (1.6), and then STOS 2.6 was needed to make STOS run on Atari STEs with TOS 1.62. STOS 2.7 was a compiler-only upgrade that made programs with the STOS tracker extension (used to play MOD music) compile. There was a 3rd-party hack called STOS 2.07 designed to make STOS run on even more TOS versions, and behave properly on the Atari Falcon. Around 2001, François Lionet released the source code of STOS BASIC via the Clickteam website. On 4 April 2019, François Lionet announced the release of AMOS2 on his website Amos2.tech. AMOS2 replaces STOS and AMOS together, using JavaScript as its code interpreter, making the new development system platform-independent and generally deployed in web browsers. AMOS2 is now known as AOZ Studio. Extensions It was possible to extend STOS by adding extensions, which added more commands to the language and increased its functionality. The first such extension to be released was STOS Maestro, which added the ability to play sampled sounds. STOS Maestro Plus was STOS Maestro bundled with a sound-sampler cartridge. Other extensions included TOME, STOS 3D, STE extension, Misty, The Missing Link, Control extension, Extra and Ninja Tracker. These extensions kept STOS alive for many years after its release. Criticisms While giving programmers the ability to rapidly create a game without knowing the internals, STOS was criticised for being slow (especially when intensively using the non-high-level commands), and for not allowing the user to program in a structured manner. Other platforms In 1990, AMOS BASIC was released for the Amiga. It was originally meant to shortly follow the release of STOS on the Atari ST, but was ultimately released about two years after the UK release of STOS. This turned out to be a blessing in disguise for the Amiga community thanks to the extra development time. Not only did AMOS take advantage of the extra Amiga hardware and have more commands than STOS, but the style of BASIC was completely different: it had no line numbers, and there were many structured programming constructs (at one time, the STOS Club Newsletter published a program that allowed the reader to program STOS using that style).
While it was often possible to directly convert STOS BASIC programs that did not heavily rely on extensions to AMOS BASIC, the reverse was not usually true. A PC version called PCOS was once mentioned, but that never materialised. Instead, the publishers Mandarin Software renamed themselves Europress Software. One of the developers in Jawx, Francois Lionet, was later to form Clickteam with Yves Lamoureux and went on to release the Klik (click) series of games-creation tools (which were dissimilar to STOS as they use a primarily mouse-driven interface without the need for traditional code). Klik & Play, The Games Factory, Multimedia Fusion and Multimedia Fusion 2 have been released in this series. References External links General History of STOS and AMOS STOS Time Tunnel - A site dedicated to STOS STOS - Basic Language for Making Games - Article about STOS and its extensions (with photos of the products and scans of old ads) Publishers Clickteam STOS and AMOS page - Source code for STOS and AMOS in 68000 ASM (archived ZIP, Compiler) Patches Generic STOS fixer - Use this to fix compiled STOS programs so that they run on a greater number of TOS versions. STOS Basic 2.07 - Use this to patch a version of STOS to version 2.07. It makes the compiled programs compatible with more TOS versions and hardware. It even makes STOS work properly on the Atari Falcon Resources MINI DOC POUR LE STOS BASIC (Atari) - A small documentation of STOS's most simple commands (in French). http://www.umich.edu/~archive/atari/Programming/Stos/ - Index of the Atari Archive STOS section Nostalgia STOS Wiz-Coders Forgotten Creations by Simon Hazelgrove Silly Software Atari ST software BASIC compilers BASIC interpreters Discontinued BASICs Video game development software BASIC programming language family
On-board data handling The on-board data handling (OBDH) subsystem of a spacecraft is the subsystem which carries and stores data between the various electronics units and the ground segment, via the telemetry, tracking and command (TT&C) subsystem. In the earlier decades of the space industry, the OBDH function was usually considered a part of the TT&C, particularly before computers became common on board. In recent years, the OBDH function has expanded, so much that it is generally considered a separate subsystem to the TT&C, which is these days concerned solely with the RF link between the ground and the spacecraft. Functions commonly performed by the OBDH are: Reception, error correction and decoding of telecommands (TCs) from the TT&C Forwarding of telecommands for execution by the target Avionics Storage of telecommands until a defined time ('time tagged' TCs) Storage of telecommands until a defined position ('position tagged' TCs) Measurement of discrete values such as voltages, temperatures, binary statuses etc. Collection of measurements made by other units and subsystems via one or more data busses, such as MIL-STD-1553. Real-time buffering of the measurements in a data pool Provision of a processing capability to achieve the aims of the mission, often using the data collected. Collation and encoding of pre-defined telemetry frames Storage of telemetry frames in a mass memory Downlinking of telemetry to the ground, via the TT&C. Management and distribution of time signals Telecommand reception The OBDH receives the TCs as a synchronous PCM data stream from the TT&C Telecommand execution The desired effect of the telecommand may be just to change a value in the on-board software, or to open/close a latching relay to reconfigure or power a unit, or maybe to fire a thruster or main engine. Whichever effect is desired, the OBDH subsystem will facilitate this either by sending an electric pulse from the OBC, or by passing the command through a data bus to the unit which will eventually execute the TC. Some TCs are part of a large block of commands, used to upload updated software or data tables to fine tune the operation of the spacecraft, or to deal with anomalies. Time-tagged telecommands It is often required to delay a command's execution until a certain time. This is often because the spacecraft is not in view of the ground station, but may also be for reasons of precision. The OBC will store the TC until the required time in a queue, and then execute it. Position-tagged telecommands Similar to time-tagged commands are commands that are stored for execution until the spacecraft is at a specified position. These are most useful for earth observation satellites, which need to start an observation over a specified point of the Earth's surface. The spacecraft, often in sun-synchronous orbits, take a precisely repeating track over the earth. Observations which are taken from the same position may be compared using interferometry, if they are in close enough register. The precise position required is sensed using GPS. Once a position tagged command has been executed, it may be flagged for deletion or left to execute again when the spacecraft is once again over the same point. Processing function The modern OBDH always uses an on-board computer (OBC) that is reliable, usually with redundant processors. 
The processing power is made available to other applications which support the spacecraft bus, such as attitude control algorithms, thermal control, failure detection isolation and recovery. If the mission itself requires only a small amount of computing power (such as a small scientific satellite) then the payload may also be controlled by the software running on the OBC, to save launch mass and the considerable expense of a dedicated payload computer. See also Spacecraft bus References External links https://ecss.nl/standard/ecss-e-st-50-04c-space-data-links-telecommand-protocols-synchronization-and-channel-coding/ Avionics
OpenROAD OpenROAD stands for "Open Rapid Object Application Development". It is a software product of Actian Corporation. OpenROAD is a fourth-generation programming language (4GL) which includes a suite of development tools, with a built-in integrated development environment (IDE), itself written in OpenROAD, and a code repository, allowing applications to be developed and deployed on Microsoft Windows and UNIX/Linux platforms. History The history of OpenROAD is closely tied to that of the Ingres relational database. OpenROAD started in the early 1990s as a product called Windows 4GL. When Ingres was re-badged as OpenIngres, the new name of OpenROAD was born. Since that time it has been through a number of major developments. The Ingres product set (marketed by ASK Corporation, Computer Associates, Ingres Corporation and then Actian) was popular in the governments of North West Europe, and can be found in many government departments. OpenROAD, née Windows4GL, appeared in beta form on the Sun platform in 1991 as Windows4GL 1.0, and was available to British universities under a special license agreement. The development environment was known as the Sapphire Editor. The Sapphire Editor allowed the creation of complex GUI interfaces using an IDE, rather than large volumes of Motif code / resource files. This was one of the first environments to enable rapid prototyping of GUI clients. Windows4GL 2.0 introduced Microsoft Windows compatibility and the debugger. OpenROAD 3.0 brought stability on MS Windows, and OpenROAD 3.5(1) stabilised the product further. OpenROAD 4.1 introduced an interface to ActiveX controls, providing access to ActiveX control attributes and methods within the language. This mechanism often requires 'wrapper' DLLs to be written to handle data type issues, one of which is a 2000-character limit on strings of text. It is an interpreted language that uses a runtime distributable client to process 'image' files, so there are no DLL or .NET dependency issues under MS Windows (ActiveX aside). It was possible to use images in any environment (Unix, VMS or MS Windows up to version 3.51); however, portability issues between GUI environments (mostly related to font differences) made this difficult. A Macintosh beta version was produced. After 3.51, the UNIX environments used a commercial PC emulator to give native capability, possibly one of the hurdles on the ROAD to its open-source status across all platforms. Variations in the distribution include the FAT client (requires Ingres NET for communication), the thin eClient (can be used without Ingres NET but needs to use the Application Server instead (DCOM)), and finally the mClient for mobile Windows clients (HTTP services required to interface to the Application Server). OpenROAD 2006 (5.0+) became generally available in December 2006. OpenROAD 5.1 became generally available in April 2011. The defining feature of the release was general-purpose system classes for XML support, to allow creation and parsing of arbitrary XML documents without the need to create additional user classes or to use external components (3GL procedures or external class libraries). Providing an XML-based export file format that is documented and human-readable, produces clean differences between revisions of a file, and allows changes to be merged, enables OpenROAD source components to be managed by many different Software Configuration Management (SCM) systems. OpenROAD 2006 5.5 with UNICODE support was a special limited release.
OpenROAD 6.0 is the current general release and includes the additional UNICODE support of 5.5. Language structure The syntax of OpenROAD is very closely linked to that of the Ingres database, with direct support for embedded SQL. In a similar way to other event-based programming languages, code can be placed in groups for related windows/system events. The syntax is similar to Microsoft Visual Basic, allowing OpenROAD users to quickly adapt to Visual Basic with the help of Intellisense. Intellisense is still not available in the OpenROAD IDE (as of Q2 2008); however, editors like TextPad have syntax files that allow colour-coding of source files using keyword recognition. OpenROAD comes with system classes with the following functionality: application source (allowing source artifacts to be fetched, created and modified dynamically) database access data types (scalar and complex) runtime control visual forms (incl. common widgets and controls) Features object oriented language: class, simple inheritance (no interfaces, currently no constructor/destructor but planned for version 5.0) Cross platform support Integrated Debugger/IDE Integrated Application Server Support for Windows CE development (V5.0) Support for VB.Net/Java Integration Features needed (Q2 2008) Intellisense for source, SQL statements and user defined objects. The ability to construct user objects that inherit from the system classes Better configuration management for large development teams Native access to .NET classes In-process access to Ingres NET for FAT clients, making distribution easier. Extension of the OpenROAD language into the Ingres database engine, replacing the procedure language. Access to the sources of the OpenROAD language Platforms OpenROAD applications can be deployed on the following clients: Thin Client (Web), Windows, and various flavours of Linux/Unix. It has support for n-tier systems by using the OpenROAD Application Server. The Application Server can be deployed on Windows or Linux/Unix platforms. It has built-in support for the Ingres database, or, using a product called Enterprise Access, for Oracle, SQL Server or DB2, which allows the client to use the same SQL syntax for all target databases. External links Product links: Ingres Corporation Community links: North American Ingres Users Association German Ingres User Association Ingres UserGroup Nederland OpenROAD FAQ (1997) Ingres Community OpenROAD Wiki Mailing Lists: Openroad-users Mailing List Webcasts: OpenROAD Application Development Fourth-generation programming languages
Outline of software The following outline is provided as an overview of and topical guide to software: Software – collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer for some purposes. In other words, software is a set of programs, procedures, algorithms and its documentation concerned with the operation of a data processing system. The term was coined to contrast to the old term hardware (meaning physical devices). In contrast to hardware, software "cannot be touched". Software is also sometimes used in a more narrow sense, meaning application software only. Sometimes the term includes data that has not traditionally been associated with computers, such as film, tapes, and records. . What type of thing is software? Software can be described as all of the following: Technology Computer technology Tools Types of software Application software – end-user applications of computers such as word processors or video games, and ERP software for groups of users. Business software Computer-aided design Databases Decision-making software Educational software Emotion-sensitive software Image editing Industrial automation Mathematical software Medical software Molecular modeling software Quantum chemistry and solid state physics software Simulation software Spreadsheets Telecommunications (i.e., the Internet and everything that flows on it) Video editing software Video games Word processors Middleware controls and co-ordinates distributed systems. Programming languages – define the syntax and semantics of computer programs. For example, many mature banking applications were written in the language COBOL, invented in 1959. Newer applications are often written in more modern languages. System software – provides the basic functions for computer usage and helps run the computer hardware and system. It includes a combination of the following: Device driver Operating system Package management system Server Utility Window system Teachware – any special breed of software or other means of product dedicated to education purposes in software engineering and beyond in general education. Testware – any software for testing hardware or software. Firmware – low-level software often stored on electrically programmable memory devices. Firmware is given its name because it is treated like hardware and run ("executed") by other software programs. Firmware often is not accessible for change by other entities but the developers' enterprises. Shrinkware is the older name given to consumer-purchased software, because it was often sold in retail stores in a shrink wrapped box. Device drivers – control parts of computers such as disk drives, printers, CD drives, or computer monitors. Programming tools – assist a programmer in writing computer programs, and software using various programming languages in a more convenient way. The tools include: Compilers Debuggers Interpreters Linkers Text editors profiler Integrated development environment (IDE) – single application for managing all of these functions. 
Software products By publisher List of Adobe software List of Microsoft software By platform List of Macintosh software List of old Macintosh software List of proprietary software for Linux List of Linux audio software List of Linux games By type List of software categories List of 2D animation software List of 3D animation software List of 3D computer graphics software List of 3D modeling software List of antivirus software List of chess software List of compilers List of computer-aided design software List of computer algebra systems List of computer-assisted organic synthesis software List of computer simulation software List of concept- and mind-mapping software List of content management systems List of desktop publishing software List of discrete event simulation software List of finite element software packages List of graphing software List of HDL simulators List of text editors List of HTML editors List of information graphics software List of Linux distributions List of operating systems List of protein structure prediction software List of molecular graphics systems List of numerical analysis software List of optimization software List of PDF software List of PHP editors List of proof assistants List of quantum chemistry and solid state physics software List of spreadsheet software List of statistical packages List of theorem provers List of tools for static code analysis List of Unified Modeling Language tools List of video editing software List of web browsers Comparisons Comparison of 3D computer graphics software Comparison of accounting software Comparison of audio player software Comparison of computer-aided design editors Comparison of data modeling tools Comparison of database tools Comparison of desktop publishing software Comparison of digital audio editors Comparison of DOS operating systems Comparison of email clients Comparison of EM simulation software Comparison of force field implementations Comparison of instant messaging clients Comparison of issue tracking systems Comparison of Linux distributions Comparison of mail servers Comparison of network monitoring systems Comparison of nucleic acid simulation software Comparison of operating systems Comparison of raster graphics editors Comparison of software for molecular mechanics modeling Comparison of system dynamics software Comparison of text editors Comparison of vector graphics editors Comparison of web frameworks Comparison of web server software Comparison of word processors Comparison of deep-learning software History of software History of software engineering History of free and open-source software History of software configuration management History of programming languages Timeline of programming languages History of operating systems History of Mac OS X History of Microsoft Windows Timeline of Microsoft Windows History of the web browser Web browser history Software development Software development  (outline) – development of a software product, which entails computer programming (process of writing and maintaining the source code), but also encompasses a planned and structured process from the conception of the desired software to its final manifestation. Therefore, software development may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products. 
Computer programming Computer programming  (outline) – Software engineering Software engineering  (outline) – Software distribution Software distribution – Software licenses Beerware Free Free and open source software Freely redistributable software Open-source software Proprietary software Public domain software Revenue models Adware Donationware Freemium Freeware Commercial software Nagware Postcardware Shareware Delivery methods Digital distribution List of mobile software distribution platforms On-premises software Pre-installed software Product bundling Software as a service Software plus services Scams Scareware Malware End of software life cycle Abandonware Software industry Software industry Software publications Free Software Magazine InfoWorld PC Magazine Software Magazine Wired (magazine) Persons influential in software Bill Gates Steve Jobs Jonathan Sachs Wayne Ratliff See also Outline of information technology Outline of computers Outline of computing List of computer hardware terms Bachelor of Science in Information Technology Custom software Functional specification Marketing strategies for product software Service-Oriented Modeling Framework Bus factor Capability Maturity Model Software publisher User experience References External links Outline Software Software
Windows XP Professional x64 Edition Microsoft Windows XP Professional x64 Edition, released on April 25, 2005, is an edition of Windows XP for x86-64 personal computers. It is designed to use the expanded 64-bit memory address space provided by the x86-64 architecture. The primary benefit of moving to 64-bit is the increase in the maximum allocatable random-access memory (RAM). 32-bit editions of Windows XP are limited to a total of 4 gigabytes. Although the theoretical memory limit of a 64-bit computer is about 16 exabytes (17.1 billion gigabytes), Windows XP x64 is limited to 128GB of physical memory and 16 terabytes of virtual memory. Windows XP Professional x64 Edition uses the same kernel and code tree as Windows Server 2003 and is serviced by the same service packs. However, it includes client features of Windows XP such as System Restore, Windows Messenger, Fast User Switching, Welcome Screen, Security Center and games, which Windows Server 2003 does not have. Windows XP Professional x64 Edition is not to be confused with Windows XP 64-Bit Edition, as the latter was designed for the Itanium architecture. During the initial development phases, Windows XP Professional x64 Edition was named Windows XP 64-Bit Edition for 64-Bit Extended Systems. Advantages Supports up to 128GB of RAM. Supports up to two physical CPUs (in separate physical sockets) and up to 64 logical processors (i.e. cores or threads on a single CPU). As such, the OS supports all commercially available multicore CPUs, including the Intel Core and AMD FX series. Uses the Windows Server 2003 kernel, which is newer than that of 32-bit Windows XP and has improvements to enhance scalability. Windows XP Professional x64 Edition also introduces Kernel Patch Protection (also known as PatchGuard) which can help improve security by helping to eliminate rootkits. Supports GPT-partitioned disks for data volumes (but not bootable volumes) after SP1, which allows disks greater than 2TB to be used as a single GPT partition for storing data. Allows faster encoding of audio or video, higher-performance video gaming and faster 3D rendering in software optimized for 64-bit hardware. Ships with Internet Information Services (IIS) version 6.0. All other 32-bit editions of Windows XP have IIS v5.1. Ships with Windows Media Player (WMP) version 10. Windows XP Professional shipped with WMP 8 (with WMP 9 shipping with Service Pack 2 and later), although WMP 11 is available for Windows XP Service Pack 2 or later. Benefits from IPsec features and improvements made in Windows Server 2003. Benefits from Shadow Copy features introduced in Windows Server 2003. Remote Desktop Services supports Unicode keyboard input, client-side time-zone redirection, GDI+ rendering primitives for improved performance, FIPS encryption, fallback printer driver, auto-reconnect and new Group Policy settings. Files and Settings Transfer Wizard supports migrating settings from both 32-bit and 64-bit Windows XP PCs. Software compatibility Windows XP Professional x64 Edition uses a technology named Windows-on-Windows 64-bit (WoW64), which permits the execution of 32-bit software. It was first used in Windows XP 64-Bit Edition (for the Itanium architecture). Later, it was adopted for x64 editions of Windows XP and Windows Server 2003. Since the x86-64 architecture includes hardware-level support for 32-bit instructions, WoW64 simply switches the process between 32- and 64-bit modes.
As a result, x86-64 architecture microprocessors suffer no performance loss when executing 32-bit Windows applications. On the Itanium architecture, WoW64 was required to translate 32-bit x86 instructions into their 64-bit Itanium equivalents (which in some cases were implemented in quite different ways) so that the processor could execute them. All 32-bit processes are shown with *32 in the task manager, while 64-bit processes have no extra text present. Although 32-bit applications can be run transparently, the mixing of the two types of code within the same process is not allowed. A 64-bit program cannot use a 32-bit dynamic-link library (DLL) and, similarly, a 32-bit program cannot use a 64-bit DLL. This may lead to the need for library developers to provide both 32-bit and 64-bit binary versions of their libraries. Specifically, 32-bit shell extensions for Windows Explorer fail to work with 64-bit Windows Explorer. Windows XP x64 Edition ships with both 32-bit and 64-bit versions of Windows Explorer, and the 32-bit version can become the default Windows Shell. Windows XP x64 Edition also includes both 32-bit and 64-bit versions of Internet Explorer 6, so that users can still use browser extensions or ActiveX controls that are not available in 64-bit versions. Only 64-bit drivers are supported in Windows XP x64 Edition, but 32-bit codecs are supported as long as the media player that uses them is 32-bit. Installation of programs By default, 64-bit (x86-64) Windows programs are installed in their own folders under "C:\Program Files", while 32-bit (x86-32) Windows programs are installed in their own folders under "C:\Program Files (x86)". Known limitations There are some common issues that arise with Windows XP Professional x64 Edition. Does not include NTVDM or Windows on Windows, so 16-bit Windows applications or native MS-DOS applications cannot run. Some old 32-bit programs use 16-bit installers which do not run; however, replacements for 16-bit installers such as ACME Setup versions 2.6, 3.0, 3.01, 3.1 and InstallShield 5.x are hardcoded into WoW64 to mitigate this issue. The same is true with later 64-bit versions of Windows. Only 64-bit drivers are supported. Any 32-bit Windows Explorer shell extensions fail to work with 64-bit Windows Explorer. However, Windows XP x64 Edition also ships with a 32-bit Windows Explorer, and it is possible to make it the default Windows Shell. Windows Command Prompt does not load in full-screen. No native support for Type 1 fonts. Does not contain a Web Extender Client component for Web Folders (WebDAV). Spell checking is not available in Outlook Express. IEEE 1394 (FireWire) audio is not supported. Does not support hibernation if the PC's RAM is greater than 4GB. No EFI or UEFI boot support; an ACPI BIOS is required. Only provides English or Japanese as the native display language. These MUIs are available for the English version: Chinese, French, German, Italian, Japanese, Korean, Spanish, Swedish. Service packs The RTM version of Windows XP Professional x64 Edition was built from the Windows Server 2003 Service Pack 1 codebase. Because Windows XP Professional x64 Edition comes from a different codebase than 32-bit Windows XP, its service packs are also developed separately. For the same reason, Service Pack 2 for Windows XP x64 Edition, released on March 13, 2007, is not the same as Service Pack 2 for 32-bit versions of Windows XP.
In fact, due to the earlier release date of the 32-bit version, many of the key features introduced by Service Pack 2 for 32-bit (x86) editions of Windows XP were already present in the RTM version of its x64 counterpart. Service Pack 2 is the last released service pack for Windows XP Professional x64 Edition. Upgrade A machine running Windows XP Professional x64 Edition can be moved to Windows Vista or Windows 7, but, unlike other versions of Windows XP, it cannot be upgraded in place to Windows Vista, as the 64-bit Vista DVD recognizes the installed OS as a 32-bit system. Vista can still be installed, but it requires a clean install. The last version of Microsoft Office to be compatible with Windows XP Professional x64 Edition is Office 2007, and the last version of Internet Explorer compatible with the operating system is Internet Explorer 8 (Service Pack 2 is required). References Further reading External links Windows XP X86-64 operating systems
Linux startup process Linux startup process is the multi-stage initialization process performed during booting a Linux installation. It is in many ways similar to the BSD and other Unix-style boot processes, from which it derives. Booting a Linux installation involves multiple stages and software components, including firmware initialization, execution of a boot loader, loading and startup of a Linux kernel image, and execution of various startup scripts and daemons. For each of these stages and components there are different variations and approaches; for example, GRUB, coreboot or Das U-Boot can be used as boot loaders (historical examples are LILO, SYSLINUX or Loadlin), while the startup scripts can be either traditional init-style, or the system configuration can be performed through modern alternatives such as systemd or Upstart. Overview Early stages of the Linux startup process depend very much on the computer architecture. IBM PC compatible hardware is one architecture Linux is commonly used on; on these systems, the BIOS plays an important role, which might not have exact analogs on other systems. In the following example, IBM PC compatible hardware is assumed: The BIOS performs startup tasks like the power-on self-test specific to the actual hardware platform. Once the hardware is enumerated and the hardware which is necessary for boot is initialized correctly, the BIOS loads and executes the boot code from the configured boot device. The boot loader often presents the user with a menu of possible boot options and has a default option, which is selected after some time passes. Once the selection is made, the boot loader loads the kernel into memory, supplies it with some parameters and gives it control. The kernel, if compressed, will decompress itself. It then sets up system functions such as essential hardware and memory paging, and calls start_kernel() which performs the majority of system setup (interrupts, the rest of memory management, device and driver initialization, etc.). It then starts up, separately, the idle process, scheduler, and the init process, which is executed in user space. The init process either consists of scripts that are executed by the shell (sysv, bsd, runit) or of configuration files that are executed by the binary components (systemd, upstart). Init has specific levels (sysv, bsd) or targets (systemd), each of which consists of a specific set of services (daemons). These provide various non-operating-system services and structures and form the user environment. A typical server environment starts a web server, database services, and networking. The typical desktop environment begins with a daemon, called the display manager, that starts a graphic environment which consists of a graphical server that provides a basic underlying graphical stack and a login manager that provides the ability to enter credentials and select a session. After the user has entered the correct credentials, the session manager starts a session. A session is a set of programs such as UI elements (panels, desktops, applets, etc.) which, together, can form a complete desktop environment. On shutdown, init is called to close down all user space functionality in a controlled manner. Once all the other processes have terminated, init makes a system call to the kernel instructing it to shut the system down. Boot loader phase The boot loader phase varies by computer architecture.
Since the earlier phases are not specific to the operating system, the BIOS-based boot process for x86 and x86-64 architectures is considered to start when the master boot record (MBR) code is executed in real mode and the first-stage boot loader is loaded. In UEFI systems, the Linux kernel can be executed directly by UEFI firmware via EFISTUB, but GRUB 2 or systemd-boot is usually used as a boot loader. Below is a summary of some popular boot loaders: GRUB 2 differs from GRUB 1 by being capable of automatic detection of various operating systems and automatic configuration. The stage1 is loaded and executed by the BIOS from the master boot record (MBR). The intermediate stage loader (stage1.5, usually core.img) is loaded and executed by the stage1 loader. The second-stage loader (stage2, the /boot/grub/ files) is loaded by the stage1.5 and displays the GRUB startup menu that allows the user to choose an operating system or examine and edit startup parameters. After a menu entry is chosen and optional parameters are given, GRUB loads the Linux kernel into memory and passes control to it. GRUB 2 is also capable of chain-loading another boot loader. In UEFI systems, the stage1 and stage1.5 are usually the same UEFI application file (such as grubx64.efi for x64 UEFI systems). systemd-boot (formerly Gummiboot), a bootloader included with systemd that requires minimal configuration (for UEFI systems only). SYSLINUX/ISOLINUX is a boot loader that specializes in booting full Linux installations from FAT filesystems. It is often used for boot or rescue floppy discs, live USBs, and other lightweight boot systems. ISOLINUX is generally used by Linux live CDs and bootable install CDs. rEFInd, a boot manager for UEFI systems. coreboot is a free implementation of the UEFI or BIOS and is usually deployed with the system board, with field upgrades provided by the vendor if need be. Parts of coreboot become the system's BIOS and stay resident in memory after boot. Das U-Boot is a boot loader for embedded systems. It is used on systems that do not have a BIOS/UEFI but rather employ custom methods to read the boot loader into memory and execute it. Historical boot loaders, no longer in common use, include: LILO does not understand or parse filesystem layout. Instead, a configuration file (/etc/lilo.conf) is created in a live system which maps raw offset information (via the mapper tool) about the location of the kernel and RAM disks (initrd or initramfs). The configuration file, which includes data such as boot partition and kernel pathname for each, as well as customized options if needed, is then written together with bootloader code into the MBR bootsector. When this bootsector is read and given control by the BIOS, LILO loads the menu code and draws it, then uses stored values together with user input to calculate and load the Linux kernel or chain-load any other boot loader. GRUB 1 includes logic to read common file systems at run-time in order to access its configuration file. This gives GRUB 1 the ability to read its configuration file from the filesystem rather than have it embedded into the MBR, which allows it to change the configuration at run-time and specify disks and partitions in a human-readable format rather than relying on offsets. It also contains a command-line interface, which makes it easier to fix or modify GRUB if it is misconfigured or corrupt. Loadlin is a boot loader that can replace a running DOS or Windows 9x kernel with the Linux kernel at run time.
This can be useful in the case of hardware that needs to be switched on via software and for which such configuration programs are proprietary and only available for DOS. This booting method is less necessary nowadays, as Linux has drivers for a multitude of hardware devices, but it has seen some use in mobile devices. Another use case is when the Linux is located on a storage device which is not available to the BIOS for booting: DOS or Windows can load the appropriate drivers to make up for the BIOS limitation and boot Linux from there. Kernel phase The Linux kernel handles all operating system processes, such as memory management, task scheduling, I/O, interprocess communication, and overall system control. This is loaded in two stages – in the first stage, the kernel (as a compressed image file) is loaded into memory and decompressed, and a few fundamental functions such as basic memory management are set up. Control is then switched one final time to the main kernel start process. Once the kernel is fully operational – and as part of its startup, upon being loaded and executing – the kernel looks for an init process to run, which (separately) sets up a user space and the processes needed for a user environment and ultimate login. The kernel itself is then allowed to go idle, subject to calls from other processes. For some platforms (like ARM 64-bit), kernel decompression has to be performed by the boot loader instead. The kernel is typically loaded as an image file, compressed into either zImage or bzImage formats with zlib. A routine at the head of it does a minimal amount of hardware setup, decompresses the image fully into high memory, and takes note of any RAM disk if configured. It then executes kernel startup via ./arch/i386/boot/head and the startup_32 () (for x86 based processors) process. The startup function for the kernel (also called the swapper or process 0) establishes memory management (paging tables and memory paging), detects the type of CPU and any additional functionality such as floating point capabilities, and then switches to non-architecture specific Linux kernel functionality via a call to start_kernel(). start_kernel executes a wide range of initialization functions. It sets up interrupt handling (IRQs), further configures memory, starts the Init process (the first user-space process), and then starts the idle task via cpu_idle(). Notably, the kernel startup process also mounts the initial RAM disk ("initrd") that was loaded previously as the temporary root file system during the boot phase. The initrd allows driver modules to be loaded directly from memory, without reliance upon other devices (e.g. a hard disk) and the drivers that are needed to access them (e.g. a SATA driver). This split of some drivers statically compiled into the kernel and other drivers loaded from initrd allows for a smaller kernel. The root file system is later switched via a call to pivot_root() which unmounts the temporary root file system and replaces it with the use of the real one, once the latter is accessible. The memory used by the temporary root file system is then reclaimed. Thus, the kernel initializes devices, mounts the root filesystem specified by the boot loader as read only, and runs Init (/sbin/init) which is designated as the first process run by the system (PID = 1). A message is printed by the kernel upon mounting the file system, and by Init upon starting the Init process. 
According to Red Hat, the detailed kernel process at this stage is summarized as follows: "When the kernel is loaded, it immediately initializes and configures the computer's memory and configures the various hardware attached to the system, including all processors, I/O subsystems, and storage devices. It then looks for the compressed initrd image in a predetermined location in memory, decompresses it, mounts it, and loads all necessary drivers. Next, it initializes virtual devices related to the file system, such as LVM or software RAID before unmounting the initrd disk image and freeing up all the memory the disk image once occupied. The kernel then creates a root device, mounts the root partition read-only, and frees any unused memory. At this point, the kernel is loaded into memory and operational. However, since there are no user applications that allow meaningful input to the system, not much can be done with it." An initramfs-style boot is similar, but not identical, to the described initrd boot.

At this point, with interrupts enabled, the scheduler can take control of the overall management of the system, providing pre-emptive multitasking, while the init process is left to continue booting the user environment in user space.

Early user space

initramfs, also known as early user space, has been available since version 2.5.46 of the Linux kernel, with the intent of replacing as many functions as possible that the kernel would previously have performed during the start-up process. Typical uses of early user space are to detect which device drivers are needed to load the main user-space file system, and to load them from a temporary filesystem. Many distributions use dracut to generate and maintain the initramfs image.

Init process

Once the kernel has started, it starts the init process. Historically this was the "SysV init", which was just called "init". More recent Linux distributions are likely to use one of the more modern alternatives, such as systemd. Broadly, these alternatives fall under operating-system service management.
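Whatever the implementation – SysV init, systemd, or another service manager – PID 1 has two irreducible jobs: start the rest of user space and reap orphaned child processes. The sketch below is a deliberately minimal, hypothetical init that launches one child (a shell standing in for the real service-startup logic) and then loops in wait(); it illustrates the role, not any real distribution's init.

    /* tiny_init.c - an illustrative, minimal PID-1 program.
     * Not any real init system; it only shows the two core duties:
     * spawning user space and reaping terminated children forever. */
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static pid_t spawn_shell(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: stand-in for "start services / a getty / a shell". */
            execl("/bin/sh", "sh", (char *)NULL);
            _exit(127);             /* exec failed */
        }
        return pid;
    }

    int main(void)
    {
        pid_t shell = spawn_shell();

        /* Parent (PID 1): reap every child that terminates, including
         * orphans re-parented to us, and restart the shell if it exits. */
        for (;;) {
            int status;
            pid_t pid = wait(&status);
            if (pid == shell)
                shell = spawn_shell();
            else if (pid < 0)
                pause();            /* no children; sleep until a signal */
        }
    }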
See also

SYSLINUX
Windows startup process

References

External links

Greg O'Keefe - From Power Up To Bash Prompt
A developerWorks article by M. Tim Jones
Bootchart: Boot Process Performance Visualization
The bootstrap process on EFI systems, LWN.net, February 11, 2015, by Matt Fleming

Booting Linux kernel
Operating System (OS)
998
Lt. Kernal

Lt. Kernal is a SCSI hard drive subsystem developed for the 8-bit Commodore 64 and Commodore 128 home computers. The Lt. Kernal is capable of a data transfer rate of more than per second, and 65 kilobytes per second in Commodore 128 fast mode.

History

The original design of both the technically complicated hardware interface and the disk operating system came from Lloyd Sponenburgh and Roy Southwick of Fiscal Information, Inc., a now-defunct Florida-based turnkey vendor of minicomputer-based medical information systems. Fiscal demonstrated a working prototype in 1984 and started advertising the system for sale early in 1985. It immediately found a niche with some Commodore software developers and bulletin board sysops. It was released over the years in capacities of 10 megabytes to 330 megabytes.

The subsequent development of a multiplexing accessory allows one Lt. Kernal to be shared by up to 16 computers, using a round-robin scheduling algorithm (see the sketch at the end of this section). This made the use of the Lt. Kernal with multiple-line BBSs practical. Later, streaming tape support, using QIC-02 tape cartridges, was added to provide a premium backup strategy.

Fiscal built the units to order until late 1986, at which time the decision was made to turn over production, marketing, and customer support to Xetec Inc. Fiscal continued to provide secondary technical support, as well as free DOS upgrades, until December 1991, at which time production of new Lt. Kernal systems ceased. Following the shutdown of Xetec in 1995, private support of the Lt. Kernal was carried on for several years by Ron Fick until his death in 1999.
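The multiplexer's round-robin sharing can be pictured with a short sketch. Nothing below is the Lt. Kernal's actual firmware, which was never published in this form; it is only a generic illustration of round-robin arbitration among up to 16 hosts: each host in turn is checked for a pending request, is granted the drive for one transaction, and the turn passes on.

    /* roundrobin.c - a generic illustration of round-robin arbitration,
     * in the spirit of the Lt. Kernal multiplexer (not its firmware).
     * Up to 16 hosts take turns; each pending request gets one turn. */
    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_HOSTS 16

    static bool request_pending[MAX_HOSTS]; /* set when a host wants the drive */

    /* Stand-in for performing one disk transaction on behalf of a host. */
    static void service_host(int host)
    {
        printf("servicing host %d\n", host);
        request_pending[host] = false;
    }

    int main(void)
    {
        /* Simulate a few hosts raising requests. */
        request_pending[2] = request_pending[5] = request_pending[11] = true;

        int turn = 0;
        for (int cycles = 0; cycles < 2 * MAX_HOSTS; cycles++) {
            if (request_pending[turn])
                service_host(turn);        /* grant the drive for one turn */
            turn = (turn + 1) % MAX_HOSTS; /* pass the turn to the next host */
        }
        return 0;
    }

The fairness property that made multi-line BBS use practical falls out of the rotation: no host can monopolize the drive, because it must yield after each transaction.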
Overview

The Lt. Kernal uses a 5" hard disk drive with a capacity of 10 MB, and later up to 330 MB. The hard drive uses MFM encoding to record data and an ST-506 interface to an OMTI 5300 intelligent SASI controller. This controller board presents a SASI (early SCSI) interface externally, connected by a cable with DB-25 connectors at both ends to the host adapter. The host adapter handles the SASI signals and protocol and plugs directly into the 44-pin ROM cartridge expansion slot, as a physical edge connector that mates the board to the system bus of the host computer.

A key feature of the Lt. Kernal is its sophisticated disk operating system, which behaves much like that of the Point 4 minicomputers that Fiscal was reselling in the 1980s. A high degree of control over the Lt. Kernal is possible with simple typed commands, many of which had never been seen before in the 8-bit Commodore environment. It features a keyed random-access filing system.

Reception

The Lt. Kernal was favorably and comprehensively reviewed in The Transactor, which praised the drive's speed, storage capacity, and ease of use. Some criticism was levied at the product's incomplete documentation, its drain on the resources of the host computer (particularly with the Commodore 64, whose limited memory requires frequent paging of the DOS), and the lack of an automated backup utility. The review noted the drive's particular suitability for professional programmers, business users, and BBS sysops.

See also

Commodore with IEEE-488 Commodore bus

References

External links

Lt. Kernal Data Archive

Commodore 64 Hard disk computer storage CBM storage devices
Operating System (OS)
999