Macintosh File System Macintosh File System (MFS) is a volume format (or disk file system) created by Apple Computer for storing files on 400K floppy disks. MFS was introduced with the original Apple Macintosh computer in January 1984. MFS is notable both for introducing resource forks to allow storage of structured data, and for storing metadata needed to support the graphical user interface of Mac OS. MFS allows file names to be up to 255 characters in length, although Finder does not allow users to create names longer than 63 characters (31 characters in later versions). MFS is called a flat file system because it does not support a hierarchy of directories. Folders exist as a concept on the original MFS-based Macintosh, but work completely differently from the way they do on modern systems. They are visible in Finder windows, but not in the open and save dialog boxes. There is always one empty folder on the volume, and if it is altered in any way (such as by adding or renaming files), a new Empty Folder appears, thus providing a way to create new folders. MFS stores all of the file and directory listing information in a single file. The Finder creates the illusion of folders by storing each file as a pair of a folder (directory) handle and a file handle. To display the contents of a particular folder, MFS scans the directory for all files carrying that folder's handle; there is no need to consult a separate file containing the directory listing. The Macintosh File System does not support volumes over 20 MB in size, or about 1,400 files. While this is small by today's standards, at the time it seemed very spacious when compared to the Macintosh's 400 KB floppy drive. Apple introduced the Hierarchical File System as a replacement for MFS in September 1985. In Mac OS 7.6.1, Apple removed support for writing to MFS volumes, and in Mac OS 8.0 support for MFS volumes was removed altogether. Although macOS has no built-in support for MFS, an example VFS plug-in from Apple called MFSLives provides read-only access to MFS volumes. See also: Comparison of file systems. External links: Apple Tech Article 9502 - MFS volume support in Mac OS 7.x; MacTech Volume 1, Issue 5: Disks - organization of the standard Macintosh disk (April 1985); Resurrecting MFS Macintosh Floppies; Fred's Follies - HFS used in Macs with 128K ROMs differs from MFS used in Macs with 64K ROMs; Q&A: Mac Plus - limitation due to MFS on an external 400K floppy drive disk; MFSLives - VFS read-only plug-in for MFS in macOS.
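The flat-directory design lends itself to a short illustration. The following Python sketch is purely illustrative (the record layout, field names and handle values are invented for this example and are not the actual on-disk MFS structures), but it shows how a single flat file list plus a per-file folder handle can stand in for a directory hierarchy:

    # One flat directory for the whole volume: every file record carries the
    # handle of the folder the Finder displays it in; no nesting exists on disk.
    FILE_DIRECTORY = [
        {"name": "MacWrite", "folder_handle": 1},
        {"name": "MacPaint", "folder_handle": 1},
        {"name": "ReadMe",   "folder_handle": 2},
    ]

    def list_folder(handle):
        # "Opening a folder" is a linear scan of the single flat directory.
        return [f["name"] for f in FILE_DIRECTORY if f["folder_handle"] == handle]

    print(list_folder(1))  # ['MacWrite', 'MacPaint']

At MFS's scale of at most roughly 1,400 files per volume, this linear scan is cheap, which is part of why the flat design was workable.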
Operating System (OS)
600
Windows-1252 Windows-1252 or CP-1252 (code page 1252) is a single-byte character encoding of the Latin alphabet, used by default in the legacy components of Microsoft Windows for English and many European languages including Spanish, French, and German. It is the most-used single-byte character encoding in the world (on websites at least). As of recent surveys, 0.3% of all websites declared use of Windows-1252, while 1.1% declared ISO 8859-1 (used by only 5 of the top 1,000 websites), which under the HTML5 standard is to be treated as the same encoding, so that 1.4% of websites effectively use Windows-1252. Pages declared as US-ASCII would also count as this character set. An unknown (but probably large) subset of other pages use only the ASCII portion of UTF-8, or only the codes matching Windows-1252 from their declared character set, and could also be counted. Depending on the country, use can be much higher than the global average; in Germany, for example, website use (including ISO-8859-1) stands at 5.1%. Details This character encoding is a superset of ISO 8859-1 in terms of printable characters, but differs from the IANA's ISO-8859-1 by using displayable characters rather than control characters in the 80 to 9F (hex) range. Notable additional characters include curly quotation marks and all the printable characters that are in ISO 8859-15 (albeit at different code points than in ISO 8859-15). It is known to Windows by the code page number 1252, and by the IANA-approved name "windows-1252". It is very common to mislabel Windows-1252 text with the charset label ISO-8859-1. A common result was that all the quotes and apostrophes (produced by "smart quotes" in word-processing software) were replaced with question marks or boxes on non-Windows operating systems, making text difficult to read. Most modern web browsers and e-mail clients treat the media type charset ISO-8859-1 as Windows-1252 to accommodate such mislabeling. This is now standard behavior in the HTML5 specification, which requires that documents advertised as ISO-8859-1 actually be parsed with the Windows-1252 encoding. Historically, the phrase "ANSI Code Page" was used in Windows to refer to non-DOS encodings; the intention was that most of these would be ANSI standards such as ISO-8859-1. Even though Windows-1252 was the first and by far most popular code page named so in Microsoft Windows parlance, the code page has never been an ANSI standard. Microsoft explains, "The term ANSI as used to signify Windows code pages is a historical reference, but is nowadays a misnomer that continues to persist in the Windows community." In LaTeX packages, CP-1252 is referred to as "ansinew". IBM uses code page 1252 (CCSID 1252 and euro sign extended CCSID 5348) for Windows-1252. It is called "WE8MSWIN1252" by Oracle. Character set The following table shows Windows-1252. Each character is shown with its Unicode equivalent based on the Unicode.org mapping of Windows-1252 with "best fit". According to the information on Microsoft's and the Unicode Consortium's websites, positions 81, 8D, 8F, 90, and 9D are unused; however, the Windows API MultiByteToWideChar maps these to the corresponding C1 control codes. The "best fit" mapping documents this behavior, too. History The first version of code page 1252, used in Microsoft Windows 1.0, did not have positions D7 and F7 defined, and all the characters in the range 80–9F were undefined too. In the second version, used in Microsoft Windows 2.0, positions D7, F7, 91, and 92 were defined.
The third version, used since Microsoft Windows 3.1, had all the present-day positions defined, except the euro sign and the Z-with-caron character pair. The final version listed above debuted in Microsoft Windows 98 and was ported to older versions of Windows with the euro symbol update. OS/2 extensions The OS/2 operating system supports an encoding by the name of Code page 1004 (CCSID 1004) or "Windows Extended". This mostly matches code page 1252, with the exception of certain C0 control characters being replaced by diacritic characters. MS-DOS extensions There is a rarely used but useful graphics-extended code page 1252 in which codes 0x00 to 0x1F allow for box drawing, as used in applications such as MS-DOS Edit and CodeView. One of the applications to use this code page was an Intel Corporation Install/Recovery disk image utility from mid/late 1995. These programs were written for Intel's P6 User Test Program machines. The code page was used exclusively in Intel's then EMEA region (Europe, Middle East & Africa). In time the programs were changed to use code page 850. Palm OS variant A variant of Windows-1252 is used by Palm OS 3.5; Python supports it under the codec label "palmos". See also: Western Latin character sets (computing); Windows-1250. External links: Microsoft's code charts for Windows-1252 ("Code Page 1252 Windows Latin 1 (ANSI)"); Unicode mapping table and code page definition with best fit mappings for Windows-1252.
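The practical effect of the ISO-8859-1/Windows-1252 mix-up is easy to reproduce. Below is a minimal Python sketch (the sample string is invented; "cp1252" and "latin-1" are Python's standard codec names for Windows-1252 and ISO-8859-1):

    # Bytes as produced by word-processor "smart quotes" under Windows-1252.
    data = "\u201csmart quotes\u201d".encode("cp1252")  # b'\x93smart quotes\x94'

    print(data.decode("cp1252"))         # curly-quoted text, as intended
    print(repr(data.decode("latin-1")))  # '\x93smart quotes\x94': C1 controls

Under ISO-8859-1 the bytes 0x93 and 0x94 map to unprintable C1 control characters, which is why mislabeled text renders as boxes or question marks; HTML5's rule of parsing declared ISO-8859-1 as Windows-1252 sidesteps exactly this failure.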
Operating System (OS)
601
Multi-user software Multi-user software is computer software that allows access by multiple users of a computer. Time-sharing systems are multi-user systems. Most batch processing systems for mainframe computers may also be considered "multi-user", to avoid leaving the CPU idle while it waits for I/O operations to complete; however, the term "multitasking" is more common in this context. An example is a Unix or Unix-like system where multiple remote users have access (such as via a serial port or Secure Shell) to the Unix shell prompt at the same time. Another example uses multiple X Window sessions spread across multiple terminals powered by a single machine; this is an example of the use of thin clients. Similar functions were also available in a variety of non-Unix-like operating systems, such as Multics, VM/CMS, OpenVMS, MP/M, Concurrent CP/M, Concurrent DOS, FlexOS, Multiuser DOS, REAL/32, OASIS, THEOS, PC-MOS, TSX-32 and VM/386. Some multi-user operating systems, such as Windows versions from the Windows NT family, support simultaneous access by multiple users (for example, via Remote Desktop Connection) as well as the ability for a user to disconnect from a local session while leaving processes running (doing work on their behalf) while another user logs into and uses the system. The operating system provides isolation of each user's processes from other users, while enabling them to execute concurrently. Management systems are implicitly designed to be used by multiple users, typically one or more system administrators and an end-user community. The complementary term, single-user, is most commonly used when talking about an operating system being usable only by one person at a time, or in reference to a single-user software license agreement. Multi-user operating systems such as Unix sometimes have a single-user mode or runlevel available for emergency maintenance. Examples of single-user operating systems include MS-DOS, OS/2 and Classic Mac OS. See also: AT Multiuser System; Multiseat; Multiuser DOS Federation (MDOS). External links: Interix in a Multi-User Windows TSE Environment - paper about the Unix multi-user model and MS Windows NT TSE.
Operating System (OS)
602
UNIVAC EXEC I EXEC I is a discontinued operating system, UNIVAC's original OS developed for the UNIVAC 1107 in 1962. EXEC I is a batch processing operating system that supports multiprogramming. See also: UNIVAC EXEC II; List of UNIVAC products; History of computing hardware. External links: EXEC 1.
Operating System (OS)
603
One-person operation One-person operation (OPO), also known as driver-only operation (DOO), one-man operation (OMO), single person train operation (SPTO), or one-person train operation (OPTO), similarly to driver controlled operation, is operation of a train, bus, or tram by the driver alone, without a conductor. On one-person operated passenger trains, the engineer must be able to see the whole train to make sure that all the doors are safe for departure. On curved platforms a CCTV system, mirror or station dispatch staff are required. Although extra infrastructure such as cameras and mirrors might require additional investment, one-person operation is usually faster and cheaper to implement than automatic train operation, requiring a smaller investment in, for example, platform intruder detection systems and track protection (fencing, bridge-caging, CCTV etc.). In some cases, one-person operation can be seen as an intermediate step towards automatic train operation. While European freight trains are normally one-person operated, the larger North American freight trains are almost exclusively crewed by a conductor as well as the engineer. While one-person operation is popular and on the rise among train operating companies, as it reduces the number of crew required and correspondingly reduces costs, it is for that reason controversial and is often strongly opposed by trade unions, which frequently claim that it is an unsafe practice. Passenger trains History One of the first examples of a public transport vehicle that was developed specifically for one-person operation is the Birney streetcar, introduced in the United States in 1916. The Birney was pre-equipped with one of the most important safety devices for enabling one-person operation – the dead man's switch. At the time (and to a certain extent also today) one of the most cited arguments against one-person operation was the safety risk to passengers and bystanders if the operator fell ill. The dead man's switch ensured that the tram would stop in the event of an incapacitated driver. For this reason, the Birneys were also called "safety cars". Another critical feature of the Birney in dealing with safety issues raised by critics of one-person operation was its compact size, which eased the driver's view of the road and reduced the number of doors to a single one. In the US, despite various technological solutions to the safety issues of one-person operation, there was consistent resistance towards it among the drivers and conductors of the streetcars. Wherever the workforce was well organized in unions – which was the case in around half of all cities with streetcar companies – any proposal of one-person operation would generally be challenged, regardless of whether the streetcar company was in serious financial difficulties. In many cities, it took a municipal ordinance to authorize one-person operation, thus also politicizing the subject. The end result of all this was typically strikes and other industrial action whenever one-person operation was implemented. While the Birney was one of the first public transport vehicles designed for one-person operation, it was not the first public transport vehicle to be equipped with a dead man's switch. In 1903, the Metropolitan District Railway equipped two of its A Stock trains with a dead man's switch.
The switch was introduced so that one person could operate in the driving cab on their own, which became standard for all train companies operating the London Underground in 1908. Even though this did not make the trains one-person operated – seeing as the trains were still operated with a guard – it was one of the first steps towards it. Besides the dead man's switch, the electrification and dieselisation of railways also helped reduce the required staff in the locomotive to a sole operator, as diesel and electric traction does not require a fireman to shovel coal into a boiler. On the London Underground, the use of multiple units ended the need for a second crew member in the driving cab to assist with coupling at the terminal train station. Australia Adelaide Adelaide Metro's metropolitan rail network is configured for driver-only operation, but trains also operate with Passenger Service Assistants (PSAs). This is a safety role, but with a focus on customer service and revenue protection. Normally the train driver operates the doors, but PSAs are also able to. The Ghan, the Indian Pacific and The Overland all feature Train Managers who perform a similar role, as did the Great Southern. Pacific National trains between Adelaide and Port Augusta are occasionally driver-only operated. Melbourne The Melbourne suburban railway network (currently operated by train operating company Metro Trains Melbourne) began one-person operation in 1993, as part of a wider reform of public transport by the newly elected Kennett government. By 22 November 1995, all suburban trains were one-person operated. Perth The entire Transperth network is driver-only operated. Conversion to DOO began in the early 1990s when the then-new A-series trains were introduced. Pacific National trains between Kewdale and West Merriden are occasionally driver-only operated. Canada Toronto subway The Toronto Transit Commission operates a mix of one-person train operation and two-person operation. Since its opening in 1985, the light-metro Scarborough RT line has been operated with a single operator, while the heavy-rail Yonge-University-Spadina and Bloor-Danforth lines have always operated with two-person crews of a train operator and guard (conductor). The guard is responsible for operating the doors, as well as observing the platform. On 9 October 2016, OPTO was implemented on the heavy-rail Sheppard line, which uses four-car sets of Bombardier Toronto Rockets. According to a 2016 presentation, OPTO is "one of the TTC's key modernization efforts" as a cost-saving measure. The Toronto Rocket trains were altered to include a train door monitor system that uses cameras to display a clear view of the train doors while maintaining unobstructed views of the track and signals. The cameras replace the role of the train guard who used to observe the platform for safety. However, this system was not adequate to keep passengers safe, as there was a 50% increase in dangerous "red light violations" (train operators not stopping for stop signals) after OPTO implementation on the Sheppard subway, due in part to the sole train operator having to both monitor the cameras and simultaneously operate the vehicle. It was expected that the Yonge-University-Spadina line would have OPTO implemented in 2019, with the Bloor-Danforth line to follow in 2021.
However, due to delays in implementing automatic train control (ATC), which allows trains to be run entirely by computers to remove the need for the guard, this date was pushed back to the end of 2021 for the Yonge-University-Spadina line and pushed back indefinitely for the Bloor-Danforth line. In 2020, a Mainstreet Research survey of Torontonians revealed that the public strongly opposed OPTO on the Yonge-University-Spadina subway. More than 6 in 10 respondents disapproved of OPTO, and three-quarters disapproved of the TTC's decision not to inform the public of the plan to implement OPTO. In 2021, a Corbett Communications survey of Torontonians produced similar results: 7 in 10 respondents disapproved of OPTO, and 7 in 10 disapproved of the fact that the TTC decided not to offer public consultation on the issue. This survey also revealed that 6 in 10 Torontonians would feel unsafe riding in a train operated by just one staff member. The TTC's future light rail lines will use one-person operation in conjunction with an ATC system. Greater Toronto Area GO Transit in Ontario operates with a conductor and engineer in the cab, as well as a conductor called a "Customer Service Ambassador" located within the train who is responsible for controlling the doors and making announcements. Via Rail Via Rail operates with two locomotive engineers and several on-board staff. Montreal Metro The Montreal Metro operates with one-person crews. Light rail All Canadian light rail systems are either DOO or driver controlled operation. Denmark In Denmark, the state-owned railway company DSB started implementing one-person operation on the commuter rail S-train system in 1975. The S-train system has been completely one-person operated since 1978. At the start of 2013 DSB also used one-person operated trains on the two small regional rail lines Svendborgbanen and Aarhus nærbane. As a result of several years of major annual deficits, DSB started implementing one-person operation on several rail lines on 29 June 2013. This led to reductions in staff, followed by widespread protest and some small illegal strikes by train drivers, who accused DSB of using rolling stock which was unsafe for one-person operation. The Danish Railway Union stated in 2011 "that one-person operation wasn't their cup of tea". The lines that were planned to become one-person operated were: Copenhagen-Ringsted, Copenhagen-Kalundborg, Copenhagen-Nykøbing F., Aarhus-Aalborg, Fredericia-Esbjerg and Roskilde-Køge-Ringsted. The one-person operation of the railway line Aarhus-Aalborg was implemented using temporary and very manual safety procedures – much to the dissatisfaction of the train drivers. On 17 July 2013 DSB abandoned these temporary manual safety procedures and resumed operating the Jutlandic regional trains with guards, on the grounds that the safety of its trains was not to be cast in doubt and that this was more important than "whether or not one-man operation was implemented a month or two later than planned". DSB's preparation of the lines' permanent standard procedures for one-person operation did, however, prove more difficult than first anticipated. DSB was only planning to use one-person operation on the local lines north and south of Aalborg – and far from all the way to Aarhus.
DSB also stated that the remaining timeline for implementing one-person operation would be re-evaluated. DSB has pointed to a bureaucratic safety approval system with an independent safety assessor as the main reason for the lack of progress. On 7 June 2013, the Danish Ministry of Transport decided to implement one-person operation on the tendered Coastal Line, which led to the sacking of 50 guards. The one-person operation was set to start from 15 December 2013. Meanwhile, sickness absence among the sacked guards rose to six times the normal levels, resembling "sick-out" strike action. This compelled the train operating company DSB Øresund to offer the sacked guards a "stay healthy bonus" of up to 5000 Danish kroner per month (about US$900 or GB£600). The safety approval of one-person operation on the Coastal Line is part of a joint DSB one-person operation project, which entails that the Coastal Line will not be one-person operated before DSB has managed to obtain safety approval for other lines first. In August 2015 DSB stated that it would re-evaluate whether or not to implement one-person operation on the Coastal Line, and that it did not expect one-person operation to be implemented on the Coastal Line in 2015. The trains operated by Arriva on the rural single-track railways of Jutland have been one-person operated since Arriva won a tender to operate the lines in 2003. The small train company Nordjyske Jernbaner, which operates in the sparsely populated northernmost parts of Denmark, also uses exclusively one-person operated trains. The railway companies Regionstog and Lokalbanen, operating the single-track railways of Zealand, use solely one-person operated trains as well. On all Danish one-person operated passenger trains, ticket inspectors still board the train now and then to perform spot checks. Europe In the EU, train drivers have an EU licence and national certificates according to Directive 2007/59/EC. With ERTMS, the driver has to communicate with the signaller. In the EU, there are also other crew members performing safety-critical tasks. Some of these safety tasks, such as passenger protection and evacuation, might be harmonized, while procedure-related and rolling-stock-dependent tasks, such as door closure, may vary depending on the trains operated by the company. Those safety tasks may include, depending on the country: checking train composition, checks and tests before departure, train departure at any station, the train run, operation in degraded mode, and operation in emergency situations. The other crew members performing safety-critical tasks are regulated at the national level, with regulations which are not fully compliant with the EU legal framework as they restrict business; thus, they should be reviewed by each member nation under the Railway Safety Directive. France Several systems within France are DOO. Marseille The Marseille Metro is entirely operated using driver-only operation. Paris Various Paris Métro lines and all of the Tramways in Île-de-France routes and lines are driver-only operated. Germany The S-Bahn rapid transit systems in Berlin and Hamburg long used platform train dispatchers to ensure all doors were closed and a train could safely start for the next section.
Although there had been a couple of test runs since the 1970s, these mass rapid transit systems were the last train systems in Germany to be converted to one-person operation, as rapid transit requires a guaranteed minimum station dwell time, especially in rush hours. In Hamburg, SAT (self-dispatching by the train driver) was first introduced in 2000, and the last station became unmanned in 2006. On the bigger Berlin S-Bahn network, ZAT (train dispatching by the train driver) was introduced in 2006, though so far only on straight platforms. Since 2014 the Berlin S-Bahn has been introducing a system with an electronic monitor in the driver's cab. A camera on the platform transmits images via wireless LAN to the train, and the train has a connection back to the (existing) loudspeakers on the platform. The system had been tested since 2007, but due to safety concerns its introduction was held off for several years. With its introduction, a platform may be served in one-person operation either by the old ZAT-oU (train dispatching by the train driver without technical support) or the new ZAT-FM (train dispatching by the train driver with a driver's cab monitor). Officials pointed out that one-person operation even lowers the time a train halts at a station – on the busy central lines, the train on one side of the platform often had to wait for the train in the opposite direction on the other side of the platform to be dispatched. Although most of the central lines will be converted to ZAT-FM, about 20 stations in the network will continue to have platform dispatchers. Ireland Most trains operating in Ireland are driver-only operated. Japan In Japan, passenger trains without a conductor are indicated by a green sign, often accompanied by a pre-recorded in-car announcement mentioning that the train is a "one man train". Most buses are also one-person operations. In most cases, a boarding voucher (整理券) is taken when boarding the vehicle, which has a number printed corresponding to the station at which the passenger boarded, since the fare is calculated by distance traveled. When disembarking, passengers pay at a fare collection box at the exit. An increasing number of subways are becoming one-person operated, including the Nagahori Tsurumi-ryokuchi Line, which was designed for one-person operation; the Toei Ōedo Line, which has been one-person operated since its opening in 1991; and the Tokyo Metro Marunouchi Line, which became driver-only operated from 2009. New Zealand By 1997, more than 90 percent of all trains – both passenger and freight – operated by the then main freight and passenger rail operator in New Zealand, Tranz Rail, had only one person in the loco cab. Sweden In general, all passenger trains on railways in Sweden have a driver and at least one conductor on board by rule, even though it is not entirely mandatory. Around two daily departures on the Swedish part of the Oresundtrain system operated by Veolia Transport are one-person operated. This practice is however only utilized when there is an abrupt shortage of train managers. In 2013 the company's health and safety representative – who in Sweden is a train driver appointed by a trade union – deemed it an unsafe practice and demanded that it be stopped. An important safety check, done mainly by the conductor, is verifying that all doors close without a passenger caught in any of them.
This is hard to check on long trains, and long trains usually have at least two conductors. Trams and metro trains, however, are in general one-person operated, as are freight trains. Spain The Barcelona Metro, Bilbao Metro, and Madrid Metro systems are all driver-only operated. United Kingdom On the British railway network, around 30% of all passenger services are single-crewed or 'driver-only operated' (DOO). The remaining 70% employ approximately 6,800 guards. The term 'guard' is the common name used for the role which in most countries is referred to as a 'conductor'; it is also the name used in the railway's rule book. Many train companies use alternative names for the role (conductor, senior conductor, train manager), but the role is mostly the same regardless of operator. On the UK light railways and tramways, conductors have all but disappeared in an operational sense, and the term 'conductor' is now commonly used for revenue and customer service staff. Historically, 'operational' conductors were the norm on all systems, including the London Underground (which used the term 'guard' like the mainline railway). With the exception of the Blackpool system, London Underground and Glasgow Subway, all current UK light rail systems are of modern construction and were built as 'new' for one-person operation. British buses also once had operational conductors on most services; most buses were front-engined, meaning the passenger saloon door had to be behind the driver's cab. The last buses to have a conductor are in London on the remaining AEC Routemaster double-deck buses; otherwise all UK buses are one-person operated. London All trains on the London Underground are single-manned. Conversion to one-person operation began in 1984 and was completed in 2000. TfL now operates 100% of its London Overground network with driver-only trains. The latest conversion was announced in July 2013 on the Gospel Oak to Barking Line. The National Union of Rail, Maritime and Transport Workers (RMT) challenged the move, claiming passenger safety would be compromised. Transport for London replied that at the time the East London Line, already one-person operated, had one door-related incident for every seven million passengers, while the section of the network which still used conductors had one door-related incident for every four million passengers. On 16 August 2013, the RMT called a 48-hour strike over the August bank holiday weekend. According to the RMT, the proposal set forth by Transport for London would imply driver-only operations on the whole of the London Overground network and make 130 guards redundant. London Overground Rail Operations stated in response that they had given "the RMT assurances on employing conductors in alternative customer service roles and offering a generous voluntary redundancy package to those who want it." According to the RMT, the proposals to implement driver-only operations are in response to the 12.5% reduction in Transport for London's funding announced in Chancellor of the Exchequer George Osborne's Comprehensive Spending Review. England and Wales On 21 July 2010, Sir Roy McNulty, chair of the major value-for-money inquiry of the rail industry in the United Kingdom, tabled a scoping report titled Realising the potential of GB rail, commissioned by the Department for Transport (DfT) and the Office of Rail Regulation (ORR).
The report recommended that "the default position for all services on the GB rail network should be DOO (driver-only operation), with a second member of train-crew only being provided where there is a commercial, technical or other imperative", in order to reach the overall industry goal of a "30% unit cost reduction" by around 2018. The RMT stated that "any proposed extensions of DOO would be fought by the union on grounds of safety and efficiency". The British government has proposed the extension of driver-only trains as a part of the new Northern franchise and has left it optional to the new operators of the Trans Pennine franchise. Additionally, it has been proposed for the new Hitachi Super Express trains which will be in use on the East Coast and Great Western franchises. In April 2016, drivers belonging to the ASLEF trade union refused to pick up passengers using DOO on the new Class 387 trains on the Gatwick Express route. This is the system currently used for the 10-car Class 442 trains used on Gatwick Express, but the union claimed that extending this to 12-car trains put too much pressure on the driver and was unsafe. The operator Govia Thameslink Railway took legal action, and the union ultimately dropped the claim. In the summer of 2016, guards working for Southern and belonging to the RMT trade union went on strike over plans to introduce DOO on more Southern services. Scotland DCO was first implemented in the 1980s, and currently more than 56% of ScotRail's trains are one-person operated, mainly electric services around the Glasgow and Strathclyde regions. When First ScotRail launched a plan to implement one-person operations on the newly opened Airdrie-Bathgate Rail Link in 2010, the National Union of Rail, Maritime and Transport Workers (RMT) staged several strikes, claiming that the system was unsafe. ScotRail replied that it had been using one-person operated trains since the 1980s, and that the Class 334 trains planned for the Airdrie-Bathgate line had not even been delivered with a conductor's door panel. The strikes were ultimately ended by the unions, in part because of disagreements within the RMT about which stance to take on one-person operations. Other sources point to a "strike breaker" clause in ScotRail's contract, which enabled ScotRail to draw compensation from Scottish taxpayers during a strike, as another factor in the union's ending of the strikes. Even though the trains are now driven without a guard, a ticket inspector is still present on every train, although the ticket inspectors are paid less than guards. The RMT union called strike action in summer 2016 when new franchisee Abellio ScotRail announced plans to extend driver-only operation. The dispute was resolved when ScotRail agreed that a conductor would be kept on all new trains, with the driver opening the doors and the conductor shutting them. Current driver-only / one-person operations Metro systems London Underground – Has operated entirely driver-only or one-person operated services since 2000. Certain Underground trains (on the Jubilee, Central, Victoria and Northern lines) are driven automatically with a 'train operator' to carry out other duties such as door operation. Glasgow Subway – Has operated an entirely driver-only operated service since its modernisation in 1977–1980. Tyne and Wear Metro – Has been entirely driver-only operated since it opened in 1980.
Trams – Most tram systems in the UK are one-person operated, including London Tramlink, West Midlands Metro, Nottingham Express Transit, Manchester Metrolink and Blackpool Tramway. Sheffield Supertram and Edinburgh Trams are DCO, as they employ conductors who do not assist with the doors or any operational roles. Many systems employ some form of ticket examiner for revenue and customer service reasons. Docklands Light Railway – The London Docklands system operates automatically, occasionally with a member of staff who carries out much of the role of a conductor but also has the ability to take manual control of the train. Bus – Nearly all bus services in the United Kingdom are one-person operated, including long-distance coach services such as National Express. Part of Route 15 in London uses conductors, and on several other bus routes in London a customer assistant is provided for much of the day. National Rail Abellio Greater Anglia – All trains operating out of Liverpool Street are driver-only operated as far north as Colchester, with the exception of Norwich services, which are DCO. Formerly, trains operated with 'locomotive-hauled' rolling stock required the presence of a guard/conductor. c2c – Operates an entirely driver-only operated train service. Chiltern Railways – Services operating to Aylesbury Vale Parkway and south of Banbury are driver-only operated, with the exception of 'locomotive-hauled' services. Great Western Railway – Most 'Networker' Class 165 and 166 and 'Electrostar' British Rail Class 387 operated services are driver-only trains, operating mostly in the Thames Valley from London Paddington to Bedwyn, Banbury and Oxford. For operational reasons, 'Networker' services to Basingstoke, Gatwick Airport and services west of Oxford towards Worcester via the Cotswolds are operated with a guard/conductor. Govia Thameslink Railway – Operates an entirely driver-only operated train service on the Thameslink and Great Northern sub-brands. Heathrow Express – Operates an entirely DCO service; a 'customer service representative' is provided on board for revenue and customer service duties. London Overground – Has operated an entirely driver-only operated service since July 2013. ScotRail – Most electric train services in the Strathclyde area are either driver-only or driver controlled operation, as some services maintain a ticket examiner for revenue and customer service duties. Southeastern – Operates a large network of driver-only trains on its Metro services, mainly around South East London. The HS1 services are DCO as they have an 'on board manager' for mainly revenue and customer service duties. Mainline services from Kent are also DCO when operating within London. Southern – Operates driver-only trains in South London and on the Brighton mainline. After disputes with both the RMT and ASLEF unions over the extension of driver-only trains across the rest of the network, ASLEF drivers accepted a deal on 8 November 2017, resulting in the implementation of driver controlled operation throughout the rest of the network which was not previously operating DOO services, excluding routes to Milton Keynes Central, Uckfield and Ashford International to Eastbourne. Safety The UK rail safety regulator, the Rail Safety and Standards Board (RSSB), has stated that its research found no increased risks from driver-only operation.
In December 2016, the overall rail regulator, the Office of Rail and Road (ORR), responded by letter to the Transport Select Committee's enquiry into rail safety, and an ORR spokesman commented on the findings in a related press release. The RMT union disputes the independence of the RSSB due to the involvement of train operating company representatives on the RSSB board, and says that both RSSB and ORR are disregarding wider safety issues of one-person working beyond the operation of the doors. United States Atlanta The MARTA rail system and the Atlanta Streetcar are both DOO systems. Baltimore All light rail lines, and the one subway line, operate under single-person operation. Boston On the Boston subway, also referred to as "The T", all three heavy-rail subway lines became completely one-person operated at the end of March 2012. This marked the end of the gradual implementation of one-person operations that started in 1996 with parts of the subway's shortest line, the Blue Line, continued with the Orange Line in 2010 and ended with the longest line, the Red Line, in 2012. The Green Line is also DOO, but uses one crew member per car; a typical train has two cars and thus requires two crew members. According to Massachusetts Bay Transportation Authority spokesperson Joe Pesaturo, the Carmen's Union "has never embraced" one-person operation. Bay Area The Bay Area Rapid Transit is entirely DOO. Chicago In Chicago, the city's main rapid transit system – the L – has been using one-person operation on the Yellow Line since its opening in 1964. On 31 October 1993, the Orange Line began operating DOO trains as well, and this gradually spread to the entire network. As of 1998, the whole system runs with only a single crew member per train. Cleveland Cleveland's only heavy rail metro line, the Red Line (RTA Rapid Transit), is operated using DOO. Light rail All US light rail systems are driver-only operated. Los Angeles In Los Angeles, the city's rapid transit system (known as the Metro) has been using one-person operation on all of its transit lines since it began operating in 1991. Miami The Miami-Dade County Metrorail is DOO operated. New York City In the New York City area, most subway trains over a certain length are operated by a two-person crew of a motorman and a conductor. On September 1, 1997, OPTO began on the 42nd Street Shuttle, Franklin Avenue Shuttle and Rockaway Park Shuttle during all times, and on the B-West End Shuttle and 5-Dyre Avenue Shuttle during late nights. The following New York City Subway services and rolling stock are used for one-person operation: Full-time one-person operation Franklin Avenue Shuttle (R68) Rockaway Park Shuttle (R46) Part-time one-person operation 5 train during late nights (R142) A train on the Ozone Park–Lefferts Boulevard branch during late nights (R46) G weekends (R46, R68, R68A) M weekends (R160) Philadelphia area The PATCO Speedline has been one-person (OPTO) operated since its opening in 1969, and SEPTA's Broad Street Subway, Market-Frankford Subway-Elevated, Media & Sharon Hill light rail and Norristown High-Speed Lines are all OPTO as well. Washington, D.C. The Washington Metro has always operated under the "one man rule" since the opening of the first line in 1976, as has the DC Streetcar. Freight trains Canada Most freight railways in Canada do not allow one-person train operation for safety reasons. The Montreal, Maine and Atlantic Railway and Quebec North Shore and Labrador Railway are the only two railways in Canada approved by Transport Canada to run one-person freight trains.
Following the Lac-Mégantic derailment in July 2013, when a one-person operated Montreal, Maine and Atlantic Railway train was involved in a major and fatal accident, the Canadian government issued an emergency order banning one-person freight trains carrying hazardous cargo. This move was criticised as rash action taken before the cause of the accident had been uncovered. Critics of the emergency order further pointed to a 1997 "Study of One-Person Train Operations" commissioned by Transport Canada, which concluded that it is unlikely that two persons in the cab improve safety. Denmark Danish freight trains are usually one-person operated. Ireland Irish freight trains operated by locomotives of the 201 Class are one-person operated; however, most freight trains hauled by the 071 Class are operated by two engineers. Sweden Swedish freight trains are usually one-person operated. United Kingdom Most British freight trains are one-person or driver-only operated, but certain freight trains do have guards on board for operational or safety reasons (such as DRS nuclear trains). United States According to the Federal Railroad Administration, one-person operated freight trains are "very rare" in the United States because it is hard to comply with federal safety regulations with only one person on the train. In the wake of the Lac-Mégantic derailment in July 2013, Federal Railroad Administrator Joseph C. Szabo demanded that Montreal, Maine and Atlantic Railway start using two-person train crews in the US. The US has, however, not issued a ban on one-person operated freight trains. In July 2013, the 55,000-member Canadian and American Brotherhood of Locomotive Engineers and Trainmen stated that they had been opposed to one-person freight trains for safety reasons since the introduction of the idea approximately a decade earlier. In November 2019, eight U.S. railroads filed a federal lawsuit against the union to allow for the implementation of one-person crews.
Operating System (OS)
604
Microsoft Servers Microsoft Servers (previously called Windows Server System) is a discontinued brand that encompasses Microsoft software products for server computers. This includes the Windows Server editions of the Microsoft Windows operating system, as well as products targeted at the wider business market. Microsoft has since replaced this brand with Microsoft Azure, Microsoft 365 and Windows 365. Servers Operating system The Windows Server family of operating systems consists of Windows operating systems developed and licensed for use on server computers. This family started with Windows Server 2003, for which Microsoft released a major upgrade every four years and a minor upgrade every two years following a major release. This family has branded members too, such as Windows Home Server, Windows HPC Server and Windows MultiPoint Server. Windows components The following products are shipped as Windows components, as opposed to standalone products. Internet Information Services (IIS): Web server, FTP server and basic email server Hyper-V: Bare-metal hypervisor Windows Services for UNIX Windows Server Update Services Productivity Some of the products included in the Windows Server System product branding are designed specifically for interaction with Microsoft Office. These include: BizTalk Server: Business process design and integration tools Exchange Server: E-mail and collaboration server Host Integration Server: Data and management connector between Windows environments and mainframe and midrange platforms such as IBM i; formerly known as Microsoft SNA Server Project Server: Project management and resource allocation services; works as the server component to Microsoft Project SharePoint Server: Produces sites intended for collaboration, file sharing, web databases, social networking and web publishing Skype for Business Server: Instant messaging and presence server, with integration with telephone PBX systems; integrates with Skype for Business SQL Server: Relational database management and business intelligence server Security Exchange Online Protection Identity Integration Server: Identity management product Microsoft System Center Microsoft System Center, a set of server products, aims specifically at helping corporate system administrators manage a network of Windows Server and client desktop systems. System Center Advisor: Software-as-a-service offering that helps change or assess the configuration of Microsoft Servers software over the Internet System Center App Controller: Unified management for public and private clouds, including cloud-based virtual machines and services System Center Capacity Planner: Provides purchasing and best-practice capacity planning guidance Microsoft Endpoint Configuration Manager: Configuration management, asset management, patch deployment tools for Windows desktops (previously Systems Management Server); includes Software Center System Center Data Protection Manager: Continuous data protection and data recovery System Center Endpoint Protection: Anti-malware and security tools for Microsoft products System Center Essentials: Combined features of Operations Manager and Windows Software Update Services (WSUS), aimed at small and medium-sized businesses System Center Orchestrator (formerly Opalis): An automation platform for orchestrating and integrating administrative tools to decrease the cost of datacenter operations while improving the reliability of IT processes.
It enables organizations to automate best practices, such as those found in Microsoft Operations Framework (MOF) and ITIL. Orchestrator operates through workflow processes that coordinate System Center and other management tools to automate incident response, change and compliance, and service-lifecycle management processes. System Center Operations Manager: Services and application monitoring System Center Service Manager: Ties in with SCOM and SCCM for asset tracking as well as incident, problem, change and configuration management (code name: Service Desk) System Center Virtual Machine Manager: Virtual-machine management and datacenter virtualization Discontinued server products Microsoft Application Center: Deployment of web applications across multiple servers; some of its capabilities are now in System Center Commerce Server: E-commerce portal Site Server (replaced by Commerce Server) Merchant Server (replaced by Microsoft Site Server) Content Management Server: Web site content management and publishing; merged into Microsoft SharePoint Server Forefront: Comprehensive line of business security products Threat Management Gateway: Firewall, routing, VPN and web caching server, formerly known as Microsoft ISA Server or Microsoft Proxy Server in its earlier iterations Microsoft Proxy Server (replaced by Forefront Threat Management Gateway) Protection for Exchange Server Protection for SharePoint Server Unified Access Gateway Identity Manager Forms Server: Server-based electronic forms Groove Server: Collaboration server; works in conjunction with Microsoft SharePoint Workspace PerformancePoint Server: Business performance management server Project Portfolio Server Search Server Microsoft SNA Server (replaced by Host Integration Server) Speech Server: Speech applications for automated telephone systems, including voice recognition Virtual Server: Platform virtualization of operating systems External links: Microsoft System Center web site; System Center technical library; Extending the Power of Microsoft System Center to Heterogeneous Environments web site.
Operating System (OS)
605
OCR Systems OCR Systems, Inc., was an American computer hardware manufacturer and software publisher dedicated to optical character recognition technologies. The company's first product, the System 1000, introduced in 1970, was used by numerous large corporations for bill processing and mail sorting. Following a series of setbacks in the 1970s and early 1980s, founder Theodore Herzl Levine put the company in the hands of Gregory Boleslavsky and Vadim Brikman, the company's vice presidents and recent immigrants from the Soviet Ukraine, who were able to turn OCR Systems' fortunes around and expand its employee base. The company released the software-based OCR application ReadRight for DOS, later ported to Windows, in the late 1980s. Adobe Inc. bought the company in 1992. History OCR Systems was co-founded by Theodore Herzl Levine (1923 – May 30, 2005). Levine served in the U.S. Army Signal Corps during World War II in the Solomon Islands, where he helped develop a sonar to find ejected pilots in the ocean. After the war, Levine spent 22 years at the University of Pennsylvania, earning his bachelor's degree in 1951, his master's degree in electrical engineering in 1957, and his doctorate in 1968. Alongside his studies, Levine taught statistics and calculus at Temple University, Rutgers University, La Salle University and Penn State Abington. Sometime in the 1960s, Levine was hired at Philco. He and two of his co-workers decided to form their own company dedicated to optical character recognition, founding OCR Systems in 1969 in Bensalem, Pennsylvania. OCR Systems' first product, the System 1000, was announced in 1970. OCR Systems entered a partnership with 3M to resell the System 1000 throughout the United States in March 1973. This was 3M's entry into the data entry field, managed by the company's Microfilm Products Division and accompanying 3M's suite of data retrieval systems. It soon found use among Texas Instruments, AT&T, Ricoh, Panasonic and Canon for bill processing and mail sorting. Later in the mid-1970s an unspecified Fortune 500 company reneged on a contract to distribute the System 1000; later still, a Canadian company distributing the System 1000 in Canada went defunct. Both incidents led OCR Systems to go nearly bankrupt, although it eventually recovered. By the early 1980s, however, the company was almost insolvent. In 1983 Levine had only $8,000 in his savings and became bedridden with an illness. He left the company in the hands of Gregory Boleslavsky and Vadim Brikman, two Soviet Ukraine expats whom Levine had hired earlier in the 1980s. Boleslavsky was hired as a wire wrapper for the System 1000 and as a programmer and beta tester for ReadRight—a software package developed by Levine implementing patents from Nonlinear Technology, another OCR-centric company, from Greenbelt, Maryland. Boleslavsky in turn recommended Brikman to Levine. The two soon became vice presidents of the company while Levine was bedridden; in Boleslavsky's case, he worked 14-hour workdays for over half a year in pursuit of the title. The two presented OCR Systems' products at the National Computer Conference in Chicago, where they were massively popular. The company soon gained such clients as Allegheny Energy in Pennsylvania and the postal service of Belgium and received an influx of employees—mostly expats from Russia but also Poland and South Korea, as well as American-born workers.
To accommodate the company's employee base, which had grown to over 30 in 1988, Levine moved OCR Systems' headquarters from Bensalem to the Masons Mill Business Park in Bryn Athyn. Chinon Industries of Japan signed an agreement with OCR Systems in 1987 to distribute OCR's ReadRight 1.0 software with Chinon's scanners, starting with their N-205 overhead scanner. In 1988, OCR opened their agreement to distribute ReadRight to other scanner manufacturers, including Canon, Hewlett-Packard, Skyworld, Taxan, Diamond Flower and Abaton. That year, the company posted revenue of $3 million. OCR Systems extended their agreement with Chinon in 1989 and introduced version 2.0 of ReadRight. OCR Systems faced stiff competition in the software OCR market at the turn of the 1990s. The Toronto-based software firm Delrina signed a letter of intent to purchase the company in November 1991, expecting the deal to close in December and have OCR software available by Christmas. OCR was to receive $3 million worth of Delrina shares in a stock swap, but the deal collapsed in January 1992. Delrina later marketed its own Extended Character Recognition, or XCR, software package to compete with ReadRight. In July 1992, OCR Systems was purchased by Adobe Inc. for an undisclosed sum. Products System 1000 The System 1000 was based on the 16-bit Varian Data 620/i minicomputer with 4 KB of core memory. The system used the 620/i for controlling the paper feed, interpreting the format of the documents, the optical character recognition process itself, error detection, sequencing and output. The system was initially programmed to recognize 1428 OCR (the font used by IBM Selectrics); IBM 407 print; and the full character sets of OCR-A, OCR-B and Farrington 7B; as well as optical marks and handwritten numbers. OCR Systems promised added compatibility with more fonts available down the line—per request—in 1970. The number of fonts supported was limited by the amount of core memory, which was expandable in 4 KB increments up to 32 KB. The System 1000 later supported generalized typewriter and photocopier fonts. The rest of the System 1000 comprised the document transport, one or more scanner elements, a CRT display and a Teletype Model 33 or 35. Pages were fed via friction with a rubber belt. Up to three lines could be scanned per document, while the rest of the scanned document could be laid out in any manner, granted there was enough space around the fields to be read. The reader initially supported pages as small as 3.25 in by 3.5 in (later 2.6 in by 3.5 in utility cash stubs) all the way up to the standard ANSI letter size (8.5 in by 11 in; later 8.5 in by 12 in, as used in stock certificates). The initial System 1000 had a maximum throughput of 420 documents per minute per transport (later 500 documents per minute), contingent on document size and content. A feature unique to the System 1000 over other optical character recognition systems of the time was its ability to alert the operator when a field was unreadable or otherwise invalid. This feature, called Document Referral, placed the document in front of the operator and displayed a blank field on the screen of the included CRT monitor for manual re-entry via keyboard. Once input, data could be output to 7- or 9-track tape, paper tape, punched cards and other mass storage media, or to System/360 mainframes for further processing. The complete System 1000 could be purchased for US$69,000.
Options for renting were $1,800 per month on a three-year lease or $1,600 per month for five years. Computerworld wrote that it was less than half the cost of its competitors while more capable and user-friendly. Competing systems included the Recognition Equipment Retina, the Scan-Optics IC/20 and the Scan-Data 250/350. ReadRight ReadRight processes individual letters topographically: it breaks down the scanned letter into parts—strokes, curves, angles, ascenders and descenders—and follows a tree structure of letters broken down into these parts to determine the corresponding character code. ReadRight was entirely software-based, requiring no expansion card to work. Version 2.01, the last version released for DOS, runs in real mode in under 640 KB of RAM. OCR Systems released the Windows-only version 3.0 in 1991 while offering version 2.01 alongside it. The company unveiled a sister product, ReadRight Personal, dedicated to handheld scanners and for Windows only, in October 1991. This version adds real-time scanning—each word is updated to the screen while lines are being scanned. ReadRight proper was later made a Windows-only product with version 3.1 in 1992. The inclusion of ReadRight 2.0 with Canon's IX-12F flatbed scanner led PC Magazine to award it an Editor's Choice rating in 1989. Despite this, reviewer Robert Kendall found fault with ReadRight's ability to parse proportional typefaces such as Helvetica and Times New Roman. Mitt Jones of the same publication found version 2.01 to have improved its ability to read such typefaces and praised its ease of use and low resource intensiveness. Jones disliked the inability to handle uneven paragraph column widths and graphics, noting that the manual recommended the user block out graphics with a Post-it Note. Version 3.1 for Windows received mixed reviews. Mike Heck of InfoWorld wrote that its "low cost and rich collection of features are hard to ignore" but rated its speed and accuracy average. Barry Simon of PC Magazine called it economical but inaccurate, unable to correct errors it did not detect, and found its spellchecker flawed and its speed lacking compared to Calera's WordScan Plus. Gary Berline of the same publication wrote that "ReadRight produced serviceable accuracy on clean files with simple layouts, but at a less than sprightly pace", finding it unable to process small type and multicolumn text with small margins between columns. The software also regularly interpreted graphical illustrations as text in his experience. In July 1992 OCR Systems announced a follow-up release promising to correct these issues, which never came to fruition on account of Adobe's purchase of the company.
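The topographic approach ReadRight used can be made concrete with a deliberately simplified sketch. In the Python fragment below, the feature names and the tiny decision tree are invented for illustration (this is not OCR Systems' patented classifier); the point is only the idea of walking a tree of structural parts down to a character code:

    def classify(features):
        # Walk a tiny topological decision tree: each branch tests one
        # structural part of the glyph (ascender, descender, closed curve).
        if features["ascender"]:
            return "b" if features["closed_curve"] else "l"
        if features["descender"]:
            return "p" if features["closed_curve"] else "j"
        return "o" if features["closed_curve"] else "c"

    # A glyph with an ascender and a closed bowl classifies as 'b'.
    print(classify({"ascender": True, "descender": False, "closed_curve": True}))

A real topographic recognizer would extract such features from the scanned bitmap and use a far deeper tree, but the control flow is the same: a handful of cheap structural tests stands in for pixel-by-pixel template matching, consistent with ReadRight running entirely in software in under 640 KB of RAM.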
Operating System (OS)
606
Elonex ONE The Elonex ONE (also known as ONE) was a netbook computer marketed to the education sector by Elonex. The ONE's operating system was called Linos, based on Linux kernel 2.6.21, and the device had Wi-Fi connectivity, Ethernet networking, a solid-state hard drive and two USB ports, and weighed less than 1 kg. Elonex described the ONE at the time as the cheapest laptop in the UK, at a retail price of £99. Its official unveiling took place on 28 February 2008 at The Education Show at the NEC in Birmingham, and a shipping date of June 2008 was announced. Customer deliveries started in August 2008. In February 2008, Elonex stated their vision was for "every pupil to have their own laptop" to "improve computer literacy across the nation". Elonex aligned the cost of the ONE with the aims of the DCSF in closing the achievement gap between those from low-income and disadvantaged backgrounds and their peers. To support this initiative Elonex committed to "donate 1 ONE to underprivileged children from disadvantaged areas for every hundred sold". Hardware The hardware specifications were published on 28 February 2008. The "LNX Code 8" is also the CPU used in the Elonex One+, One T, and One T+ ultra-low-cost notebook computers. While Elonex declined to release information on the designer and manufacturer of the processor, Engadget reported that the ONE could be a rebranded Fontastic A-View laptop, which would make the processor an x86-compatible 300 MHz Aday5F.

Processor, main memory:
LNX Code 8 Mobile 300 MHz processor
Dedicated Linux memory: 128 MB DDR-II SDRAM (256 MB in upgraded model)
On-board 1 GB flash memory, optimised for Linux (2 GB in upgraded model)
Removable 1 GB, 2 GB, 4 GB, 8 GB or 16 GB wristvault (sold separately)

Dimensions, casing:
Display: 7 inch (18 cm) high-resolution TFT LCD, 800 x 480 px widescreen
Dimensions: 22 cm x 15 cm x 3 cm (W x L x H)
Weight: 0.95 kg
Interchangeable outer rubberised skin (optional)
Splash-proof, removable QWERTY keyboard

Networking:
Wi-Fi 802.11b/g (wireless, 54 Mbit/s)
Ethernet (wired, 10/100 Mbit/s)
Bluetooth (wireless) in upgraded model

Peripherals, ports:
2 USB 2.0 ports
2 built-in speakers
3.5 mm audio-in/mic
3.5 mm headphones
2 mouse emulators (one on keyboard and one on rear of device, advertised as for tablet use)

Power:
Integrated 3-cell battery, approximately 4 hours usage
Power adapter

Operating system:
Linux (Linos, kernel 2.6.21) operating system with pre-installed software bundle

Similar devices The ONE is similar to the A-View or AW-300 A-BOOK products from Aware, which used an Aday 5F 300 MHz x86 processor. An earlier Aware product, the AW-150, was sold in the US for $199 as the MiTYBOOK. Elonex revealed further notebook models at Computex in June 2008. Elonex introduced the ONE's successor at IFA 2009: a smartbook based on an ARM11 CPU and the Windows CE operating system. References Linux-based devices Subnotebooks
Operating System (OS)
607
Process management (computing) A process is a program in execution, and process management is an integral part of any modern-day operating system (OS). The OS must allocate resources to processes, enable processes to share and exchange information, protect the resources of each process from other processes and enable synchronization among processes. To meet these requirements, the OS must maintain a data structure for each process, which describes the state and resource ownership of that process, and which enables the OS to exert control over each process. Multiprogramming In any modern operating system there can be more than one instance of a program loaded in memory at the same time. For example, more than one user could be executing the same program, each user having separate copies of the program loaded into memory. With some programs, it is possible to have one copy loaded into memory, while several users have shared access to it so that they each can execute the same program-code. Such a program is said to be re-entrant. The processor at any instant can only be executing one instruction from one program, but several processes can be sustained over a period of time by assigning each process to the processor at intervals while the remainder become temporarily inactive. A number of processes being executed over a period of time instead of at the same time is called concurrent execution. A multiprogramming or multitasking OS is a system executing many processes concurrently. Multiprogramming requires that the processor be allocated to each process for a period of time and de-allocated at an appropriate moment. If the processor is de-allocated during the execution of a process, the process's state must be saved in such a way that it can be restarted later as easily as possible. There are two possible ways for an OS to regain control of the processor during a program's execution in order for the OS to perform de-allocation or allocation: The process issues a system call (sometimes called a software interrupt); for example, an I/O request occurs requesting to access a file on hard disk. A hardware interrupt occurs; for example, a key was pressed on the keyboard, or a timer runs out (used in pre-emptive multitasking). The stopping of one process and starting (or restarting) of another process is called a context switch or context change. In many modern operating systems, processes can consist of many sub-processes. This introduces the concept of a thread. A thread may be viewed as a sub-process; that is, a separate, independent sequence of execution within the code of one process. Threads are becoming increasingly important in the design of distributed and client–server systems and in software run on multi-processor systems. How multiprogramming increases efficiency A common trait observed among processes associated with most computer programs is that they alternate between CPU cycles and I/O cycles. For the portion of the time required for CPU cycles, the process is being executed; i.e., it is occupying the CPU. During the time required for I/O cycles, the process is not using the processor. Instead, it is either waiting to perform input/output, or is actually performing input/output. An example of this is reading from or writing to a file on disk. Prior to the advent of multiprogramming, computers operated as single-user systems.
Users of such systems quickly became aware that for much of the time that a computer was allocated to a single user, the processor was idle; when the user was entering information or debugging programs, for example. Computer scientists observed that the overall performance of the machine could be improved by letting a different process use the processor whenever one process was waiting for input/output. In a uni-programming system, if N users were to execute programs with individual execution times of t_1, t_2, ..., t_N, then the total time, t_uni, to service the N processes (consecutively) of all N users would be: t_uni = t_1 + t_2 + ... + t_N. However, because each process consumes both CPU cycles and I/O cycles, the time which each process actually uses the CPU is a very small fraction of the total execution time for the process. So, for process i: t_i(processor) ≪ t_i(execution), where t_i(processor) is the time process i spends using the CPU, and t_i(execution) is the total execution time for the process; i.e. the time for CPU cycles plus I/O cycles to be carried out (executed) until completion of the process. In fact, the sum of the processor time used by all N processes rarely exceeds a small fraction of the time to execute any one of them; therefore, in uni-programming systems, the processor lay idle for a considerable proportion of the time. To overcome this inefficiency, multiprogramming is now implemented in modern operating systems such as Linux, UNIX and Microsoft Windows. This enables the processor to switch from one process, X, to another, Y, whenever X is involved in the I/O phase of its execution. Since the processing time is much less than a single job's runtime, the total time to service all N users with a multiprogramming system can be reduced to approximately: t_multi = max(t_1, t_2, ..., t_N). For example, three jobs needing 10, 5 and 8 minutes of total execution time take 23 minutes served consecutively, but only about 10 minutes under multiprogramming, since each job's CPU bursts can be overlapped with the others' I/O waits. Process creation Operating systems need some ways to create processes. In a very simple system designed for running only a single application (e.g., the controller in a microwave oven), it may be possible to have all the processes that will ever be needed be present when the system comes up. In general-purpose systems, however, some way is needed to create and terminate processes as needed during operation. There are four principal events that cause a process to be created: System initialization. Execution of a process creation system call by a running process. A user request to create a new process. Initiation of a batch job. When an operating system is booted, typically several processes are created. Some of these are foreground processes, which interact with a (human) user and perform work for them. Others are background processes, which are not associated with particular users, but instead have some specific function. For example, one background process may be designed to accept incoming e-mails, sleeping most of the day but suddenly springing to life when an incoming e-mail arrives. Another background process may be designed to accept incoming requests for web pages hosted on the machine, waking up when a request arrives to service that request. Process creation in UNIX and Linux is done through the fork() or clone() system calls, as sketched below. There are several steps involved in process creation. The first step is the validation of whether the parent process has sufficient authorization to create a process. Upon successful validation, the parent process is copied almost entirely, with changes only to the unique process id, parent process, and user-space.
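The fork() call mentioned above is easy to demonstrate. A minimal POSIX C example (a generic illustration, not tied to any particular system discussed here; error handling kept to the essentials):

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();        /* duplicate the calling process */
    if (pid < 0) {             /* creation failed, e.g. process table full */
        perror("fork");
        exit(EXIT_FAILURE);
    }
    if (pid == 0) {            /* child: an almost exact copy of the parent */
        printf("child %d (parent %d)\n", (int)getpid(), (int)getppid());
        _exit(EXIT_SUCCESS);
    }
    waitpid(pid, NULL, 0);     /* parent waits for the child to terminate */
    printf("parent %d reaped child %d\n", (int)getpid(), (int)pid);
    return 0;
}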
Each new process gets its own user space. Process creation in Windows is done through the CreateProcessA() API function. A new process runs in the security context of the calling process, but otherwise runs independently of the calling process. Methods exist to alter the security context in which a new process runs. New processes are assigned identifiers by which they can be accessed. Functions are provided to synchronize calling threads to newly created processes. Process termination There are many reasons for process termination:

Batch job issues halt instruction
User logs off
Process executes a service request to terminate
Error and fault conditions
Normal completion
Time limit exceeded
Memory unavailable
Bounds violation; for example: attempted access of (non-existent) 11th element of a 10-element array
Protection error; for example: attempted write to read-only file
Arithmetic error; for example: attempted division by zero
Time overrun; for example: process waited longer than a specified maximum for an event
I/O failure
Invalid instruction; for example: when a process tries to execute data (text)
Privileged instruction
Data misuse
Operating system intervention; for example: to resolve a deadlock
Parent terminates, so child processes terminate (cascading termination)
Parent request

Two-state process management model The operating system's principal responsibility is controlling the execution of processes. This includes determining the interleaving pattern for execution and the allocation of resources to processes. One part of designing an OS is to describe the behaviour that we would like each process to exhibit. The simplest model is based on the fact that a process is either being executed by a processor or it is not. Thus, a process may be considered to be in one of two states, RUNNING or NOT RUNNING. When the operating system creates a new process, that process is initially labeled as NOT RUNNING, and is placed into a queue in the system in the NOT RUNNING state. The process (or some portion of it) then exists in main memory, and it waits in the queue for an opportunity to be executed. After some period of time, the currently RUNNING process will be interrupted, and moved from the RUNNING state to the NOT RUNNING state, making the processor available for a different process. The dispatcher portion of the OS will then select, from the queue of NOT RUNNING processes, one of the waiting processes to transfer to the processor. The chosen process is then relabeled from the NOT RUNNING state to the RUNNING state, and its execution is either begun, if it is a new process, or resumed, if it is a process which was interrupted at an earlier time. From this model we can identify some design elements of the OS: the need to represent, and keep track of, each process; the state of a process; and the queuing of NOT RUNNING processes. Three-state process management model Although the two-state process management model is a perfectly valid design for an operating system, the absence of a BLOCKED state means that the processor lies idle when the active process changes from CPU cycles to I/O cycles. This design does not make efficient use of the processor. The three-state process management model is designed to overcome this problem, by introducing a new state called the BLOCKED state. This state describes any process which is waiting for an I/O event to take place. In this case, an I/O event can mean the use of some device or a signal from another process.
The three states in this model are:

RUNNING: The process that is currently being executed.
READY: A process that is queuing and prepared to execute when given the opportunity.
BLOCKED: A process that cannot execute until some event occurs, such as the completion of an I/O operation.

At any instant, a process is in one and only one of the three states. For a single-processor computer, only one process can be in the RUNNING state at any one instant. There can be many processes in the READY and BLOCKED states, and each of these states will have an associated queue for processes. Processes entering the system must go initially into the READY state; processes can only enter the RUNNING state via the READY state. Processes normally leave the system from the RUNNING state. For each of the three states, the process occupies space in main memory. While the reason for most transitions from one state to another might be obvious, some may not be so clear. RUNNING → READY The most common reason for this transition is that the running process has reached the maximum allowable time for uninterrupted execution; i.e. a time-out occurs. Other reasons can be the imposition of priority levels as determined by the scheduling policy used for the low-level scheduler, and the arrival of a higher-priority process into the READY state. RUNNING → BLOCKED A process is put into the BLOCKED state if it requests something for which it must wait. A request to the OS is usually in the form of a system call (i.e. a call from the running process to a function that is part of the OS code). For example, requesting a file from disk or saving a section of code or data from memory to a file on disk. Process description and control Each process in the system is represented by a data structure called a Process Control Block (PCB), or Process Descriptor in Linux, which performs the same function as a traveller's passport. The PCB contains the basic information about the job, including: what it is; where it is going; how much of its processing has been completed; where it is stored; and how much it has "spent" in using resources. Process Identification: Each process is uniquely identified by the user's identification and a pointer connecting it to its descriptor. Process Status: This indicates the current status of the process: READY, RUNNING, BLOCKED, READY SUSPEND or BLOCKED SUSPEND. Process State: This contains all of the information needed to indicate the current state of the job. Accounting: This contains information used mainly for billing purposes and for performance measurement. It indicates what kind of resources the process has used and for how long. A C sketch of such a structure is given below. Processor modes Contemporary processors incorporate a mode bit to define the execution capability of a program in the processor. This bit can be set to kernel mode or user mode. Kernel mode is also commonly referred to as supervisor mode, monitor mode or ring 0. In kernel mode, the processor can execute every instruction in its hardware repertoire, whereas in user mode, it can only execute a subset of the instructions. Instructions that can be executed only in kernel mode are called kernel, privileged or protected instructions, to distinguish them from the user mode instructions. For example, I/O instructions are privileged. So, if an application program executes in user mode, it cannot perform its own I/O. Instead, it must request the OS to perform I/O on its behalf.
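The PCB described above maps naturally onto a structure type. The following C sketch is illustrative only; the field set is invented for the example, and real kernels (Linux's task_struct, for instance) carry far more state:

#include <stdint.h>

enum proc_state { READY, RUNNING, BLOCKED, READY_SUSPEND, BLOCKED_SUSPEND };

/* Illustrative process control block. */
struct pcb {
    int              pid;            /* unique process identification */
    int              uid;            /* owning user's identification */
    enum proc_state  state;          /* current scheduling state */
    void            *saved_context;  /* registers, program counter, etc. */
    uint64_t         cpu_time_used;  /* accounting, e.g. for billing */
    struct pcb      *next;           /* link into a READY or BLOCKED queue */
};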
The computer architecture may logically extend the mode bit to define areas of memory to be used when the processor is in kernel mode versus user mode. If the mode bit is set to kernel mode, the process executing in the processor can access either the kernel or the user partition of the memory. However, if user mode is set, the process can reference only the user memory space. We frequently refer to two classes of memory: user space and system space (or kernel, supervisor or protected space). In general, the mode bit extends the operating system's protection rights. The mode bit is set by the user-mode trap instruction, also called a Supervisor Call instruction. This instruction sets the mode bit and branches to a fixed location in the system space. Since only system code is loaded in the system space, only system code can be invoked via a trap. When the OS has completed the supervisor call, it resets the mode bit to user mode prior to the return. The Kernel system concept The parts of the OS critical to its correct operation execute in kernel mode, while other software (such as generic system software) and all application programs execute in user mode. This is usually the clearest distinction between the operating system and other system software. The part of the system executing in kernel supervisor state is called the kernel, or nucleus, of the operating system. The kernel operates as trusted software, meaning that when it was designed and implemented, it was intended to implement protection mechanisms that could not be covertly changed through the actions of untrusted software executing in user space. Extensions to the OS execute in user mode, so the OS does not rely on the correctness of those parts of the system software for correct operation of the OS. Hence, a fundamental design decision for any function to be incorporated into the OS is whether it needs to be implemented in the kernel. If it is implemented in the kernel, it will execute in kernel (supervisor) space, and have access to other parts of the kernel. It will also be trusted software by the other parts of the kernel. If the function is implemented to execute in user mode, it will have no access to kernel data structures. However, the advantage is that it will normally require very limited effort to invoke the function. While kernel-implemented functions may be easy to implement, the trap mechanism and authentication at the time of the call are usually relatively expensive: the kernel code itself runs fast, but there is a large performance overhead in the actual call. This is a subtle, but important point. Requesting system services There are two techniques by which a program executing in user mode can request the kernel's services: a system call, or message passing. Operating systems are designed with one or the other of these two facilities, but not both. First, assume that a user process wishes to invoke a particular target system function. For the system call approach, the user process uses the trap instruction. The idea is that the system call should appear to be an ordinary procedure call to the application program; the OS provides a library of user functions with names corresponding to each actual system call. Each of these stub functions contains a trap to the OS function, as sketched below. When the application program calls the stub, it executes the trap instruction, which switches the CPU to kernel mode, and then branches (indirectly through an OS table) to the entry point of the function which is to be invoked.
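On x86-64 Linux, for example, such a stub reduces to loading a system-call number and arguments into registers and executing the trap instruction. The following C sketch assumes the standard Linux syscall ABI and illustrates the mechanism only; it is not how any particular C library actually packages its stubs:

/* Hand-rolled stub for write(2) on x86-64 Linux. The syscall instruction
   switches the CPU to kernel mode and enters the OS at a fixed point;
   the number in rax indexes the kernel's system-call table. */
static long my_write(int fd, const void *buf, unsigned long len)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(1L),                      /* __NR_write */
                        "D"((long)fd), "S"(buf), "d"(len)
                      : "rcx", "r11", "memory");
    return ret;
}

int main(void)
{
    my_write(1, "hello via a hand-rolled stub\n", 29);
    return 0;
}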
When the invoked function completes, the stub switches the processor back to user mode and then returns control to the user process, thus simulating a normal procedure return. In the message passing approach, the user process constructs a message that describes the desired service. Then it uses a trusted send function to pass the message to a trusted OS process. The send function serves the same purpose as the trap; that is, it carefully checks the message, switches the processor to kernel mode, and then delivers the message to a process that implements the target functions. Meanwhile, the user process waits for the result of the service request with a message receive operation. When the OS process completes the operation, it sends a message back to the user process. The distinction between the two approaches has important consequences regarding the relative independence of the OS behavior from the application process behavior, and the resulting performance. As a rule of thumb, operating systems based on a system call interface can be made more efficient than those requiring messages to be exchanged between distinct processes. This is the case even though the system call must be implemented with a trap instruction; that is, even though the trap is relatively expensive to perform, it is more efficient than the message passing approach, where there are generally higher costs associated with process multiplexing, message formation and message copying. The system call approach has the interesting property that there is not necessarily any OS process. Instead, a process executing in user mode changes to kernel mode when it is executing kernel code, and switches back to user mode when it returns from the OS call. If, on the other hand, the OS is designed as a set of separate processes, it is usually easier to design it so that it gets control of the machine in special situations than if the kernel is simply a collection of functions executed by user processes in kernel mode. Even procedure-based operating systems usually find it necessary to include at least a few system processes (called daemons in UNIX) to handle situations in which the machine is otherwise idle, such as scheduling and handling the network. See also Process isolation References Sources Operating System incorporating Windows and UNIX, Colin Ritchie. Operating Systems, William Stallings, Prentice Hall, (4th Edition, 2000). Multiprogramming, Process Description and Control. Operating Systems – A Modern Perspective, Gary Nutt, Addison Wesley, (2nd Edition, 2001). Process Management Models, Scheduling, UNIX System V Release 4. Modern Operating Systems, Andrew Tanenbaum, Prentice Hall, (2nd Edition, 2001). Operating System Concepts, Silberschatz & Galvin & Gagne (http://codex.cs.yale.edu/avi/os-book/OS9/slide-dir/), John Wiley & Sons, (6th Edition, 2003). Process (computing) Operating system technology
Operating System (OS)
608
Wombat (operating system) In computing, Wombat is a high-performance virtualised Linux operating system for embedded systems, marketed by Open Kernel Labs, a spin-off of National ICT Australia's (now NICTA) Embedded, Real Time, Operating System Program. Wombat is a de-privileged (paravirtualised) Linux running on an L4 and Iguana system, and is optimized for embedded systems. See also L4Linux References External links Wombat: A portable user-mode Linux for embedded systems (presentation slides) Virtualised os: wombat Iguana L4 Based Operating Systems L4.Sec Microkernel Specification NICTA L4-embedded Kernel Real-time operating systems Embedded operating systems Microkernel-based operating systems ARM operating systems
Operating System (OS)
609
Application virtualization Application virtualization is a software technology that encapsulates computer programs from the underlying operating system on which they are executed. A fully virtualized application is not installed in the traditional sense, although it is still executed as if it were. The application behaves at runtime like it is directly interfacing with the original operating system and all the resources managed by it, but can be isolated or sandboxed to varying degrees. In this context, the term "virtualization" refers to the artifact being encapsulated (the application), which is quite different from its meaning in hardware virtualization, where it refers to the artifact being abstracted (physical hardware). Description Full application virtualization requires a virtualization layer. Application virtualization layers replace part of the runtime environment normally provided by the operating system. The layer intercepts all disk operations of virtualized applications and transparently redirects them to a virtualized location, often a single file. The application remains unaware that it accesses a virtual resource instead of a physical one (a minimal sketch of this interception technique appears at the end of this article). Since the application is now working with one file instead of many files spread throughout the system, it becomes easy to run the application on a different computer, and previously incompatible applications can be run side by side. Examples of this technology for the Windows platform include:

Cameyo
Ceedo
Citrix XenApp
Microsoft App-V
Numecent Cloudpaging
Oracle Secure Global Desktop
Sandboxie
Shade sandbox
Turbo (software) (formerly Spoon and Xenocode)
Symantec Workspace Virtualization
VMware ThinApp
V2 Cloud

Benefits Application virtualization allows applications to run in environments that do not suit the native application. For example, Wine allows some Microsoft Windows applications to run on Linux. Application virtualization reduces system integration and administration costs by maintaining a common software baseline across multiple diverse computers in an organization. Looser integration protects the operating system and other applications from poorly written or buggy code. In some cases, it provides memory protection and IDE-style debugging features, and may even run applications that are not written correctly, for example applications that try to store user data in a read-only system-owned location. (This feature assists in the implementation of the principle of least privilege by removing the requirement for end-users to have administrative privileges in order to run poorly written applications.) It allows incompatible applications to run side by side, at the same time and with minimal regression testing against one another. Isolating applications from the operating system has security benefits as well, as the exposure of the virtualized application does not automatically entail the exposure of the entire OS. Application virtualization also enables simplified operating system migrations. Applications can be transferred to removable media or between computers without the need to install them, becoming portable software. Application virtualization uses fewer resources than a separate virtual machine. Limitations Not all computer programs can be virtualized. Some examples include applications that require a device driver (a form of integration with the OS) and 16-bit applications that need to run in shared memory space.
Anti-virus programs and applications that require heavy OS integration, such as WindowBlinds or StyleXP, are difficult to virtualize. Moreover, application virtualization carries licensing pitfalls, mainly because both the application virtualization software and the virtualized applications must be correctly licensed. While application virtualization can address file- and Registry-level compatibility issues between legacy applications and newer operating systems, applications that don't manage the heap correctly will not execute on Windows Vista, as they still allocate memory in the same way regardless of whether they are virtualized. For this reason, specialist application compatibility fixes (shims) may still be needed, even if the application is virtualized. Functional discrepancies within the multicompatibility model are an additional limitation, where utility-driven access points are shared within a public network. These limitations are overcome by designating a system-level share point driver. Related technologies Technology categories that fall under application virtualization include: Application streaming. Pieces of the application's code, data, and settings are delivered when they're first needed, instead of the entire application being delivered before startup. Running the packaged application may require the installation of a lightweight client application. Packages are usually delivered over a protocol such as HTTP, CIFS or RTSP. Remote Desktop Services (formerly called Terminal Services) is a server-based computing/presentation virtualization component of Microsoft Windows that allows a user to access applications and data hosted on a remote computer over a network. Remote Desktop Services sessions run in a single shared-server operating system (e.g. Windows Server 2008 R2 and later) and are accessed using the Remote Desktop Protocol. Desktop virtualization software technologies improve portability, manageability and compatibility of a personal computer's desktop environment by separating part or all of the desktop environment and associated applications from the physical client device that is used to access it. A common implementation of this approach is to host multiple desktop operating system instances on a server hardware platform running a hypervisor. This is generally referred to as "virtual desktop infrastructure" (VDI). See also Workspace virtualization Operating-system-level virtualization ("containerization") Portable application creators Comparison of application virtual machines Shim (computing) Virtual application References Virtualization software Windows security software MacOS security software Linux emulation software Unix emulation software
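Production virtualization layers hook the Windows file system at a much lower level, but the interception-and-redirection idea from the Description section above can be illustrated on Linux with a small preloaded library that interposes on open(). Everything here (the redirected path and the sandbox location) is invented for the example; build with something like cc -shared -fPIC redirect.c -o redirect.so -ldl and activate with LD_PRELOAD=./redirect.so:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdarg.h>
#include <string.h>

int open(const char *path, int flags, ...)
{
    static int (*real_open)(const char *, int, ...);
    if (!real_open)   /* look up the genuine libc open() the first time */
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    int mode = 0;
    if (flags & O_CREAT) {            /* mode argument is only present then */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, int);
        va_end(ap);
    }

    /* Transparently redirect one well-known path into a sandbox copy;
       the calling application never sees the substitution. */
    if (strcmp(path, "/etc/app.conf") == 0)
        path = "/tmp/sandbox/etc/app.conf";

    return real_open(path, flags, mode);
}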
Operating System (OS)
610
Shell (computing) In computing, a shell is a computer program which exposes an operating system's services to a human user or other programs. In general, operating system shells use either a command-line interface (CLI) or graphical user interface (GUI), depending on a computer's role and particular operation. It is named a shell because it is the outermost layer around the operating system. Command-line shells require the user to be familiar with commands and their calling syntax, and to understand concepts about the shell-specific scripting language (for example, bash). Graphical shells place a low burden on beginning computer users, and are characterized as being easy to use. Since they also come with certain disadvantages, most GUI-enabled operating systems also provide CLI shells. Overview Operating systems provide various services to their users, including file management, process management (running and terminating applications), batch processing, and operating system monitoring and configuration. Most operating system shells are not direct interfaces to the underlying kernel, even if a shell communicates with the user via peripheral devices attached to the computer directly. Shells are actually special applications that use the kernel API in just the same way as it is used by other application programs. A shell manages the user–system interaction by prompting users for input, interpreting their input, and then handling output from the underlying operating system (much like a read–eval–print loop, or REPL). Since the operating system shell is actually an application, it may easily be replaced with another similar application, for most operating systems. In addition to shells running on local systems, there are different ways to make remote systems available to local users; such approaches are usually referred to as remote access or remote administration. Initially available on multi-user mainframes, which provided text-based UIs for each active user simultaneously by means of a text terminal connected to the mainframe via serial line or modem, remote access has extended to Unix-like systems and Microsoft Windows. On Unix-like systems, the Secure Shell protocol is usually used for text-based shells, while SSH tunneling can be used for X Window System–based graphical user interfaces (GUIs). On Microsoft Windows, Remote Desktop Protocol can be used to provide GUI remote access, and since Windows Vista, PowerShell Remoting can be used for text-based remote access via WMI, RPC, and WS-Management. Most operating system shells fall into one of two categories: command-line and graphical. Command-line shells provide a command-line interface (CLI) to the operating system, while graphical shells provide a graphical user interface (GUI). Other possibilities, although not so common, include a voice user interface and various implementations of a text-based user interface (TUI) that are not CLI. The relative merits of CLI- and GUI-based shells are often debated. Command-line shells A command-line interface (CLI) is an operating system shell that uses alphanumeric characters typed on a keyboard to provide instructions and data to the operating system, interactively.
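The prompt–interpret–execute cycle described in the Overview can be condensed into a few lines of C. This is a deliberately minimal sketch of a Unix-style command interpreter (no quoting, pipelines or job control), not the source of any actual shell:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char line[256];
    for (;;) {
        fputs("$ ", stdout);                   /* prompt the user */
        fflush(stdout);
        if (!fgets(line, sizeof line, stdin))
            break;                             /* EOF ends the session */

        char *argv[32];                        /* split input into words */
        int argc = 0;
        for (char *tok = strtok(line, " \t\n");
             tok && argc < 31; tok = strtok(NULL, " \t\n"))
            argv[argc++] = tok;
        argv[argc] = NULL;
        if (argc == 0)
            continue;
        if (strcmp(argv[0], "exit") == 0)      /* one built-in command */
            break;

        pid_t pid = fork();                    /* run the command in a child */
        if (pid == 0) {
            execvp(argv[0], argv);             /* search PATH and execute */
            perror(argv[0]);                   /* reached only on failure */
            _exit(127);
        }
        waitpid(pid, NULL, 0);                 /* wait, then prompt again */
    }
    return 0;
}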
For example, a teletypewriter can send codes representing keystrokes to a command interpreter program running on the computer; the command interpreter parses the sequence of keystrokes and responds with an error message if it cannot recognize the sequence of characters, or it may carry out some other program action such as loading an application program, listing files, logging in a user and many others. Operating systems such as UNIX have a large variety of shell programs with different commands, syntax and capabilities, with the POSIX shell being a baseline. Some operating systems had only a single style of command interface; commodity operating systems such as MS-DOS came with a standard command interface (COMMAND.COM), but third-party interfaces were also often available, providing additional features or functions such as menuing or remote program execution. Application programs may also implement a command-line interface. For example, in Unix-like systems, the telnet program has a number of commands for controlling a link to a remote computer system. Since the commands to the program are made of the same keystrokes as the data being sent to a remote computer, some means of distinguishing the two is required. Either an escape sequence can be defined, using a special local keystroke that is never passed on but is always interpreted by the local system, or the program can be made modal, switching between interpreting commands from the keyboard and passing keystrokes on as data to be processed. A feature of many command-line shells is the ability to save sequences of commands for re-use. A data file can contain sequences of commands which the CLI can be made to follow as if typed in by a user. Special features in the CLI may apply when it is carrying out these stored instructions. Such batch files (script files) can be used repeatedly to automate routine operations such as initializing a set of programs when a system is restarted. Batch mode use of shells usually involves structures, conditionals, variables, and other elements of programming languages; some have the bare essentials needed for such a purpose, others are very sophisticated programming languages in and of themselves. Conversely, some programming languages can be used interactively from an operating system shell or in a purpose-built program. The command-line shell may offer features such as command-line completion, where the interpreter expands commands based on a few characters input by the user. A command-line interpreter may offer a history function, so that the user can recall earlier commands issued to the system and repeat them, possibly with some editing. Since all commands to the operating system had to be typed by the user, short command names and compact systems for representing program options were common. Short names were sometimes hard for a user to recall, and early systems lacked the storage resources to provide a detailed on-line user instruction guide. Graphical shells A graphical user interface (GUI) provides means for manipulating programs graphically, by allowing for operations such as opening, closing, moving and resizing windows, as well as switching focus between windows. Graphical shells may be included with desktop environments or come separately, even as a set of loosely coupled utilities.
Most graphical user interfaces develop the metaphor of an "electronic desktop", where data files are represented as if they were paper documents on a desk, and application programs similarly have graphical representations instead of being invoked by command names. Unix-like systems Graphical shells typically build on top of a windowing system. In the case of the X Window System or Wayland, the shell consists of an X window manager or a Wayland compositor, respectively, as well as of one or multiple programs providing the functionality to start installed applications, to manage open windows and virtual desktops, and often to support a widget engine. In the case of macOS, Quartz Compositor acts as the windowing system, and the shell consists of the Finder, the Dock, SystemUIServer, and Mission Control. Microsoft Windows Modern versions of the Microsoft Windows operating system use the Windows shell as their shell. Windows Shell provides the desktop environment, start menu, and task bar, as well as a graphical user interface for accessing the file management functions of the operating system. Older versions also include Program Manager, which was the shell for the 3.x series of Microsoft Windows, and which in fact shipped with later versions of Windows of both the 95 and NT types at least through Windows XP. The interfaces of Windows versions 1 and 2 were markedly different. Desktop applications are also considered shells, as long as they use a third-party engine. Likewise, many individuals and developers dissatisfied with the interface of Windows Explorer have developed software that either alters the functioning and appearance of the shell or replaces it entirely. WindowBlinds by Stardock is a good example of the former sort of application. LiteStep and Emerge Desktop are good examples of the latter. Interoperability programmes and purpose-designed software let Windows users use equivalents of many of the various Unix-based GUIs discussed above, as well as the Macintosh. An equivalent of the OS/2 Presentation Manager for version 3.0 can run some OS/2 programmes under some conditions, using the OS/2 environmental subsystem in versions of Windows NT. Other uses "Shell" is also used loosely to describe application software that is "built around" a particular component, such as web browsers and email clients, in analogy to the shells found in nature. Indeed, the (command-line) shell encapsulates the operating system kernel. These are also sometimes referred to as "wrappers". In expert systems, a shell is a piece of software that is an "empty" expert system without the knowledge base for any particular application. See also Comparison of command shells Human–computer interaction Internet Explorer shell Shell account Shell builtin Superuser Unix shell Window manager provides a rudimentary process management interface Read-eval-print loop also called language shell, a CLI for an interpreted programming language References Desktop environments
Operating System (OS)
611
MinGW MinGW ("Minimalist GNU for Windows"), formerly mingw32, is a free and open source software development environment to create Microsoft Windows applications. The development of the MinGW project has been forked with the creation in 2005–2008 of an alternative project called Mingw-w64. MinGW includes a port of the GNU Compiler Collection (GCC), GNU Binutils for Windows (assembler, linker, archive manager), a set of freely distributable Windows specific header files and static import libraries which enable the use of the Windows API, a Windows native build of the GNU Project's GNU Debugger, and miscellaneous utilities. MinGW does not rely on third-party C runtime dynamic-link library (DLL) files, and because the runtime libraries are not distributed using the GNU General Public License (GPL), it is not necessary to distribute the source code with the programs produced, unless a GPL library is used elsewhere in the program. MinGW can be run either on the native Microsoft Windows platform, cross-hosted on Linux (or other Unix), or "cross-native" on Cygwin. Although programs produced under MinGW are 32-bit executables, they can be used both in 32 and 64-bit versions of Windows. History MinGW was originally called mingw32 ("Minimalist GNU for W32"), following the GNU convention whereby Windows is shortened as "W32". The numbers were dropped in order to avoid the implication that it would be limited to producing 32-bit binaries. Colin Peters authored the initial release in 1998, consisting only of a Cygwin port of GCC. Jan-Jaap van der Heijden created a Windows-native port of GCC and added binutils and make. Mumit Khan later took over development, adding more Windows-specific features to the package, including the Windows system headers by Anders Norlander. In 2000, the project was moved to SourceForge in order to solicit more assistance from the community and centralize its development. MinGW was selected as Project of the Month at SourceForge for September 2005. MSYS (a contraction of "Minimal System") was introduced as a Bourne shell command line interpreter system with the aim of better interoperability with native Windows software. In 2018, following a disagreement with SourceForge about the administration of its mailing lists, MinGW migrated to OSDN. Fork In 2007, a fork of the original MinGW called Mingw-w64 appeared in order to provide support for 64 bits and new APIs. It has since then gained widespread use and distribution. MSYS2 ("minimal system 2") is a software distribution and a development platform for Microsoft Windows, based on Mingw-w64 and Cygwin, that helps to deploy code from the Unix world on Windows. Programming language support Most languages supported by GCC are supported on the MinGW port as well. These include C, C++, Objective-C, Objective-C++, Fortran, and Ada. The GCC runtime libraries are used (libstdc++ for C++, libgfortran for Fortran, etc.). MinGW links by default to the Windows OS component library MSVCRT, which is the C library that Visual C++ version 6.0 linked to (the initial target was CRTDLL), which was released in 1998 and therefore does not include support for C99 features, or even all of C89. While targeting MSVCRT yields programs that require no additional runtime redistributables to be installed, the lack of support for C99 has caused porting problems, particularly where printf-style conversion specifiers are concerned. 
These issues have been partially mitigated by the implementation of a C99 compatibility library, libmingwex, but the extensive work required is far from complete and may never be fully realized. Mingw-w64 has resolved these issues, and provides fully POSIX-compliant printf functionality. Link compatibility Binaries (executables or DLLs) generated with different C++ compilers (like MinGW and Visual Studio) are in general not link-compatible; however, compiled C code is link-compatible. Components The MinGW project maintains and distributes a number of different core components and supplementary packages, including various ports of the GNU toolchain, such as GCC and binutils. These utilities can be used from the Windows command line or integrated into an IDE. Packages may be installed using the command line via mingw-get. MinGW supports dynamic libraries named according to the <name>.lib and <name>.dll conventions, as well as static libraries following the lib<name>.a naming convention common on Unix and Unix-like systems. In addition, a component of MinGW known as MSYS (minimal system) provides Windows ports of a lightweight Unix-like shell environment, including rxvt and a selection of POSIX tools sufficient to enable autoconf scripts to run, but it does not provide a C compiler or a case-sensitive file system. mingwPORTs are user-contributed additions to the MinGW software collection. Rather than providing these "add-ons" as precompiled binary packages, they are supplied in the form of interactive Bourne shell scripts, which guide the end user through the process of automatically downloading and patching original source code, then building and installing it. Users who wish to build any application from a mingwPORT must first install both MinGW and MSYS. The implementation of Windows system headers and static import libraries is released under a permissive license, while the GNU ports are provided under the GNU General Public License. Binary downloads of both the complete MSYS package and individual MinGW GNU utilities are available from the MinGW site. Comparison with Cygwin Although both Cygwin and MinGW can be used to port Unix software to Windows, they have different approaches: Cygwin aims to provide a complete POSIX layer comprising a full implementation of all major Unix system calls and libraries. Compatibility is considered a higher priority than performance. On the other hand, MinGW's priorities are simplicity and performance. As such, it does not provide certain POSIX APIs which cannot easily be implemented using the Windows API, such as fork(), mmap() and ioctl(). Applications written using a cross-platform library that has itself been ported to MinGW, such as SDL, wxWidgets, Qt, or GTK, will usually compile as easily in MinGW as they would in Cygwin. Windows programs written with Cygwin run on top of a copylefted compatibility DLL that must be distributed with the program, along with the program's source code. MinGW does not require a compatibility layer, since MinGW-based programs are compiled with direct calls to Windows APIs. The combination of MinGW and MSYS provides a small, self-contained environment that can be loaded onto removable media without leaving entries in the registry or files on the computer. It is also possible to cross-compile Windows applications with MinGW-GCC under POSIX systems. This means that developers do not need a Windows installation with MSYS to compile software that will run on Windows with or without Cygwin.
See also Cygwin Windows Subsystem for Linux Mingw-w64 References External links official MinGW website official software repository in OSDN official Mingw-w64 website official MSYS2 website nuwen 64-bit MinGW distro - maintained by a Microsoft employee MXE - Makefiles to build MinGW on Unix and many common dependencies libraries, pre-built packages available 1998 software C (programming language) compilers C++ compilers Cross-compilers Fortran compilers Free compilers and interpreters Public-domain software
Operating System (OS)
612
MONECS MONECS (Monash University Educational Computing System) was a computer operating system with BASIC, COBOL, FORTRAN and Pascal interpreters, plus a machine language facility, specifically designed for computer science education in Australian secondary schools and at the university undergraduate level. Alternative designations were DEAMON (Digital Equipment Australia - Monash University) or SCUBA (the local designation at Melbourne University). Overview For teaching computer science students in Australian schools, Monash University created subsets of the FORTRAN language: an elementary version called MINITRAN, then an enhanced version called MIDITRAN. MIDITRAN versions were available for a number of different mainframe systems, i.e. the Burroughs B5000/B5500 series, CDC 3000, IBM 360 and ICL 1900. Students' programs were submitted on IBM Port-a-Punch cards, which could be programmed with an IBM board and stylus or even a bent paper clip. Standard 80-column punch cards were an option for students if a card punch was available. Before the minicomputer, it was impossible for a class of Australian students to have hands-on access to a computer within a one-hour school period. Mainframes were too expensive for small schools, and remote job entry equipment was typically limited to major corporations, universities and research centres. A group at Monash University under the leadership of Dr Len G. Whitehouse solved the problem with a small PDP-11 minicomputer system that could be used in the classroom. Mark sense cards were used, and a class of 30 children could each get two runs in a one-hour period. The Monash University series of student FORTRAN predated, and was an independent effort not associated with, DEC's PDP-8 based EDUSYSTEM series, which centred on the BASIC language. MONECS was optimised for the low-end hardware of the Digital Equipment Corporation (DEC) PDP-11 minicomputer family. A typical installation would be a PDP-11/03, /04, /05 or /10, or D. D. Webster Electronics' Spectrum-IIB (repackaged DEC LSI) processor, with 32 KB of memory. MONECS systems were based on the PDP-11/05 or PDP-11/10 processors with core memory. This was identical hardware rebadged by the manufacturer DEC just to indicate an OEM version. Student systems were fitted with a custom UNIBUS interface to support the Memorex 651 flexible drive, an early version of an 8-inch floppy disk. The next major releases were the DEAMON systems, based on PDP-11/04 or PDP-11/34 processors with semiconductor memory and DEC RX01 8-inch floppy disk drive(s). Then came the LSI-11 based systems, which moved away from the UNIBUS-based processors and used the PDP-11/03 and Spectrum-IIB systems. All systems were installed with a mark-sense card reader (PDI, Hewlett-Packard or Documation M-200), plus a 132-column lineprinter from Tally, DEC, etc. Student programs were typically submitted as a deck of mark-sense cards, although punched cards were an option. Due to the 32 KB memory constraint, MONECS processed student programs serially, with all jobs queued in the input hopper of the card reader. The appropriate language interpreter was loaded from the floppy disk for each job, and the results were printed before the next student's program was read in. The MONECS systems were supported by staff from the Monash University Computer Centre, which was an entity independent of the Computer Science Department. The Computer Centre shared facilities and staff with the Victorian Hospitals Computing Service (HCS).
The Computer Centre also processed mark-sense sheets on an ICL 1800 series reader for the Victorian Education Department's secondary students' final (year 12) examinations. A MONECS system at St Peter's Lutheran College was the first computer available for student use in a Queensland school. See also Timeline of operating systems References Further reading Monash University 1974 software
Operating System (OS)
613
Idris (operating system) Idris is a discontinued multi-tasking, Unix-like, multi-user, real-time operating system released by Whitesmiths of Westford, Massachusetts. The product was commercially available from 1979 through 1988. Background Idris was originally written for the PDP-11 by P. J. Plauger, who started working on it in August 1978. It was binary compatible with Unix V6 on the PDP-11, but it could run on non-memory-managed systems (like the LSI-11 or PDP-11/23) as well. The kernel required 31 KB of RAM, and the C compiler (provided along with the standard V6 toolset) was more or less the same size. Ports Although Idris was initially available for the PDP-11, it was later ported to run on a number of platforms, such as the VAX, Motorola 68000, System/370 and Intel 8086. There was also a version for the Intel 8080 that used bank-switching for memory management. In 1986, David M. Stanhope at Computer Tools International ported Idris to the Atari ST and developed its ROM boot cartridge. This work also included a port of X to Idris. Computer Tools and Whitesmiths offered it to Atari as a replacement for Atari TOS, but eventually marketed it directly to ST enthusiasts. A specific version of Idris (CoIdris) was packaged as a .COM file under DOS and used DOS for low-level I/O services. Idris was ported to the Apple Macintosh (as MacIdris) by John O'Brien (of Whitesmiths Australia) and remained available until the early 1990s. MacIdris ran as an application under the Finder or MultiFinder. After Whitesmiths merged with Intermetrics, Idris, along with its development toolchain, was ported by Real Time Systems Ltd to the INMOS T800 transputer architecture for the Parsytec SN1000 multiprocessor. References Discontinued operating systems PDP-11 Unix variants 68k architecture
Operating System (OS)
614
IBM System/36 The IBM System/36 (often abbreviated as S/36) was a midrange computer marketed by IBM from 1983 to 2000, a multi-user, multi-tasking successor to the System/34. Like the System/34 and the older System/32, the System/36 was primarily programmed in the RPG II language. One of the machine's optional features was an off-line storage mechanism (on the 5360 model) that utilized "magazines", boxes of 8-inch floppies that the machine could load and eject in a nonsequential fashion. The System/36 also had many mainframe features, such as programmable job queues and scheduling priority levels. While these systems were similar to other manufacturers' minicomputers, IBM itself described the System/32, System/34 and System/36 as "small systems" and later as midrange computers, along with the System/38 and the succeeding IBM AS/400 range. The AS/400 series and IBM Power Systems running IBM i can run System/36 code in the System/36 Environment, although the code needs to be recompiled on IBM i first. Overview of the IBM System/36 The IBM System/36 was a popular small business computer system, first announced on 16 May 1983 and shipped later that year. It had a 17-year product lifespan. The first model of the System/36 was the 5360. In the 1970s, the US Department of Justice brought an antitrust lawsuit against IBM, claiming it was using unlawful practices to knock out competitors. At this time, IBM had been about to consolidate its entire line (System/370, 4300, System/32, System/34, System/38) into one "family" of computers with the same ISAM database technology, programming languages, and hardware architecture. After the lawsuit was filed, IBM decided it would have two families: the System/38 line, intended for large companies and representing IBM's future direction, and the System/36 line, intended for small companies who had used the company's legacy System/32/34 computers. In the late 1980s the lawsuit was dropped, and IBM decided to recombine the two product lines, creating the AS/400, which replaced both the System/36 and System/38. The System/36 used virtually the same RPG II, Screen Design Aid, OCL, and other technologies that the System/34 used, though it was object-code incompatible. The S/36 was a small business computer; it had an 8-inch diskette drive, between one and four hard drives in sizes of 30 to 716 MB, and memory from 128 KB up to 7 MB. Tape drives were available as backup devices; the 6157 QIC (quarter-inch cartridge) and the reel-to-reel 8809 both had capacities of roughly 60 MB. The Advanced/36 9402 tape drive had a capacity of 2.5 GB. The IBM 5250 series of terminals were the primary interface to the System/36. System architecture Processors S/36s had two sixteen-bit processors, the CSP or Control Storage Processor, and the MSP or Main Storage Processor. The MSP was the workhorse; it performed the instructions in the computer programs. The CSP was the governor; it performed system functions in the background. Special utility programs were able to make direct calls to the CSP to perform certain functions; these were usually system programs like $CNFIG, which was used to configure the computer system. As with the earlier System/32 and System/34 hardware, the execution of so-called "scientific instructions" (i.e. floating-point operations) was implemented in software on the CSP. The primary purpose of the CSP was to keep the MSP busy; as such, it ran at slightly more than four times the speed of the MSP. The first System/36 models (the 5360-A) had a 4 MHz CSP and a 1 MHz MSP.
The CSP would load code and data into main storage behind the MSP's program counter. As the MSP was working on one process, the CSP was filling storage for the next process. The 5360 processors came in four models, labeled 5360-A through 5360-D. The later "D" model was about 60 percent faster than the "A" model. Front panel The 5360, 5362, and 5363 processors had a front panel display with four hexadecimal LEDs. If the operator "dialed up" the combination F-F-0-0 before performing an Initial Program Load (IPL, or system boot), many diagnostics were skipped, causing the duration of the IPL to be about a minute instead of about 10 minutes. Part of the IPL, however, typically involved keysorting the indexed files; if the machine had been shut down without a keysort (performed as part of the P S, or STOP SYSTEM, command), then depending on the number of indexed files (and their sizes) it could take upwards of an hour to come back up. Memory and disk The smallest S/36 had 128 KB of RAM and a 30 MB hard drive. The largest configured S/36 could support 7 MB of RAM and 1478 MB of disk space. This cost over US$200,000 back in the early 1980s. S/36 hard drives contained a feature called "the extra cylinder," so that bad spots on the drive were detected and dynamically mapped out to good spots on the extra cylinder. It is therefore possible for the S/36 to use more space than it can technically address. Disk address sizes limit the size of the active S/36 partition to about 2 GB; however, the Advanced/36 Large Package had a 4 GB hard drive which could contain up to three (emulated) S/36s, and Advanced/36 computers had more memory than SSP could address (32 MB to 96 MB), which was used to increase disk caching. Disk space on the System/36 was organized by blocks, with one block consisting of 2560 bytes. A high-end 5360 system would ship with about 550,000 blocks of disk space available (roughly 1.4 GB). System objects could be allocated in blocks or records, but internally it was always blocks. The System/36 supported memory paging, referred to as "swapping". Software The System Support Program (SSP) was the only operating system of the S/36. It contained support for multiprogramming, multiple processors, 80 devices, job queues, printer queues, security and indexed file support; fully installed, it was about 10 MB. On the Advanced/36, the number of workstations/printers was increased to 160. In the Guest/36 environment of certain OS/400 releases, up to 216 devices were supported. The S/36 could compile and run programs up to 64 KB in size, although most were not this large. This became a bottleneck issue only for the largest screen programs. With the Advanced/36, features were added to the SSP operating system, including the ability to call other programs from within a program: a program of, say, 60 KB could call another program of 30 KB or 40 KB. This call/parm facility had been available with third-party packages on the System/36 but was not widely used until the feature was put into releases 7.1 and 7.5 of SSP on the Advanced/36. Hardware models Main line System/36 Model 5360 The System/36 5360 was the first model of System/36. It weighed 700 lb (318 kg), cost $140,000 and is believed to have had processor speeds of about 2 MHz and 8 MHz for its two processors. The system ran on 208 or 240 volts AC. The five red lights on the System/36 were as follows: (1) Power check. (2) Processor check. (3) Program check. (4) Console check. (5) Temperature check. If any light other than #4 ever came on, the system needed to be rebooted.
There were various models of the 5360, including C and D models, which differed in speed and in the ability to support an additional frame housing two additional drives. System/36 Model 5362 IBM introduced the 5362 or "Compact 36" in 1984 as a system targeted at the lower end of their market. It had a deskside tower form factor. It was designed to operate in a normal office environment, requiring little special consideration. It differed from the 5360 by having a more limited card cage, capable of supporting fewer peripherals. It used 14" fixed disks (30 or 60MB) and could support up to two; main storage ranged from 128KB to 512 KB. One 8" floppy diskette drive was built in. The 5362 also allowed the use of channel-attached external desktop 9332 (models 200, 400, and 600) DASD, effectively allowing a maximum of 720MB. The 5362 weighed 150 pounds (68 kg) and cost $20,000. System/36 Model 5364 The model 5364 was called the "System/36 PC" or "Desktop 36" (and also, informally, the "Baby/36" by some – but this name was later attached to a software program produced by California Software Products, Inc.). The 5364 was a June 1985 attempt by IBM to implement a System/36 on PC-sized hardware. Inside, there were IBM chips, but the cabinet size was reminiscent of an IBM PC/AT of the period. The machine had a 1.2 MB 5.25-inch diskette drive, which was incompatible with PCs and with other S/36s. The control panel/system console (connected via an expansion card) was an IBM PC with at least 256KB RAM. System/36 Model 5363 The model 5363 was positioned as a replacement for the 5364, and was announced in October 1987. It used a deskside tower style enclosure like that of the 5362, but was only 2/3 the size. It featured updated hardware using newer, smaller hard drive platters, a 5.25" diskette drive, and a revised distribution of the SSP. AS/400-based backports The System/36 Environment of IBM i (previously OS/400) is a feature which provides a number of SSP utilities, as well as RPG II and OCL support. It does not implement binary compatibility with the System/36 - instead it allows programmers to port System/36 applications to IBM i by recompiling the code on top of the System/36 Environment, generating programs which use the native IBM i APIs. From V3R6 to V4R4, OS/400 was capable of running up to three instances of SSP inside virtual machines known as "Guest/36" or "M36". This relied on emulation of the MSP implemented by the OS/400 SLIC, and thus provided binary compatibility with SSP programs. AS/Entry (9401) The AS/Entry was a stripped-down AS/400; the first model was based on the AS/400 9401-P03. The operating system was SSP Release 6. This machine was offered c.1991 to target customers who had an S/36 and wanted to one day migrate to an AS/400, but did not want a large investment in an AS/400. In this regard, the AS/Entry was a failure, because IBM decided the machine's architecture was not economically feasible, and the older model 5363 on which the 9401 was based was a much more reliable system. The entry line was later upgraded to AS/400 9401-150 hardware. Advanced/36 (9402, 9406) In 1994, IBM released the AS/400 Advanced/36 with two models (9402-236 and 9402-436). Priced as low as $7995, it was a machine that allowed System/36 users to get faster and more modern hardware while "staying 36." 
Based on standard AS/400 hardware, the Advanced/36 could run SSP, the operating system of the System/36, either on its own or within AS/400's OS/400 as a virtual machine, so that it could be upgraded to a full-blown AS/400 for little more than additional licensing costs. The A/36 was packaged in a black enclosure which was slightly larger than a common PC cabinet. The Advanced/36 bought the world of System/36 and SSP about five more years in the marketplace, but by the end of the 20th century, the market for the System/36 was almost unrecognizable. The IBM printers and displays that had completely dominated the marketplace in the 1980s had been replaced by PCs or third-party monitors with attached PC-type printers. Twinaxial cable had disappeared in favor of cheap adapters and standard telephone wire. The System/36 was eventually replaced by AS/400s at the high end and PCs at the low end. The Advanced line was later upgraded to AS/400 9406-170 hardware. By 2000, the Advanced/36 was withdrawn from marketing. References Further reading News 3X/400's Desktop Guide to the S/36 Midrange Computing's Power Tools Everything You Always Wanted to Know About the System/36 But Nobody Told You by Charlie Massoglia Writing and Using System/36 Procedures Effectively by Charlie Massoglia Everything You Always Wanted to Know About POP But Nobody Told You by Merikay Lee System/3, System/34, and System/36 Disk Sort as a Programming Language by Charlie Massoglia External links IBM Archives: IBM System/36 Bitsavers' Archive of System/36 Documentation IBM System/36 brochures and Manuals System 36 Computer-related introductions in 1983 16-bit computers
Microsoft Microsoft Corporation is an American multinational technology corporation which produces computer software, consumer electronics, personal computers, and related services. Its best-known software products are the Microsoft Windows line of operating systems, the Microsoft Office suite, and the Internet Explorer and Edge web browsers. Its flagship hardware products are the Xbox video game consoles and the Microsoft Surface lineup of touchscreen personal computers. Microsoft ranked No. 21 in the 2020 Fortune 500 rankings of the largest United States corporations by total revenue; it was the world's largest software maker by revenue as of 2016. It is one of the Big Five American information technology companies, alongside Alphabet, Amazon, Apple, and Meta. Microsoft (the word being a portmanteau of "microcomputer software") was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. It rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Microsoft Windows. The company's 1986 initial public offering (IPO), and subsequent rise in its share price, created three billionaires and an estimated 12,000 millionaires among Microsoft employees. Since the 1990s, it has increasingly diversified from the operating system market and has made a number of corporate acquisitions, its largest being the acquisition of LinkedIn for $26.2 billion in December 2016, followed by its acquisition of Skype Technologies for $8.5 billion in May 2011. Microsoft is market-dominant in the IBM PC compatible operating system market and the office software suite market, although it has lost the majority of the overall operating system market to Android. The company also produces a wide range of other consumer and enterprise software for desktops, laptops, tablets, gadgets, and servers, including Internet search (with Bing), the digital services market (through MSN), mixed reality (HoloLens), cloud computing (Azure), and software development (Visual Studio). Steve Ballmer replaced Gates as CEO in 2000, and later envisioned a "devices and services" strategy. This unfolded with Microsoft acquiring Danger Inc. in 2008, entering the personal computer production market for the first time in June 2012 with the launch of the Microsoft Surface line of tablet computers, and later forming Microsoft Mobile through the acquisition of Nokia's devices and services division. Since Satya Nadella took over as CEO in 2014, the company has scaled back on hardware and has instead focused on cloud computing, a move that helped the company's shares reach their highest value since December 1999. Having been dethroned by Apple in 2010, Microsoft in 2018 reclaimed its position as the most valuable publicly traded company in the world. In April 2019, Microsoft reached a market capitalization of over $1 trillion, becoming the third U.S. public company to do so, after Apple and Amazon. Microsoft has the third-highest global brand valuation. History 1972–1985: Founding Childhood friends Bill Gates and Paul Allen sought to make a business using their skills in computer programming. In 1972, they founded Traf-O-Data, which sold a rudimentary computer to track and analyze automobile traffic data. Gates enrolled at Harvard University while Allen pursued a degree in computer science at Washington State University, though Allen later dropped out to work at Honeywell. 
The January 1975 issue of Popular Electronics featured Micro Instrumentation and Telemetry Systems' (MITS) Altair 8800 microcomputer, which inspired Allen to suggest that they could program a BASIC interpreter for the device. Gates called MITS and claimed that he had a working interpreter, and MITS requested a demonstration. Allen worked on a simulator for the Altair while Gates developed the interpreter, and it worked flawlessly when they demonstrated it to MITS in March 1975 in Albuquerque, New Mexico. MITS agreed to distribute it, marketing it as Altair BASIC. Gates and Allen established Microsoft on April 4, 1975, with Gates as CEO, and Allen suggested the name "Micro-Soft", short for micro-computer software. In August 1977, the company formed an agreement with ASCII Magazine in Japan, resulting in its first international office, ASCII Microsoft. Microsoft moved its headquarters to Bellevue, Washington, in January 1979. Microsoft entered the operating system (OS) business in 1980 with its own version of Unix called Xenix, but it was MS-DOS that solidified the company's dominance. IBM awarded a contract to Microsoft in November 1980 to provide a version of the CP/M OS to be used in the IBM Personal Computer (IBM PC). For this deal, Microsoft purchased a CP/M clone called 86-DOS from Seattle Computer Products, which it branded as MS-DOS, although IBM rebranded it to IBM PC DOS. Microsoft retained ownership of MS-DOS following the release of the IBM PC in August 1981. IBM had copyrighted the IBM PC BIOS, so other companies had to reverse engineer it in order for non-IBM hardware to run as IBM PC compatibles, but no such restriction applied to the operating systems. Microsoft eventually became the leading PC operating systems vendor. The company expanded into new markets with the release of the Microsoft Mouse in 1983, as well as with a publishing division named Microsoft Press. Paul Allen resigned from Microsoft in 1983 after developing Hodgkin's disease. Allen claimed in Idea Man: A Memoir by the Co-founder of Microsoft that Gates wanted to dilute Allen's share in the company when Allen was diagnosed with Hodgkin's disease, because Gates did not think Allen was working hard enough. Allen later invested in low-tech sectors, sports teams, commercial real estate, neuroscience, private space flight, and more. 1985–1994: Windows and Office Microsoft released Microsoft Windows on November 20, 1985, as a graphical extension for MS-DOS, despite having begun jointly developing OS/2 with IBM the previous August. Microsoft moved its headquarters from Bellevue to Redmond, Washington, on February 26, 1986, and went public on March 13, with the resulting rise in stock making an estimated four billionaires and 12,000 millionaires from Microsoft employees. Microsoft released its version of OS/2 to original equipment manufacturers (OEMs) on April 2, 1987. In 1990, the Federal Trade Commission examined Microsoft for possible collusion due to the partnership with IBM, marking the beginning of more than a decade of legal clashes with the government. Meanwhile, the company was at work on Microsoft Windows NT, which was heavily based on their copy of the OS/2 code. It shipped on July 21, 1993, with a new modular kernel and the 32-bit Win32 application programming interface (API), making it easier to port from 16-bit (MS-DOS-based) Windows. Microsoft informed IBM of Windows NT, and the OS/2 partnership deteriorated. 
In 1990, Microsoft introduced the Microsoft Office suite, which bundled separate applications such as Microsoft Word and Microsoft Excel. On May 22, Microsoft launched Windows 3.0, featuring streamlined user interface graphics and improved protected mode capability for the Intel 386 processor, and both Office and Windows became dominant in their respective areas. On July 27, 1994, the Department of Justice's Antitrust Division filed a competitive impact statement which said: "Beginning in 1988 and continuing until July 15, 1994, Microsoft induced many OEMs to execute anti-competitive 'per processor' licenses. Under a per-processor license, an OEM pays Microsoft a royalty for each computer it sells containing a particular microprocessor, whether the OEM sells the computer with a Microsoft operating system or a non-Microsoft operating system. In effect, the royalty payment to Microsoft when no Microsoft product is being used acts as a penalty, or tax, on the OEM's use of a competing PC operating system. Since 1988, Microsoft's use of per processor licenses has increased." 1995–2007: Foray into the Web, Windows 95, Windows XP, and Xbox Following Bill Gates' internal "Internet Tidal Wave memo" on May 26, 1995, Microsoft began to redefine its offerings and expand its product line into computer networking and the World Wide Web. With a few exceptions among new companies like Netscape, Microsoft was the only major, established company that acted fast enough to be a part of the World Wide Web practically from the start. Other companies, like Borland, WordPerfect, Novell, IBM and Lotus, were much slower to adapt to the new situation, which gave Microsoft market dominance. The company released Windows 95 on August 24, 1995, featuring pre-emptive multitasking, a completely new user interface with a novel start button, and 32-bit compatibility; similar to NT, it provided the Win32 API. Windows 95 came bundled with the online service MSN, which was at first intended to be a competitor to the Internet, and (for OEMs) Internet Explorer, a Web browser. Internet Explorer was not bundled with the retail Windows 95 boxes, because the boxes were printed before the team finished the Web browser, and instead was included in the Windows 95 Plus! pack. Backed by a high-profile marketing campaign and what The New York Times called "the splashiest, most frenzied, most expensive introduction of a computer product in the industry's history," Windows 95 quickly became a success. Branching out into new markets in 1996, Microsoft and General Electric's NBC unit created a new 24/7 cable news channel, MSNBC. Microsoft created Windows CE 1.0, a new OS designed for devices with low memory and other constraints, such as personal digital assistants. In October 1997, the Justice Department filed a motion in the Federal District Court, stating that Microsoft had violated an agreement signed in 1994, and asked the court to stop the bundling of Internet Explorer with Windows. On January 13, 2000, Bill Gates handed over the CEO position to Steve Ballmer, an old college friend of Gates and employee of the company since 1980, while creating a new position for himself as Chief Software Architect. Various companies including Microsoft formed the Trusted Computing Platform Alliance in October 1999 to (among other things) increase security and protect intellectual property through identifying changes in hardware and software. 
Critics decried the alliance as a way to enforce indiscriminate restrictions over how consumers use software and over how computers behave, and as a form of digital rights management: for example, the scenario where a computer is not only secured for its owner, but also secured against its owner. On April 3, 2000, a judgment was handed down in the case of United States v. Microsoft Corp., calling the company an "abusive monopoly." Microsoft later settled with the U.S. Department of Justice in 2004. On October 25, 2001, Microsoft released Windows XP, unifying the mainstream and NT lines of OS under the NT codebase. The company released the Xbox later that year, entering the video game console market dominated by Sony and Nintendo. In March 2004 the European Union brought antitrust legal action against the company, citing it abused its dominance with the Windows OS, resulting in a judgment of €497 million ($613 million) and requiring Microsoft to produce new versions of Windows XP without Windows Media Player: Windows XP Home Edition N and Windows XP Professional N. In November 2005, the company's second video game console, the Xbox 360, was released. There were two versions, a basic version for $299.99 and a deluxe version for $399.99. Increasingly present in the hardware business following Xbox, Microsoft in 2006 released the Zune series of digital media players, a successor of its previous software platform Portable Media Center. These expanded on previous hardware commitments from Microsoft following its original Microsoft Mouse in 1983; as of 2007 the company sold the best-selling wired keyboard (Natural Ergonomic Keyboard 4000), mouse (IntelliMouse), and desktop webcam (LifeCam) in the United States. That year the company also launched the Surface "digital table", later renamed PixelSense. 2007–2011: Microsoft Azure, Windows Vista, Windows 7, and Microsoft Stores Released in January 2007, the next version of Windows, Vista, focused on features, security and a redesigned user interface dubbed Aero. Microsoft Office 2007, released at the same time, featured a "Ribbon" user interface which was a significant departure from its predecessors. Relatively strong sales of both products helped to produce a record profit in 2007. The European Union imposed another fine of €899 million ($1.4 billion) for Microsoft's lack of compliance with the March 2004 judgment on February 27, 2008, saying that the company charged rivals unreasonable prices for key information about its workgroup and backoffice servers. Microsoft stated that it was in compliance and that "these fines are about the past issues that have been resolved". 2007 also saw the creation of a multi-core unit at Microsoft, following in the footsteps of server companies such as Sun and IBM. Gates retired from his role as Chief Software Architect on June 27, 2008, a decision announced in June 2006, while retaining other positions at the company in addition to being an advisor on key projects. Azure Services Platform, the company's entry into the cloud computing market for Windows, launched on October 27, 2008. On February 12, 2009, Microsoft announced its intent to open a chain of Microsoft-branded retail stores, and on October 22, 2009, the first retail Microsoft Store opened in Scottsdale, Arizona; the same day Windows 7 was officially released to the public. Windows 7's focus was on refining Vista with ease-of-use features and performance enhancements, rather than an extensive reworking of Windows. 
As the smartphone industry boomed in the late 2000s, Microsoft struggled to keep up with its rivals in providing a modern smartphone operating system, falling behind Apple and Google-sponsored Android in the United States. As a result, in 2010 Microsoft revamped its aging flagship mobile operating system, Windows Mobile, replacing it with the new Windows Phone OS, released in October that year. It used a new user interface design language, codenamed "Metro", which prominently used simple shapes, typography and iconography, drawing on the concept of minimalism. Microsoft implemented a new strategy for the software industry, providing a consistent user experience across all smartphones using the Windows Phone OS. It launched an alliance with Nokia in 2011 and worked closely with the company to co-develop Windows Phone, while remaining partners with long-time Windows Mobile OEM HTC. Microsoft is a founding member of the Open Networking Foundation, started on March 23, 2011. Fellow founders were Google, HP Networking, Yahoo!, Verizon Communications, Deutsche Telekom and 17 other companies. This nonprofit organization is focused on providing support for a networking initiative called software-defined networking. The initiative is meant to speed innovation through simple software changes in telecommunications networks, wireless networks, data centers and other networking areas. 2011–2014: Windows 8/8.1, Xbox One, Outlook.com, and Surface devices Following the release of Windows Phone, Microsoft undertook a gradual rebranding of its product range throughout 2011 and 2012, with the corporation's logos, products, services and websites adopting the principles and concepts of the Metro design language. Microsoft unveiled Windows 8, an operating system designed to power both personal computers and tablet computers, in Taipei in June 2011. A developer preview was released on September 13, which was subsequently replaced by a consumer preview on February 29, 2012, and by a release preview in May. The Surface was unveiled on June 18, becoming the first computer in the company's history to have its hardware made by Microsoft. On June 25, Microsoft paid US$1.2 billion to buy the social network Yammer. On July 31, it launched the Outlook.com webmail service to compete with Gmail. On September 4, 2012, Microsoft released Windows Server 2012. In July 2012, Microsoft sold its 50% stake in MSNBC, which it had run as a joint venture with NBC since 1996. On October 1, Microsoft announced its intention to launch a news operation, part of a new-look MSN, with Windows 8 later in the month. On October 26, 2012, Microsoft launched Windows 8 and the Microsoft Surface. Three days later, Windows Phone 8 was launched. To cope with the potential for an increase in demand for products and services, Microsoft opened a number of "holiday stores" across the U.S. to complement the increasing number of "bricks-and-mortar" Microsoft Stores that opened in 2012. On March 29, 2013, Microsoft launched a Patent Tracker. In August 2012, the New York City Police Department announced a partnership with Microsoft for the development of the Domain Awareness System, which is used for police surveillance in New York City. The Kinect, a motion-sensing input device made by Microsoft and designed as a video game controller, first introduced in November 2010, was upgraded for the 2013 release of the Xbox One video game console. 
Kinect's capabilities were revealed in May 2013: an ultra-wide 1080p camera, the ability to function in the dark thanks to an infrared sensor, higher-end processing power and new software, the ability to distinguish between fine movements (such as a thumb movement), and the ability to determine a user's heart rate by looking at their face. Microsoft filed a patent application in 2011 that suggests that the corporation may use the Kinect camera system to monitor the behavior of television viewers as part of a plan to make the viewing experience more interactive. On July 19, 2013, Microsoft stocks suffered their biggest one-day percentage sell-off since the year 2000, after its fourth-quarter report raised concerns among the investors on the poor showings of both Windows 8 and the Surface tablet. Microsoft lost more than US$32 billion in market value. In line with the maturing PC business, in July 2013, Microsoft announced that it would reorganize the business into four new business divisions, namely Operating System, Apps, Cloud, and Devices. All previous divisions would be dissolved into new divisions without any workforce cuts. On September 3, 2013, Microsoft agreed to buy Nokia's mobile unit for $7 billion, following Amy Hood's appointment as CFO. 2014–2020: Windows 10, Microsoft Edge, and HoloLens On February 4, 2014, Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadella, who previously led Microsoft's Cloud and Enterprise division. On the same day, John W. Thompson took on the role of chairman, in place of Bill Gates, who continued to participate as a technology advisor. Thompson became the second chairman in Microsoft's history. On April 25, 2014, Microsoft acquired Nokia Devices and Services for $7.2 billion. This new subsidiary was renamed Microsoft Mobile Oy. On September 15, 2014, Microsoft acquired the video game development company Mojang, best known for Minecraft, for $2.5 billion. On June 8, 2017, Microsoft acquired Hexadite, an Israeli security firm, for $100 million. On January 21, 2015, Microsoft announced the release of its first interactive whiteboard, Microsoft Surface Hub. On July 29, 2015, Windows 10 was released, with its server sibling, Windows Server 2016, released in September 2016. In Q1 2015, Microsoft was the third largest maker of mobile phones, selling 33 million units (7.2% of all). While a large majority (at least 75%) of them did not run any version of Windows Phone (those other phones are not categorized as smartphones by Gartner), in the same time frame 8 million Windows smartphones (2.5% of all smartphones) were made by all manufacturers (but mostly by Microsoft). Microsoft's share of the U.S. smartphone market in January 2016 was 2.7%. During the summer of 2015 the company lost $7.6 billion related to its mobile-phone business, firing 7,800 employees. On March 1, 2016, Microsoft announced the merger of its PC and Xbox divisions, with Phil Spencer announcing that Universal Windows Platform (UWP) apps would be the focus for Microsoft's gaming in the future. On January 24, 2017, Microsoft showcased Intune for Education at the BETT 2017 education technology conference in London. Intune for Education is a new cloud-based application and device management service for the education sector. In May 2016, the company announced it was laying off 1,850 workers, and taking an impairment and restructuring charge of $950 million. In June 2016, Microsoft announced a project named Microsoft Azure Information Protection. 
It aims to help enterprises protect their data as it moves between servers and devices. In November 2016, Microsoft joined the Linux Foundation as a Platinum member during Microsoft's Connect(); developer event in New York. The cost of each Platinum membership is US$500,000 per year. Some analysts deemed this unthinkable ten years prior: in 2001, then-CEO Steve Ballmer had called Linux a "cancer". Microsoft planned to launch a preview of Intune for Education "in the coming weeks", with general availability scheduled for spring 2017, priced at $30 per device, or through volume licensing agreements. In January 2018, Microsoft patched Windows 10 to account for CPU problems related to Intel's Meltdown security breach. The patch led to issues with the Microsoft Azure virtual machines reliant on Intel's CPU architecture. On January 12, Microsoft released PowerShell Core 6.0 for the macOS and Linux operating systems. In February 2018, Microsoft ended notification support for its Windows Phone devices, which effectively ended firmware updates for the discontinued devices. In March 2018, Microsoft recalled Windows 10 S to change it to a mode for the Windows operating system rather than a separate and unique operating system. In March the company also established guidelines that censor users of Office 365 for using profanity in private documents. In April 2018, Microsoft released the source code for Windows File Manager under the MIT License to celebrate the program's 20th anniversary. In April the company further expressed willingness to embrace open source initiatives by announcing Azure Sphere as its own derivative of the Linux operating system. In May 2018, Microsoft partnered with 17 American intelligence agencies to develop cloud computing products. The project is dubbed "Azure Government" and has ties to the Joint Enterprise Defense Infrastructure (JEDI) surveillance program. On June 4, 2018, Microsoft officially announced the acquisition of GitHub for $7.5 billion, a deal that closed on October 26, 2018. On July 10, 2018, Microsoft revealed the Surface Go platform to the public. Later in the month it made Microsoft Teams available gratis. In August 2018, Microsoft released two projects called Microsoft AccountGuard and Defending Democracy. It also unveiled Snapdragon 850 compatibility for Windows 10 on the ARM architecture. In August 2018, Toyota Tsusho began a partnership with Microsoft to create fish farming tools using the Microsoft Azure application suite for Internet of things (IoT) technologies related to water management. Developed in part by researchers from Kindai University, the water pump mechanisms use artificial intelligence to count the number of fish on a conveyor belt, analyze the number of fish, and deduce the effectiveness of water flow from the data the fish provide. The specific computer programs used in the process fall under the Azure Machine Learning and the Azure IoT Hub platforms. In September 2018, Microsoft discontinued Skype Classic. On October 10, 2018, Microsoft joined the Open Invention Network community, despite holding more than 60,000 patents. In November 2018, Microsoft agreed to supply 100,000 Microsoft HoloLens headsets to the United States military in order to "increase lethality by enhancing the ability to detect, decide and engage before the enemy." In November 2018, Microsoft introduced Azure Multi-Factor Authentication for Microsoft Azure. 
In December 2018, Microsoft announced Project Mu, an open source release of the Unified Extensible Firmware Interface (UEFI) core used in Microsoft Surface and Hyper-V products. The project promotes the idea of Firmware as a Service. In the same month, Microsoft announced the open source implementation of Windows Forms and the Windows Presentation Foundation (WPF), which would allow for further movement of the company toward the transparent release of key frameworks used in developing Windows desktop applications and software. December also saw the company discontinue its own browser engine in favor of a Chromium backend for Microsoft Edge. On February 20, 2019, Microsoft Corp said it would offer its cyber security service AccountGuard to 12 new markets in Europe, including Germany, France and Spain, to close security gaps and protect customers in the political space from hacking. In February 2019, hundreds of Microsoft employees protested the company's war profiteering from a $480 million contract to develop virtual reality headsets for the United States Army. 2020–present: Acquisitions, Xbox Series X/S, and Windows 11 On March 26, 2020, Microsoft announced it was acquiring Affirmed Networks for about $1.35 billion. Due to the COVID-19 pandemic, Microsoft closed all of its retail stores indefinitely due to health concerns. On July 22, 2020, Microsoft announced plans to close its Mixer service, planning to move existing partners to Facebook Gaming. On July 31, 2020, it was reported that Microsoft was in talks to acquire TikTok after the Trump administration ordered ByteDance to divest ownership of the application to the U.S. On August 3, 2020, after speculation on the deal, Donald Trump stated that Microsoft could buy the application, however it should be completed by September 15, 2020, and that the United States Department of the Treasury should receive a portion if it were to go through. On August 5, 2020, Microsoft stopped its xCloud game streaming test for iOS devices. According to Microsoft, the future of xCloud on iOS remains unclear and potentially out of Microsoft's hands. Apple has imposed a strict limit on "remote desktop clients" which means applications are only allowed to connect to a user-owned host device or gaming console owned by the user. On September 21, 2020, Microsoft announced its intent to acquire video game company ZeniMax Media, the parent company of Bethesda Softworks, for about $7.5 billion, with the deal expected to close in the second half of the 2021 fiscal year. On March 9, 2021, the acquisition was finalized and ZeniMax Media became part of Microsoft's Xbox Game Studios division. The total price of the deal was $8.1 billion. On September 22, 2020, Microsoft announced that it had an exclusive license to use OpenAI's GPT-3 artificial intelligence language generator. GPT-3's predecessor, GPT-2, made headlines for being "too dangerous to release" and had numerous capabilities, including designing websites, prescribing medication, answering questions and penning articles. On November 10, 2020, Microsoft released the Xbox Series X and Xbox Series S video game consoles. In April 2021, Microsoft said that it would buy Nuance Communications for about $16 billion in cash. In 2021, in part due to the strong quarterly earnings spurred by the COVID-19 pandemic, Microsoft's valuation neared $2 trillion. The increased necessity for remote work and distance education drove up the demand for cloud-computing services and grew the company's gaming sales. 
On June 24, 2021, Microsoft announced Windows 11 during a livestream. The announcement caused some confusion, since Microsoft had previously said that Windows 10 would be the last version of the operating system. Set for release in the fall of 2021, Windows 11 was released to the general public on October 5, 2021. In October 2021, Microsoft announced that it began rolling out end-to-end encryption (E2EE) support for Microsoft Teams calls in order to secure business communication while using video conferencing software. Users can ensure that their calls are encrypted and can utilize a security code which both parties on a call must verify on their respective ends. On October 7, Microsoft acquired Ally.io, a software service that measures companies' progress against OKRs. Microsoft plans to incorporate Ally.io into its Viva family of employee experience products. On January 18, 2022, Microsoft announced the acquisition of American video game developer and holding company Activision Blizzard in an all-cash deal worth $68.7 billion. Activision Blizzard is best known for producing franchises including, but not limited to, Warcraft, Diablo, Call of Duty, StarCraft, Candy Crush Saga, and Overwatch. Activision and Microsoft each released statements saying the acquisition was to benefit their businesses in the metaverse; many saw Microsoft's acquisition of video game studios as an attempt to compete against Meta Platforms, with TheStreet referring to Microsoft wanting to become "the Disney of the metaverse". Microsoft has not released statements regarding Activision's recent legal controversies over employee abuse, but reports have alleged that Activision CEO Bobby Kotick, a major target of the controversy, will leave the company after the acquisition is finalized. The deal is expected to close in 2023, following a review by the US Federal Trade Commission. Corporate affairs Board of directors The company is run by a board of directors made up of mostly company outsiders, as is customary for publicly traded companies. Members of the board of directors as of July 2020 are Satya Nadella, Reid Hoffman, Hugh Johnston, Teri List-Stoll, Sandi Peterson, Penny Pritzker, Charles Scharf, Arne Sorenson, John W. Stanton, John W. Thompson, Emma Walmsley and Padmasree Warrior. Board members are elected every year at the annual shareholders' meeting using a majority vote system. There are four committees within the board that oversee more specific matters. These committees include the Audit Committee, which handles accounting issues with the company including auditing and reporting; the Compensation Committee, which approves compensation for the CEO and other employees of the company; the Governance and Nominating Committee, which handles various corporate matters including the nomination of the board; and the Regulatory and Public Policy Committee, which handles legal/antitrust matters, along with privacy, trade, digital safety, artificial intelligence, and environmental sustainability. On March 13, 2020, Gates announced that he was leaving the boards of directors of Microsoft and Berkshire Hathaway to focus more on his philanthropic efforts. According to Aaron Tilley of The Wall Street Journal, this marked "the biggest boardroom departure in the tech industry since the death of longtime rival and Apple Inc. co-founder Steve Jobs." 
On January 13, 2022, The Wall Street Journal reported that Microsoft's board of directors planned to hire an external law firm to review its sexual harassment and gender discrimination policies, and to release a summary of how the company handled past allegations of misconduct against Bill Gates and other corporate executives. Chief executives Bill Gates (1975–2000) Steve Ballmer (2000–2014) Satya Nadella (2014–present) Financial When Microsoft went public and launched its initial public offering (IPO) in 1986, the opening stock price was $21; after the trading day, the price closed at $27.75. As of July 2010, with the company's nine stock splits, any IPO shares would be multiplied by 288; given the splits and other factors, the split-adjusted IPO price works out to about 9 cents per share (see the worked example at the end of this section). The stock price peaked in 1999 at around $119 ($60.928, adjusting for splits). The company began to offer a dividend on January 16, 2003, starting at eight cents per share for the fiscal year, followed by a dividend of sixteen cents per share the subsequent year; it switched from yearly to quarterly dividends in 2005, at eight cents a share per quarter, with a special one-time payout of three dollars per share for the second quarter of the fiscal year. Though the company had subsequent increases in dividend payouts, the price of Microsoft's stock remained steady for years. Standard & Poor's and Moody's Investors Service have both given a AAA rating to Microsoft, whose assets were valued at $41 billion as compared to only $8.5 billion in unsecured debt. Consequently, in February 2011 Microsoft issued a corporate bond amounting to $2.25 billion with relatively low borrowing rates compared to government bonds. For the first time in 20 years, Apple Inc. surpassed Microsoft in Q1 2011 quarterly profits and revenues, due to a slowdown in PC sales and continuing huge losses in Microsoft's Online Services Division (which contains its search engine Bing). Microsoft profits were $5.2 billion, while Apple Inc. profits were $6 billion, on revenues of $14.5 billion and $24.7 billion respectively. Microsoft's Online Services Division had been continuously loss-making since 2006; in Q1 2011 it lost $726 million, following a loss of $2.5 billion for the year 2010. On July 20, 2012, Microsoft posted its first quarterly loss ever, despite earning record revenues for the quarter and fiscal year, with a net loss of $492 million due to a writedown related to the advertising company aQuantive, which had been acquired for $6.2 billion back in 2007. As of January 2014, Microsoft's market capitalization stood at $314B, making it the 8th largest company in the world by market capitalization. On November 14, 2014, Microsoft overtook ExxonMobil to become the second most-valuable company by market capitalization, behind only Apple Inc. Its total market value was over $410B—with the stock price hitting $50.04 a share, the highest since early 2000. In 2015, Reuters reported that Microsoft Corp had earnings abroad of $76.4 billion which were untaxed by the Internal Revenue Service. Under U.S. law, corporations don't pay income tax on overseas profits until the profits are brought into the United States. 
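The split arithmetic quoted above can be reproduced directly. The sketch below is a minimal illustration: the individual split ratios (seven 2-for-1 splits and two 3-for-2 splits between 1987 and 2003) are the commonly cited figures and are assumptions here, but their product matches the 288 multiplier stated in this article.

```python
from functools import reduce

# Assumed ratios for Microsoft's nine stock splits (commonly cited:
# 2-for-1 in 1987, 1990, 1994, 1996, 1998, 1999, 2003; 3-for-2 in 1991, 1992).
split_ratios = [2, 2, 1.5, 1.5, 2, 2, 2, 2, 2]

# Cumulative share multiple: one IPO share becomes this many shares.
multiplier = reduce(lambda a, b: a * b, split_ratios)
print(multiplier)  # 288.0 -- matches the figure given in the article

ipo_close = 27.75  # first-day closing price in 1986, per the article
adjusted = ipo_close / multiplier
print(f"Split-adjusted IPO price: ${adjusted:.3f}")  # ~$0.096, i.e. about 9 cents
```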
In November 2018, the company won a $480 million military contract with the U.S. government to bring augmented reality (AR) headset technology into the weapon repertoires of American soldiers. The two-year contract may result in follow-on orders of more than 100,000 headsets, according to documentation describing the bidding process. One of the tag lines for the contract's augmented reality technology is its ability to enable "25 bloodless battles before the 1st battle", suggesting that actual combat training is going to be an essential aspect of the headset's capabilities. Subsidiaries Microsoft is an international business. As such, it needs subsidiaries present in whatever national markets it chooses to serve. An example is Microsoft Canada, which it established in 1985. Other countries have similar subsidiaries, which funnel profits back up to Redmond and distribute the dividends to the holders of MSFT stock. Marketing In 2004, Microsoft commissioned research firms to do independent studies comparing the total cost of ownership (TCO) of Windows Server 2003 to Linux; the firms concluded that companies found Windows easier to administer than Linux, and thus those using Windows would administer it faster, resulting in lower costs for their company (i.e. lower TCO). This spurred a wave of related studies; a study by the Yankee Group concluded that upgrading from one version of Windows Server to another costs a fraction of the switching costs from Windows Server to Linux, although companies surveyed noted the increased security and reliability of Linux servers and concern about being locked into using Microsoft products. Another study, released by the Open Source Development Labs, claimed that the Microsoft studies were "simply outdated and one-sided" and their survey concluded that the TCO of Linux was lower due to Linux administrators managing more servers on average, among other reasons. As part of the "Get the Facts" campaign, Microsoft highlighted the .NET Framework trading platform that it had developed in partnership with Accenture for the London Stock Exchange, claiming that it provided "five nines" reliability. After suffering extended downtime and unreliability, the London Stock Exchange announced in 2009 that it was planning to drop its Microsoft solution and switch to a Linux-based one in 2010. In 2012, Microsoft hired a political pollster named Mark Penn, whom The New York Times called "famous for bulldozing" his political opponents, as Executive Vice-President of Advertising and Strategy. Penn created a series of negative advertisements targeting one of Microsoft's chief competitors, Google. The advertisements, called "Scroogled", attempt to make the case that Google is "screwing" consumers with search results rigged to favor Google's paid advertisers, that Gmail violates the privacy of its users by placing ad results related to the content of their emails, and that its shopping results favor Google products. Tech publications like TechCrunch have been highly critical of the advertising campaign, while Google employees have embraced it. Layoffs In July 2014, Microsoft announced plans to lay off 18,000 employees. Microsoft employed 127,104 people as of June 5, 2014, making this about a 14 percent reduction of its workforce and the biggest layoff in Microsoft's history. This included 12,500 professional and factory personnel. Previously, Microsoft had eliminated 5,800 jobs in 2009 in line with the Great Recession. In September 2014, Microsoft laid off 2,100 people, including 747 people in the Seattle–Redmond area, where the company is headquartered. The firings came as a second wave of the layoffs that were previously announced. This brought the total number to over 15,000 out of the 18,000 expected cuts. 
In October 2014, Microsoft revealed that it was almost done with the elimination of 18,000 employees, which was its largest-ever layoff sweep. In July 2015, Microsoft announced another 7,800 job cuts in the next several months. In May 2016, Microsoft announced another 1,850 job cuts, mostly in its Nokia mobile phone division. As a result, the company would record an impairment and restructuring charge of approximately $950 million, of which approximately $200 million would relate to severance payments. United States government Microsoft provides information about reported bugs in its software to intelligence agencies of the United States government, prior to the public release of the fix. A Microsoft spokesperson has stated that the corporation runs several programs that facilitate the sharing of such information with the U.S. government. Following media reports about PRISM, NSA's massive electronic surveillance program, in May 2013, several technology companies were identified as participants, including Microsoft. According to leaks about the program, Microsoft joined the PRISM program in 2007. However, in June 2013, an official statement from Microsoft flatly denied its participation in the program. During the first six months in 2013, Microsoft had received requests that affected between 15,000 and 15,999 accounts. In December 2013, the company made a statement to further emphasize the fact that it takes its customers' privacy and data protection very seriously, even saying that "government snooping potentially now constitutes an 'advanced persistent threat,' alongside sophisticated malware and cyber attacks". The statement also marked the beginning of a three-part program to enhance Microsoft's encryption and transparency efforts. On July 1, 2014, as part of this program, the company opened the first of many Microsoft Transparency Centers, which provide "participating governments with the ability to review source code for our key products, assure themselves of their software integrity, and confirm there are no 'back doors.'" Microsoft has also argued that the United States Congress should enact strong privacy regulations to protect consumer data. In April 2016, the company sued the U.S. government, arguing that secrecy orders were preventing the company from disclosing warrants to customers, in violation of the company's and customers' rights. Microsoft argued that it was unconstitutional for the government to indefinitely ban Microsoft from informing its users that the government was requesting their emails and other documents, and that the Fourth Amendment made it so people or businesses had the right to know if the government searches or seizes their property. On October 23, 2017, Microsoft said it would drop the lawsuit as a result of a policy change by the United States Department of Justice (DoJ). The DoJ had "changed data request rules on alerting the Internet users about agencies accessing their information." Corporate identity Corporate culture Technical reference for developers and articles for various Microsoft magazines such as Microsoft Systems Journal (MSJ) are available through the Microsoft Developer Network (MSDN). MSDN also offers subscriptions for companies and individuals, and the more expensive subscriptions usually offer access to pre-release beta versions of Microsoft software. In April 2004, Microsoft launched a community site for developers and users, titled Channel 9, that provides a wiki and an Internet forum. 
Another community site that provides daily videocasts and other services, On10.net, launched on March 3, 2006. Free technical support is traditionally provided through online Usenet newsgroups (and, in the past, CompuServe), monitored by Microsoft employees; there can be several newsgroups for a single product. Helpful people can be elected by peers or Microsoft employees for Microsoft Most Valuable Professional (MVP) status, which entitles them to a sort of special social status and possibilities for awards and other benefits. Noted for its internal lexicon, the expression "eating your own dog food" is used to describe the policy of using pre-release and beta versions of products inside Microsoft in an effort to test them in "real-world" situations. This is usually shortened to just "dog food" and is used as a noun, verb, and adjective. Another bit of jargon, FYIFV or FYIV ("Fuck You, I'm [Fully] Vested"), is used by an employee to indicate they are financially independent and can avoid work anytime they wish. Microsoft is an outspoken opponent of the cap on H-1B visas, which allow companies in the U.S. to employ certain foreign workers. Bill Gates claims the cap on H-1B visas makes it difficult to hire employees for the company, stating "I'd certainly get rid of the H1B cap" in 2005. Critics of H-1B visas argue that relaxing the limits would result in increased unemployment for U.S. citizens due to H-1B workers working for lower salaries. The Human Rights Campaign Corporate Equality Index, a report of how progressive the organization deems company policies towards LGBT employees, rated Microsoft as 87% from 2002 to 2004 and as 100% from 2005 to 2010, after it allowed gender expression. In August 2018, Microsoft implemented a policy for all companies providing subcontractors to require 12 weeks of paid parental leave for each employee. This expanded on the former requirement, from 2015, of 15 days of paid vacation and sick leave each year. In 2015, Microsoft established its own parental leave policy to allow 12 weeks off for parental leave, with an additional 8 weeks for the parent who gave birth. Environment In 2011, Greenpeace released a report rating the top ten big brands in cloud computing on their sources of electricity for their data centers. At the time, data centers consumed up to 2% of all global electricity and this amount was projected to increase. Phil Radford of Greenpeace said "we are concerned that this new explosion in electricity use could lock us into old, polluting energy sources instead of the clean energy available today," and said that "Amazon, Microsoft and other leaders of the information-technology industry must embrace clean energy to power their cloud-based data centers." In 2013, Microsoft agreed to buy power generated by a Texas wind project to power one of its data centers. Microsoft is ranked in 17th place in Greenpeace's Guide to Greener Electronics (16th edition), which ranks 18 electronics manufacturers according to their policies on toxic chemicals, recycling and climate change. Microsoft's timeline for phasing out brominated flame retardants (BFRs) and phthalates in all products is 2012, but its commitment to phasing out PVC is not clear. As of January 2011, it has no products that are completely free from PVC and BFRs. Microsoft's main U.S. 
campus received a silver certification from the Leadership in Energy and Environmental Design (LEED) program in 2008, and it installed over 2,000 solar panels on top of its buildings at its Silicon Valley campus, generating approximately 15 percent of the total energy needed by the facilities in April 2005. Microsoft makes use of alternative forms of transit. It created one of the world's largest private bus systems, the "Connector", to transport people to and from the company; for on-campus transportation, the "Shuttle Connect" uses a large fleet of hybrid cars to save fuel. The company also subsidizes regional public transport, provided by Sound Transit and King County Metro, as an incentive. In February 2010, however, Microsoft took a stance against adding additional public transport and high-occupancy vehicle (HOV) lanes to State Route 520 and its floating bridge connecting Redmond to Seattle; the company did not want to delay the construction any further. Microsoft was ranked number 1 in the list of the World's Best Multinational Workplaces by the Great Place to Work Institute in 2011. In January 2020, the company pledged to remove all of the carbon dioxide it has emitted since its foundation in 1975. On October 9, 2020, Microsoft permanently allowed remote work. In January 2021, the company announced on Twitter that it would join the Climate Neutral Data Centre Pact, which engages the cloud infrastructure and data center industries to reach carbon neutrality in Europe by 2030. Headquarters The corporate headquarters, informally known as the Microsoft Redmond campus, is located at One Microsoft Way in Redmond, Washington. Microsoft initially moved onto the grounds of the campus on February 26, 1986, weeks before the company went public on March 13. The headquarters has experienced multiple expansions since its establishment. It is estimated to encompass over 8 million sq ft (750,000 m2) of office space and to house 30,000–40,000 employees. Additional offices are located in Bellevue and Issaquah, Washington (90,000 employees worldwide). The company is planning to upgrade its Mountain View, California, campus on a grand scale. The company has occupied this campus since 1981. In 2016, the company bought the campus, with plans to renovate and expand it by 25%. Microsoft operates an East Coast headquarters in Charlotte, North Carolina. Flagship stores On October 26, 2015, the company opened its retail location on Fifth Avenue in New York City. The location features a five-story glass storefront and is 22,270 square feet. According to company executives, Microsoft had been on the lookout for a flagship location since 2009. The company's retail locations are part of a greater strategy to help build a connection with its consumers. The opening of the store coincided with the launch of the Surface Book and Surface Pro 4. On November 12, 2015, Microsoft opened a second flagship store, located in Sydney's Pitt Street Mall. Logo Microsoft adopted the so-called "Pac-Man Logo," designed by Scott Baker, in 1987. Baker stated "The new logo, in Helvetica italic typeface, has a slash between the o and s to emphasize the "soft" part of the name and convey motion and speed." Dave Norris ran an internal joke campaign to save the old logo, which was green, in all uppercase, and featured a fanciful letter O, nicknamed the blibbet, but it was discarded. Microsoft's logo with the tagline "Your potential. Our passion."—below the main corporate name—is based on a slogan Microsoft used in 2008. 
In 2002, the company started using the logo in the United States and eventually started a television campaign with the slogan, changed from the previous tagline of "Where do you want to go today?" During the private MGX (Microsoft Global Exchange) conference in 2010, Microsoft unveiled the company's next tagline, "Be What's Next." It also had a slogan/tagline "Making it all make sense." On August 23, 2012, Microsoft unveiled a new corporate logo at the opening of its 23rd Microsoft store in Boston, indicating the company's shift of focus from the classic style to the tile-centric modern interface, which it would use on the Windows Phone platform, Xbox 360, Windows 8 and the then-upcoming Office suites. The new logo also includes four squares with the colors of the then-current Windows logo, which have been used to represent Microsoft's four major products: Windows (blue), Office (red), Xbox (green) and Bing (yellow). The logo resembles the opening of one of the commercials for Windows 95. Sponsorship The company was the official jersey sponsor of Finland's national basketball team at EuroBasket 2015. The company was a major sponsor of the Toyota Gazoo Racing WRT (2017–2020). The company was a sponsor of the Renault F1 Team (2016–2020). Philanthropy During the COVID-19 pandemic, Microsoft's president, Brad Smith, announced that an initial batch of supplies, including 15,000 protective goggles, infrared thermometers, medical caps, and protective suits, was donated to Seattle, with further aid to come soon. Criticism Criticism of Microsoft has followed various aspects of its products and business practices. Frequently criticized are the ease of use, robustness, and security of the company's software. The company has also been criticized for the use of permatemp employees (employees employed for years as "temporary," and therefore without medical benefits) and for the use of forced retention tactics, meaning that employees would be sued if they tried to leave. Historically, Microsoft has also been accused of overworking employees, in many cases, leading to burnout within just a few years of joining the company. The company is often referred to as a "Velvet Sweatshop", a term which originated in a 1989 Seattle Times article, and later became used to describe the company by some of Microsoft's own employees. This characterization is derived from the perception that Microsoft provides nearly everything for its employees in a convenient place, but in turn overworks them to a point where it would be bad for their (possibly long-term) health. "Embrace, extend, and extinguish" (EEE), also known as "embrace, extend, and exterminate", is a phrase that the U.S. Department of Justice found was used internally by Microsoft to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences to strongly disadvantage competitors. Microsoft is frequently accused of using anticompetitive tactics and abusing its monopolistic power. People who use its products and services often end up becoming dependent on them, a process known as vendor lock-in. Microsoft was the first company to participate in the PRISM surveillance program, according to leaked NSA documents obtained by The Guardian and The Washington Post in June 2013, and acknowledged by government officials following the leak. 
The program authorizes the government to secretly access data of non-US citizens hosted by American companies without a warrant. Microsoft has denied participation in such a program. Jesse Jackson believes Microsoft should hire more minorities and women. Jackson has urged other companies to diversify their workforce. He believes that Microsoft made some progress when it appointed two women to its board of directors in 2015. Licensing arrangements for service providers The Microsoft Services Provider License Agreement, or SPLA, is a mechanism by which service providers and independent software vendors (ISVs), who license Microsoft products on a monthly basis, are able to provide software services and hosting services to end-users. The SPLA can be customized to suit the solution being offered and the customers using it. See also List of Microsoft software List of Microsoft hardware List of investments by Microsoft Corporation List of mergers and acquisitions by Microsoft Microsoft engineering groups Microsoft Enterprise Agreement References External links 1975 establishments in New Mexico 1980s initial public offerings American brands American companies established in 1975 Business software companies Cloud computing providers Companies based in Redmond, Washington Companies in the Dow Jones Industrial Average Companies in the NASDAQ-100 Companies in the PRISM network Companies listed on the Nasdaq Computer companies established in 1975 Computer hardware companies CRM software companies Electronics companies established in 1975 Electronics companies of the United States ERP software companies Mobile phone manufacturers Multinational companies headquartered in the United States Portmanteaus Software companies based in Washington (state) Software companies established in 1975 Software companies of the United States Supply chain software companies Technology companies established in 1975 Technology companies of the United States Web service providers
Operating System (OS)
616
Windows Registry The Windows Registry is a hierarchical database that stores low-level settings for the Microsoft Windows operating system and for applications that opt to use the registry. The kernel, device drivers, services, Security Accounts Manager, and user interfaces can all use the registry. The registry also allows access to counters for profiling system performance. In other words, the registry contains information, settings, options, and other values for programs and hardware installed on all versions of Microsoft Windows operating systems. For example, when a program is installed, a new subkey containing settings such as the program's location, its version, and how to start it is added to the Windows Registry. When introduced with Windows 3.1, the Windows Registry primarily stored configuration information for COM-based components. Windows 95 and Windows NT extended its use to rationalize and centralize the information in the profusion of INI files, which held the configurations for individual programs, and were stored at various locations. It is not a requirement for Windows applications to use the Windows Registry. For example, .NET Framework applications use XML files for configuration, while portable applications usually keep their configuration files with their executables. Rationale Prior to the Windows Registry, .INI files stored each program's settings as a text file or binary file, often located in a shared location that did not provide user-specific settings in a multi-user scenario. By contrast, the Windows Registry stores all application settings in one logical repository (but a number of discrete files) and in a standardized form. According to Microsoft, this offers several advantages over .INI files. Since file parsing is done much more efficiently with a binary format, it may be read from or written to more quickly than a text INI file. Furthermore, strongly typed data can be stored in the registry, as opposed to the text information stored in .INI files. This is a benefit when editing keys manually using regedit.exe, the built-in Windows Registry Editor. Because user-based registry settings are loaded from a user-specific path rather than from a read-only system location, the registry allows multiple users to share the same machine, and also allows programs to work for less privileged users. Backup and restoration are also simplified, as the registry can be accessed over a network connection for remote management/support, including from scripts, using the standard set of APIs, as long as the Remote Registry service is running and firewall rules permit this. Because the registry is a database, it offers improved system integrity with features such as atomic updates. If two processes attempt to update the same registry value at the same time, one process's change will precede the other's and the overall consistency of the data will be maintained. Where changes are made to .INI files, such race conditions can result in inconsistent data that does not match either attempted update. Windows Vista and later operating systems provide transactional updates to the registry by means of the Kernel Transaction Manager, extending the atomicity guarantees across multiple key and/or value changes, with traditional commit–abort semantics. (Note however that NTFS provides such support for the file system as well, so the same guarantees could, in theory, be obtained with traditional configuration files.) 
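To illustrate the transactional updates described above, here is a minimal C sketch using the Kernel Transaction Manager APIs (CreateTransaction, RegCreateKeyTransacted and CommitTransaction, available on Windows Vista and later). The vendor/application key path and value names are hypothetical, chosen only for the example:

#include <windows.h>
#include <ktmw32.h>   /* CreateTransaction, CommitTransaction; link with KtmW32.lib */
#include <stdio.h>

int main(void)
{
    /* Create a KTM transaction that will group several registry writes. */
    HANDLE tx = CreateTransaction(NULL, NULL, 0, 0, 0, 0, NULL);
    if (tx == INVALID_HANDLE_VALUE)
        return 1;

    HKEY key;
    /* Create (or open) a key under HKEY_CURRENT_USER inside the transaction. */
    if (RegCreateKeyTransactedW(HKEY_CURRENT_USER,
                                L"Software\\ExampleVendor\\ExampleApp",
                                0, NULL, 0, KEY_WRITE, NULL,
                                &key, NULL, tx, NULL) == ERROR_SUCCESS)
    {
        DWORD v = 42;
        /* Both values are committed together, or not at all. */
        RegSetValueExW(key, L"SettingA", 0, REG_DWORD, (const BYTE *)&v, sizeof v);
        RegSetValueExW(key, L"SettingB", 0, REG_DWORD, (const BYTE *)&v, sizeof v);
        RegCloseKey(key);

        if (CommitTransaction(tx))
            printf("Both values were committed atomically.\n");
        /* If the process dies before CommitTransaction, neither write appears. */
    }
    CloseHandle(tx);
    return 0;
}

Ordinary, non-transacted RegSetValueEx calls are still individually atomic; the transaction only adds all-or-nothing semantics across several changes.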
Structure Keys and values The registry contains two basic elements: keys and values. Registry keys are container objects similar to folders. Registry values are non-container objects similar to files. Keys may contain values and subkeys. Keys are referenced with a syntax similar to Windows' path names, using backslashes to indicate levels of hierarchy. Key names are case-insensitive and may not contain backslashes. The hierarchy of registry keys can only be accessed from a known root key handle (which is anonymous but whose effective value is a constant numeric handle) that is mapped to the content of a registry key preloaded by the kernel from a stored "hive", or to the content of a subkey within another root key, or mapped to a registered service or DLL that provides access to its contained subkeys and values. For example, HKEY_LOCAL_MACHINE\Software\Microsoft\Windows refers to the subkey "Windows" of the subkey "Microsoft" of the subkey "Software" of the HKEY_LOCAL_MACHINE root key. There are seven predefined root keys, traditionally named according to their constant handles defined in the Win32 API, or by synonymous abbreviations (depending on applications): HKEY_LOCAL_MACHINE or HKLM HKEY_CURRENT_CONFIG or HKCC HKEY_CLASSES_ROOT or HKCR HKEY_CURRENT_USER or HKCU HKEY_USERS or HKU HKEY_PERFORMANCE_DATA (only in Windows NT, but invisible in the Windows Registry Editor) HKEY_DYN_DATA (only in Windows 9x, and visible in the Windows Registry Editor) Like other files and services in Windows, all registry keys may be restricted by access control lists (ACLs), depending on user privileges, or on security tokens acquired by applications, or on system security policies enforced by the system (these restrictions may be predefined by the system itself, and configured by local system administrators or by domain administrators). Different users, programs, services or remote systems may only see some parts of the hierarchy or distinct hierarchies from the same root keys. Registry values are name/data pairs stored within keys. Registry values are referenced separately from registry keys. Each registry value stored in a registry key has a unique name whose letter case is not significant. The Windows API functions that query and manipulate registry values take value names separately from the key path and/or handle that identifies the parent key. Registry values may contain backslashes in their names, but doing so makes them difficult to distinguish from their key paths when using some legacy Windows Registry API functions (whose usage is deprecated in Win32). The terminology is somewhat misleading, as each registry key is similar to an associative array, where standard terminology would refer to the name part of each registry value as a "key". The terms are a holdout from the 16-bit registry in Windows 3, in which registry keys could not contain arbitrary name/data pairs, but rather contained only one unnamed value (which had to be a string). In this sense, the Windows 3 registry was like a single associative array, in which the keys (in the sense of both 'registry key' and 'associative array key') formed a hierarchy, and the registry values were all strings. When the 32-bit registry was created, so was the additional capability of creating multiple named values per key, and the meanings of the names were somewhat distorted. For compatibility with the previous behavior, each registry key may have a "default" value, whose name is the empty string. 
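As a concrete illustration of the key/value terminology, the following minimal C sketch (error handling abbreviated) opens a key by its backslash-separated path and reads both a named value and the key's unnamed "default" value. The "ProductName" value under the path shown is present on typical Windows NT-family systems, but the example makes no other assumptions:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    /* Open a subkey by path; key names are case-insensitive. */
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                      L"Software\\Microsoft\\Windows NT\\CurrentVersion",
                      0, KEY_READ, &key) != ERROR_SUCCESS)
        return 1;

    wchar_t buf[256];
    DWORD size = sizeof buf, type = 0;
    /* Read a named value; a robust program would also guarantee NUL termination. */
    if (RegQueryValueExW(key, L"ProductName", NULL, &type,
                         (LPBYTE)buf, &size) == ERROR_SUCCESS && type == REG_SZ)
        wprintf(L"ProductName: %s\n", buf);

    /* Passing NULL (or an empty string) as the value name queries the key's
       unnamed "default" value, which may be unset. */
    size = sizeof buf;
    if (RegQueryValueExW(key, NULL, NULL, &type,
                         (LPBYTE)buf, &size) == ERROR_SUCCESS)
        wprintf(L"The default value is set.\n");

    RegCloseKey(key);
    return 0;
}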
Each value can store arbitrary data with variable length and encoding, but which is associated with a symbolic type (defined as a numeric constant) defining how to parse this data. The standard types are REG_NONE (type 0, no defined type), REG_SZ (1, a null-terminated string), REG_EXPAND_SZ (2, an "expandable" string that may contain references to environment variables), REG_BINARY (3, arbitrary binary data), REG_DWORD (4, a 32-bit integer in little-endian byte order), REG_DWORD_BIG_ENDIAN (5, a 32-bit integer in big-endian byte order), REG_LINK (6, a symbolic link to another registry key), REG_MULTI_SZ (7, a list of null-terminated strings), REG_RESOURCE_LIST (8), REG_FULL_RESOURCE_DESCRIPTOR (9), REG_RESOURCE_REQUIREMENTS_LIST (10), and REG_QWORD (11, a 64-bit integer in little-endian byte order). Root keys The keys at the root level of the hierarchical database are generally named by their Windows API definitions, which all begin "HKEY". They are frequently abbreviated to a three- or four-letter short name starting with "HK" (e.g. HKCU and HKLM). Technically, they are predefined handles (with known constant values) to specific keys that are either maintained in memory, or stored in hive files stored in the local filesystem and loaded by the system kernel at boot time and then shared (with various access rights) between all processes running on the local system, or loaded and mapped in all processes started in a user session when the user logs on the system. The HKEY_LOCAL_MACHINE (local machine-specific configuration data) and HKEY_CURRENT_USER (user-specific configuration data) nodes have a similar structure to each other; user applications typically look up their settings by first checking for them in "HKEY_CURRENT_USER\Software\Vendor's name\Application's name\Version\Setting name", and if the setting is not found, look instead in the same location under the HKEY_LOCAL_MACHINE key. However, the converse may apply for administrator-enforced policy settings, where HKLM may take precedence over HKCU. The Windows Logo Program has specific requirements for where different types of user data may be stored, and requires that the concept of least privilege be followed so that administrator-level access is not required to use an application. HKEY_LOCAL_MACHINE (HKLM) Abbreviated HKLM, HKEY_LOCAL_MACHINE stores settings that are specific to the local computer. The key located by HKLM is actually not stored on disk, but maintained in memory by the system kernel in order to map all the other subkeys. Applications cannot create any additional subkeys. On Windows NT, this key contains four subkeys, "SAM", "SECURITY", "SYSTEM", and "SOFTWARE", that are loaded at boot time from their respective files located in the %SystemRoot%\System32\config folder. A fifth subkey, "HARDWARE", is volatile and is created dynamically, and as such is not stored in a file (it exposes a view of all the currently detected Plug-and-Play devices). On Windows Vista and above, a sixth and seventh subkey, "COMPONENTS" and "BCD", are mapped in memory by the kernel on-demand and loaded from %SystemRoot%\system32\config\COMPONENTS or from boot configuration data, \boot\BCD on the system partition. The "HKLM\SAM" key usually appears as empty for most users (unless they are granted access by administrators of the local system or administrators of domains managing the local system). It is used to reference all "Security Accounts Manager" (SAM) databases for all domains into which the local system has been administratively authorized or configured (including the local domain of the running system, whose SAM database is stored in a subkey also named "SAM": other subkeys will be created as needed, one for each supplementary domain). 
Each SAM database contains all builtin accounts (mostly group aliases) and configured accounts (users, groups and their aliases, including guest accounts and administrator accounts) created and configured on the respective domain. For each account in that domain, it notably contains the user name which can be used to log on to that domain, the internal unique user identifier in the domain, a cryptographic hash of each user's password for each enabled authentication protocol, the location of storage of their user registry hive, various status flags (for example if the account can be enumerated and be visible in the logon prompt screen), and the list of domains (including the local domain) into which the account was configured. The "HKLM\SECURITY" key usually appears empty for most users (unless they are granted access by users with administrative privileges) and is linked to the Security database of the domain into which the current user is logged on (if the user is logged on to the local system domain, this key will be linked to the registry hive stored by the local machine and managed by local system administrators or by the builtin "System" account and Windows installers). The kernel will access it to read and enforce the security policy applicable to the current user and all applications or operations executed by this user. It also contains a "SAM" subkey which is dynamically linked to the SAM database of the domain onto which the current user is logged on. The "HKLM\SYSTEM" key is normally only writable by users with administrative privileges on the local system. It contains information about the Windows system setup, data for the secure random number generator (RNG), the list of currently mounted devices containing a filesystem, several numbered "HKLM\SYSTEM\Control Sets" containing alternative configurations for system hardware drivers and services running on the local system (including the currently used one and a backup), a "HKLM\SYSTEM\Select" subkey containing the status of these Control Sets, and a "HKLM\SYSTEM\CurrentControlSet" which is dynamically linked at boot time to the Control Set which is currently used on the local system. Each configured Control Set contains: an "Enum" subkey enumerating all known Plug-and-Play devices and associating them with installed system drivers (and storing the device-specific configurations of these drivers), a "Services" subkey listing all installed system drivers (with non device-specific configuration, and the enumeration of devices for which they are instantiated) and all programs running as services (how and when they can be automatically started), a "Control" subkey organizing the various hardware drivers and programs running as services and all other system-wide configuration, a "Hardware Profiles" subkey enumerating the various profiles that have been tuned (each one with "System" or "Software" settings used to modify the default profile, either in system drivers and services or in the applications) as well as the "Hardware Profiles\Current" subkey which is dynamically linked to one of these profiles. The "HKLM\SOFTWARE" subkey contains software and Windows settings (in the default hardware profile). It is mostly modified by application and system installers. 
It is organized by software vendor (with a subkey for each), but also contains a "Windows" subkey for some settings of the Windows user interface, a "Classes" subkey containing all registered associations from file extensions, MIME types, Object Classes IDs and interfaces IDs (for OLE, COM/DCOM and ActiveX), to the installed applications or DLLs that may be handling these types on the local machine (however these associations are configurable for each user, see below), and a "Policies" subkey (also organized by vendor) for enforcing general usage policies on applications and system services (including the central certificates store used for authenticating, authorizing or disallowing remote systems or services running outside the local network domain). The "HKLM\SOFTWARE\Wow6432Node" key is used by 32-bit applications on a 64-bit Windows OS, and is equivalent to but separate from "HKLM\SOFTWARE". The key path is transparently presented to 32-bit applications by WoW64 as HKLM\SOFTWARE (in a similar way that 32-bit applications see %SystemRoot%\Syswow64 as %SystemRoot%\System32). HKEY_CLASSES_ROOT (HKCR) Abbreviated HKCR, HKEY_CLASSES_ROOT contains information about registered applications, such as file associations and OLE Object Class IDs, tying them to the applications used to handle these items. On Windows 2000 and above, HKCR is a compilation of user-based HKCU\Software\Classes and machine-based HKLM\Software\Classes. If a given value exists in both of the subkeys above, the one in HKCU\Software\Classes takes precedence. The design allows for either machine- or user-specific registration of COM objects. HKEY_USERS (HKU) Abbreviated HKU, HKEY_USERS contains subkeys corresponding to the HKEY_CURRENT_USER keys for each user profile actively loaded on the machine, though user hives are usually only loaded for currently logged-in users. HKEY_CURRENT_USER (HKCU) Abbreviated HKCU, HKEY_CURRENT_USER stores settings that are specific to the currently logged-in user. The HKEY_CURRENT_USER key is a link to the subkey of HKEY_USERS that corresponds to the user; the same information is accessible in both locations. The specific subkey referenced is "(HKU)\(SID)\..." where (SID) corresponds to the Windows SID; if the "(HKCU)" key has the following suffix "(HKCU)\Software\Classes\..." then it corresponds to "(HKU)\(SID)_CLASSES\...", i.e. the string "_CLASSES" is appended to the (SID). On Windows NT systems, each user's settings are stored in their own files called NTUSER.DAT and USRCLASS.DAT inside their own Documents and Settings subfolder (or their own Users subfolder in Windows Vista and above). Settings in this hive follow users with a roaming profile from machine to machine. HKEY_PERFORMANCE_DATA This key provides runtime access to performance data provided by either the NT kernel itself or by running system drivers, programs and services that provide performance data. This key is not stored in any hive and not displayed in the Registry Editor, but it is visible through the registry functions in the Windows API, or in a simplified view via the Performance tab of the Task Manager (only for a few performance data on the local system) or via more advanced control panels (such as the Performance Monitor or the Performance Analyzer, which allow collecting and logging these data, including from remote systems). HKEY_DYN_DATA This key is used only on Windows 95, Windows 98 and Windows ME. 
It contains information about hardware devices, including Plug and Play and network performance statistics. The information in this hive is also not stored on the hard drive. The Plug and Play information is gathered and configured at startup and is stored in memory. Hives Even though the registry presents itself as an integrated hierarchical database, branches of the registry are actually stored in a number of disk files called hives. (The word hive constitutes an in-joke.) Some hives are volatile and are not stored on disk at all. An example of this is the hive of the branch starting at HKLM\HARDWARE. This hive records information about system hardware and is created each time the system boots and performs hardware detection. Individual settings for users on a system are stored in a hive (disk file) per user. During user login, the system loads the user hive under the HKEY_USERS key and sets the HKCU (HKEY_CURRENT_USER) symbolic reference to point to the current user. This allows applications to store/retrieve settings for the current user implicitly under the HKCU key. Not all hives are loaded at any one time. At boot time, only a minimal set of hives are loaded, and after that, hives are loaded as the operating system initializes and as users log in or whenever a hive is explicitly loaded by an application. File locations The registry is physically stored in several files, which are generally obfuscated from the user-mode APIs used to manipulate the data inside the registry. Depending upon the version of Windows, there will be different files and different locations for these files, but they are all on the local machine. The location for system registry files in Windows NT is %SystemRoot%\System32\Config; the user-specific HKEY_CURRENT_USER user registry hive is stored in Ntuser.dat inside the user profile. There is one of these per user; if a user has a roaming profile, then this file will be copied to and from a server at logout and login respectively. A second user-specific registry file named UsrClass.dat contains COM registry entries and does not roam by default. Windows NT Windows NT systems store the registry in a binary file format which can be exported, loaded and unloaded by the Registry Editor in these operating systems. The following system registry files are stored in %SystemRoot%\System32\Config\: Sam – HKEY_LOCAL_MACHINE\SAM Security – HKEY_LOCAL_MACHINE\SECURITY Software – HKEY_LOCAL_MACHINE\SOFTWARE System – HKEY_LOCAL_MACHINE\SYSTEM Default – HKEY_USERS\.DEFAULT Userdiff – Not associated with a hive. Used only when upgrading operating systems. The following file is stored in each user's profile folder: %USERPROFILE%\Ntuser.dat – HKEY_USERS\<User SID> (linked to by HKEY_CURRENT_USER) For Windows 2000, Server 2003 and Windows XP, the following additional user-specific file is used for file associations and COM information: %USERPROFILE%\Local Settings\Application Data\Microsoft\Windows\Usrclass.dat (path is localized) – HKEY_USERS\<User SID>_Classes (HKEY_CURRENT_USER\Software\Classes) For Windows Vista and later, the path was changed to: %USERPROFILE%\AppData\Local\Microsoft\Windows\Usrclass.dat (path is not localized) alias %LocalAppData%\Microsoft\Windows\Usrclass.dat – HKEY_USERS\<User SID>_Classes (HKEY_CURRENT_USER\Software\Classes) Windows 2000 keeps an alternate copy of the registry hives (.ALT) and attempts to switch to it when corruption is detected. 
Windows XP and Windows Server 2003 do not maintain a System.alt hive because NTLDR on those versions of Windows can process the System.log file to bring up to date a System hive that has become inconsistent during a shutdown or crash. In addition, the %SystemRoot%\Repair folder contains a copy of the system's registry hives that were created after installation and the first successful startup of Windows. Each registry data file has an associated file with a ".log" extension that acts as a transaction log that is used to ensure that any interrupted updates can be completed upon next startup. Internally, Registry files are split into 4 kB "bins" that contain collections of "cells". Windows 9x The registry files are stored in the %WINDIR% directory under the names USER.DAT and SYSTEM.DAT with the addition of CLASSES.DAT in Windows ME. Also, each user profile (if profiles are enabled) has its own USER.DAT file which is located in the user's profile directory in %WINDIR%\Profiles\<Username>\. Windows 3.11 The only registry file is called REG.DAT and it is stored in the %WINDIR% directory. Windows 10 Mobile Note: To access the registry files, the phone needs to be set into a special mode using either WpInternals (which puts the mobile device into flash mode) or InterOp Tools (which mounts the MainOS partition with MTP). If either method works, the device registry files can be found in the following location: {Phone}\EFIESP\Windows\System32\config Note: InterOp Tools also includes a registry editor. Editing Registry editors The registry contains important configuration information for the operating system, for installed applications as well as individual settings for each user and application. A careless change to the operating system configuration in the registry could cause irreversible damage, so it is usually only installer programs which perform changes to the registry database during installation/configuration and removal. If a user wants to edit the registry manually, Microsoft recommends that a backup of the registry be performed before the change. When a program is removed from Control Panel, it may not be completely removed and, in case of errors or glitches caused by references to missing programs, the user might have to manually check inside directories such as Program Files. After this, the user might need to manually remove any reference to the uninstalled program in the registry. This is usually done by using RegEdit.exe. Editing the registry is sometimes necessary when working around Windows-specific issues, e.g. problems when logging onto a domain can be resolved by editing the registry. The Windows Registry can be edited manually using programs such as RegEdit.exe, although these tools do not expose some of the registry's metadata such as the last modified date. The registry editor for the 3.1/95 series of operating systems is RegEdit.exe and for Windows NT it is RegEdt32.exe; the functionalities are merged in Windows XP. Optional and/or third-party tools similar to RegEdit.exe are available for many Windows CE versions. 
Registry Editor allows users to perform the following functions: Creating, manipulating, renaming and deleting registry keys, subkeys, values and value data Importing and exporting .REG files, exporting data in the binary hive format Loading, manipulating and unloading registry hive format files (Windows NT systems only) Setting permissions based on ACLs (Windows NT systems only) Bookmarking user-selected registry keys as Favorites Finding particular strings in key names, value names and value data Remotely editing the registry on another networked computer .REG files .REG files (also known as Registration entries) are text-based human-readable files for exporting and importing portions of the registry using an INI-based syntax. On Windows 2000 and later, they contain the string Windows Registry Editor Version 5.00 at the beginning and are Unicode-based. On Windows 9x and NT 4.0 systems, they contain the string REGEDIT4 and are ANSI-based. Windows 9x format .REG files are compatible with Windows 2000 and later. The Registry Editor on Windows on these systems also supports exporting .REG files in Windows 9x/NT format. Data is stored in .REG files using the following syntax: [<Hive name>\<Key name>\<Subkey name>] "Value name"=<Value type>:<Value data> The Default Value of a key can be edited by using "@" instead of "Value Name": [<Hive name>\<Key name>\<Subkey name>] @=<Value type>:<Value data> String values do not require a <Value type> (see example), but backslashes ('\') need to be written as a double-backslash ('\\'), and quotes ('"') as backslash-quote ('\"'). For example, to add the values "Value A", "Value B", "Value C", "Value D", "Value E", "Value F", "Value G", "Value H", "Value I", "Value J", "Value K", "Value L", and "Value M" to the HKLM\SOFTWARE\Foobar key: Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\SOFTWARE\Foobar] "Value A"="<String value data with escape characters>" "Value B"=hex:<Binary data (as comma-delimited list of hexadecimal values)> "Value C"=dword:<DWORD value integer> "Value D"=hex(0):<REG_NONE (as comma-delimited list of hexadecimal values)> "Value E"=hex(1):<REG_SZ (as comma-delimited list of hexadecimal values representing a UTF-16LE NUL-terminated string)> "Value F"=hex(2):<Expandable string value data (as comma-delimited list of hexadecimal values representing a UTF-16LE NUL-terminated string)> "Value G"=hex(3):<Binary data (as comma-delimited list of hexadecimal values)> ; equal to "Value B" "Value H"=hex(4):<DWORD value (as comma-delimited list of 4 hexadecimal values, in little endian byte order)> "Value I"=hex(5):<DWORD value (as comma-delimited list of 4 hexadecimal values, in big endian byte order)> "Value J"=hex(7):<Multi-string value data (as comma-delimited list of hexadecimal values representing UTF-16LE NUL-terminated strings)> "Value K"=hex(8):<REG_RESOURCE_LIST (as comma-delimited list of hexadecimal values)> "Value L"=hex(a):<REG_RESOURCE_REQUIREMENTS_LIST (as comma-delimited list of hexadecimal values)> "Value M"=hex(b):<QWORD value (as comma-delimited list of 8 hexadecimal values, in little endian byte order)> Data from .REG files can be added/merged with the registry by double-clicking these files or using the /s switch in the command line. REG files can also be used to remove registry data. To remove a key (and all subkeys, values and data), the key name must be preceded by a minus sign ("-"). 
For example, to remove the HKLM\SOFTWARE\Foobar key (and all subkeys, values and data): [-HKEY_LOCAL_MACHINE\SOFTWARE\Foobar] To remove a value (and its data), the values to be removed must have a minus sign ("-") after the equal sign ("="). For example, to remove only the "Value A" and "Value B" values (and their data) from the HKLM\SOFTWARE\Foobar key: [HKEY_LOCAL_MACHINE\SOFTWARE\Foobar] "Value A"=- "Value B"=- To remove only the Default value of the key HKLM\SOFTWARE\Foobar (and its data): [HKEY_LOCAL_MACHINE\SOFTWARE\Foobar] @=- Lines beginning with a semicolon are considered comments: ; This is a comment. This can be placed in any part of a .reg file [HKEY_LOCAL_MACHINE\SOFTWARE\Foobar] "Value"="Example string" Group policies Windows group policies can change registry keys for a number of machines or individual users based on policies. When a policy first takes effect for a machine or for an individual user of a machine, the registry settings specified as part of the policy are applied to the machine or user settings. Windows will also look for updated policies and apply them periodically, typically every 90 minutes. Through its scope a policy defines which machines and/or users the policy is to be applied to. Whether a machine or user is within the scope of a policy or not is defined by a set of rules which can filter on the location of the machine or user account in the organizational directory, on specific users or machine accounts, or on security groups. More advanced rules can be set up using Windows Management Instrumentation expressions. Such rules can filter on properties such as computer vendor name, CPU architecture, installed software, or networks connected to. For instance, the administrator can create a policy with one set of registry settings for machines in the accounting department and a policy with another (lock-down) set of registry settings for kiosk terminals in the visitors area. When a machine is moved from one scope to another (e.g. changing its name or moving it to another organizational unit), the correct policy is automatically applied. When a policy is changed it is automatically re-applied to all machines currently in its scope. The policy is edited through a number of administrative templates which provide a user interface for picking and changing settings. The set of administrative templates is extensible and software packages which support such remote administration can register their own templates. Command line editing The registry can be manipulated in a number of ways from the command line. The Reg.exe and RegIni.exe utility tools are included in Windows XP and later versions of Windows. Alternative locations for legacy versions of Windows include the Resource Kit CDs or the original Installation CD of Windows. Also, a .REG file can be imported from the command line with the following command: RegEdit.exe /s file The /s means the file will be silently merged into the registry. If the /s parameter is omitted, the user will be asked to confirm the operation. In Windows 98, Windows 95 and at least some configurations of Windows XP, the /s switch also causes RegEdit.exe to ignore the setting in the registry that allows administrators to disable it. When using the /s switch, RegEdit.exe does not return an appropriate return code if the operation fails, unlike Reg.exe which does. 
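For instance, a batch script that needs a reliable success/failure indication can therefore prefer Reg.exe over RegEdit.exe /s and test the exit code; the key and value below are hypothetical:

Reg.exe ADD HKCU\Software\Foobar /v "Value C" /t REG_DWORD /d 42 /f
IF ERRORLEVEL 1 ECHO The registry update failed.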
RegEdit.exe /e file exports the whole registry in V5 format to a UNICODE .REG file, while any of RegEdit.exe /e file HKEY_CLASSES_ROOT[\<key>] RegEdit.exe /e file HKEY_CURRENT_CONFIG[\<key>] RegEdit.exe /e file HKEY_CURRENT_USER[\<key>] RegEdit.exe /e file HKEY_LOCAL_MACHINE[\<key>] RegEdit.exe /e file HKEY_USERS[\<key>] export the specified (sub)key (which has to be enclosed in quotes if it contains spaces) only. RegEdit.exe /a file exports the whole registry in V4 format to an ANSI .REG file. RegEdit.exe /a file <key> exports the specified (sub)key (which has to be enclosed in quotes if it contains spaces) only. It is also possible to use Reg.exe. Here is a sample to display the value of the registry value Version: Reg.exe QUERY HKLM\Software\Microsoft\ResKit /v Version Other command-line options include using VBScript or JScript together with CScript, WMI or WMIC.exe, and Windows PowerShell. Registry permissions can be manipulated through the command line using RegIni.exe and the SubInACL.exe tool. For example, the permissions on the HKEY_LOCAL_MACHINE\SOFTWARE key can be displayed using: SubInACL.exe /keyreg HKEY_LOCAL_MACHINE\SOFTWARE /display PowerShell commands and scripts Windows PowerShell comes with a registry provider which presents the registry as a location type similar to the file system. The same commands used to manipulate files and directories in the file system can be used to manipulate keys and values of the registry. Also like the file system, PowerShell uses the concept of a current location which defines the context on which commands by default operate. The Get-ChildItem cmdlet (also available through the aliases ls, dir or gci) retrieves the child keys of the current location. By using the Set-Location (or the alias cd) command the user can change the current location to another key of the registry. Commands which rename items, remove items, create new items or set content of items or properties can be used to rename keys, remove keys or entire sub-trees or change values. Through PowerShell script files, an administrator can prepare scripts which, when executed, make changes to the registry. Such scripts can be distributed to administrators who can execute them on individual machines. The PowerShell Registry provider supports transactions, i.e. multiple changes to the registry can be bundled into a single atomic transaction. An atomic transaction ensures that either all of the changes are committed to the database, or if the script fails, none of the changes are committed to the database. Programs or scripts The registry can be edited through the APIs of the Advanced Windows 32 Base API Library (advapi32.dll). Many programming languages offer built-in runtime library functions or classes that wrap the underlying Windows APIs and thereby enable programs to store settings in the registry (e.g. Microsoft.Win32.Registry in VB.NET and C#, or TRegistry in Delphi and Free Pascal). COM-enabled applications like Visual Basic 6 can use the WSH WScript.Shell object. Another way is to use the Windows Resource Kit tool Reg.exe by executing it from code, although this is considered poor programming practice. Similarly, scripting languages such as Perl (with Win32::TieRegistry), Python (with winreg), TCL (which comes bundled with the registry package), Windows PowerShell and Windows Scripting Host also enable registry editing from scripts. 
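As a sketch of what such programmatic access looks like at the advapi32.dll level, the following C fragment enumerates the immediate subkeys of HKEY_CURRENT_USER\Software; the higher-level wrappers mentioned above typically boil down to calls of this kind:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    if (RegOpenKeyExW(HKEY_CURRENT_USER, L"Software", 0,
                      KEY_ENUMERATE_SUB_KEYS, &key) != ERROR_SUCCESS)
        return 1;

    wchar_t name[256];
    /* Call RegEnumKeyExW with increasing indices until it stops returning
       ERROR_SUCCESS (ERROR_NO_MORE_ITEMS signals the end of the list). */
    for (DWORD i = 0; ; i++) {
        DWORD len = 256;   /* size in characters; must be reset each iteration */
        if (RegEnumKeyExW(key, i, name, &len,
                          NULL, NULL, NULL, NULL) != ERROR_SUCCESS)
            break;
        wprintf(L"%s\n", name);
    }
    RegCloseKey(key);
    return 0;
}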
Offline editing The offreg.dll available from the Windows Driver Kit offers a set of APIs for the creation and manipulation of registry hives that are not currently loaded, similar to those provided by advapi32.dll. It is also possible to edit the registry (hives) of an offline system from Windows PE or Linux (in the latter case using open source tools). COM self-registration Prior to the introduction of registration-free COM, developers were encouraged to add initialization code to in-process and out-of-process binaries to perform the registry configuration required for that object to work. For in-process binaries such as .DLL and .OCX files, the modules typically exported a function called DllInstall() that could be called by installation programs or invoked manually with utilities like Regsvr32.exe; out-of-process binaries typically support the command-line arguments /Regserver and /Unregserver that created or deleted the required registry settings. COM applications that break because of DLL Hell issues can commonly be repaired with RegSvr32.exe or the /RegServer switch without having to re-invoke installation programs. Advanced functionality Windows exposes APIs that allow user-mode applications to register to receive a notification event if a particular registry key is changed. APIs are also available to allow kernel-mode applications to filter and modify registry calls made by other applications. Windows also supports remote access to the registry of another computer via the RegConnectRegistry function if the Remote Registry service is running, correctly configured and its network traffic is not firewalled. Security Each key in the registry of Windows NT versions can have an associated security descriptor. The security descriptor contains an access control list (ACL) that describes which user groups or individual users are granted or denied access permissions. The set of registry permissions includes 10 rights/permissions which can be explicitly allowed or denied to a user or a group of users. As with other securable objects in the operating system, individual access control entries (ACE) on the security descriptor can be explicit or inherited from a parent object. Windows Resource Protection is a feature of Windows Vista and later versions of Windows that uses security to deny Administrators and the system WRITE access to some sensitive keys to protect the integrity of the system from malware and accidental modification. Special ACEs on the security descriptor can also implement mandatory integrity control for the registry key and subkeys. A process running at a lower integrity level cannot write, change or delete a registry key/value, even if the account of the process has otherwise been granted access through the ACL. For instance, Internet Explorer running in Protected Mode can read medium and low integrity registry keys/values of the currently logged on user, but it can only modify low integrity keys. Security aside, registry keys may also be impossible to delete or edit for other reasons. Registry keys containing NUL characters cannot be deleted with standard registry editors and require a special utility for deletion, such as RegDelNull. Backups and recovery Different editions of Windows have supported a number of different methods to back up and restore the registry over the years, some of which are now deprecated: System Restore can back up the registry and restore it as long as Windows is bootable, or from the Windows Recovery Environment (starting with Windows Vista). 
NTBackup can back up the registry as part of the System State and restore it. Automated System Recovery in Windows XP can also restore the registry. On Windows NT, the Last Known Good Configuration option in the startup menu relinks the HKLM\SYSTEM\CurrentControlSet key, which stores hardware and device driver information. Windows 98 and Windows ME include command line (Scanreg.exe) and GUI (Scanregw.exe) registry checker tools to check and fix the integrity of the registry, create up to five automatic regular backups by default and restore them manually or whenever corruption is detected. The registry checker tool backs up the registry, by default, to %Windir%\Sysbckup. Scanreg.exe can also run from MS-DOS. The Windows 95 CD-ROM included an Emergency Recovery Utility (ERU.exe) and a Configuration Backup Tool (Cfgback.exe) to back up and restore the registry. Additionally, Windows 95 backs up the registry to the files system.da0 and user.da0 on every successful boot. Windows NT 4.0 included RDISK.EXE, a utility to back up and restore the entire registry. The Windows 2000 Resource Kit contained an unsupported pair of utilities called Regback.exe and RegRest.exe for backup and recovery of the registry. Periodic automatic backups of the registry are now disabled by default on Windows 10 May 2019 Update (version 1903). Microsoft recommends System Restore be used instead. Policy Group policy Windows 2000 and later versions of Windows use Group Policy to enforce registry settings through a registry-specific client extension in the Group Policy processing engine. Policy may be applied locally to a single computer using gpedit.msc, or to multiple users and/or computers in a domain using gpmc.msc. Legacy systems With Windows 95, Windows 98, Windows ME and Windows NT 4.0, administrators can use a special file to be merged into the registry, called a policy file (POLICY.POL). The policy file allows administrators to prevent non-administrator users from changing registry settings like, for instance, the security level of Internet Explorer and the desktop background wallpaper. The policy file is primarily used in a business with a large number of computers where the business needs to be protected from rogue or careless users. The default extension for the policy file is .POL. The policy file filters the settings it enforces by user and by group (a "group" is a defined set of users). To do that, the policy file is merged into the registry, preventing users from circumventing it by simply changing back the settings. The policy file is usually distributed through a LAN, but can be placed on the local computer. The policy file is created by a free Microsoft tool with the filename poledit.exe for Windows 95/Windows 98 and with a computer management module for Windows NT. The editor requires administrative permissions to be run on systems that use permissions. The editor can also directly change the current registry settings of the local computer and if the remote registry service is installed and started on another computer, it can also change the registry on that computer. The policy editor loads the settings it can change from .ADM files, one of which is included and contains the settings the Windows shell provides. The .ADM file is plain text and supports easy localisation by allowing all the strings to be stored in one place. 
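Applications that honor such administrator-enforced settings commonly consult the Policies branches of the registry before their own preference keys. The following C sketch (using RegGetValue, available on Windows Vista and later) shows one conventional lookup order: machine policy, then user policy, then the user preference. The vendor, application and value names are hypothetical:

#include <windows.h>
#include <stdio.h>

/* Return the first match for a (hypothetical) DWORD setting, checking
   policy locations before the application's own preference key. */
static BOOL GetExampleSetting(DWORD *out)
{
    static const struct { HKEY root; const wchar_t *path; } places[] = {
        { HKEY_LOCAL_MACHINE, L"Software\\Policies\\ExampleVendor\\ExampleApp" },
        { HKEY_CURRENT_USER,  L"Software\\Policies\\ExampleVendor\\ExampleApp" },
        { HKEY_CURRENT_USER,  L"Software\\ExampleVendor\\ExampleApp" },
    };
    for (int i = 0; i < 3; i++) {
        DWORD size = sizeof *out;
        if (RegGetValueW(places[i].root, places[i].path, L"ExampleSetting",
                         RRF_RT_REG_DWORD, NULL, out, &size) == ERROR_SUCCESS)
            return TRUE;   /* the first hit wins: policies take precedence */
    }
    return FALSE;
}

int main(void)
{
    DWORD v;
    if (GetExampleSetting(&v))
        printf("ExampleSetting = %lu\n", (unsigned long)v);
    else
        printf("ExampleSetting is not configured.\n");
    return 0;
}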
Virtualization INI file virtualization Windows NT kernels support redirection of INI file-related APIs into a virtual file in a registry location such as HKEY_CURRENT_USER using a feature called "InifileMapping". This functionality was introduced to allow legacy applications written for 16-bit versions of Windows to be able to run under Windows NT platforms on which the System folder is no longer considered an appropriate location for user-specific data or configuration. Non-compliant 32-bit applications can also be redirected in this manner, even though the feature was originally intended for 16-bit applications. Registry virtualization Windows Vista introduced limited registry virtualization, whereby poorly written applications that do not respect the principle of least privilege and instead try to write user data to a read-only system location (such as the HKEY_LOCAL_MACHINE hive) are silently redirected to a more appropriate location, without changing the application itself. Similarly, application virtualization redirects all of an application's invalid registry operations to a location such as a file. Used together with file virtualization, this allows applications to run on a machine without being installed on it. Low integrity processes may also use registry virtualization. For example, Internet Explorer 7 or 8 running in "Protected Mode" on Windows Vista and above will automatically redirect registry writes by ActiveX controls to a sandboxed location in order to frustrate some classes of security exploits. The Application Compatibility Toolkit provides shims that can transparently redirect HKEY_LOCAL_MACHINE or HKEY_CLASSES_ROOT Registry operations to HKEY_CURRENT_USER to address "LUA" bugs that cause applications not to work for users with insufficient rights. Disadvantages Critics labeled the registry in Windows 95 a single point of failure, because re-installation of the operating system was required if the registry became corrupt. However, Windows NT uses transaction logs to protect against corruption during updates. Current versions of Windows use two levels of log files to ensure integrity even in the case of power failure or similar catastrophic events during database updates. Even in the case of a non-recoverable error, Windows can repair or re-initialize damaged registry entries during system boot. Equivalents and alternatives In Windows, use of the registry for storing program data is a matter of developer's discretion. Microsoft provides programming interfaces for storing data in XML files (via MSXML) or database files (via SQL Server Compact) which developers can use instead. Developers are also free to use non-Microsoft alternatives or develop their own proprietary data stores. In contrast to the Windows Registry's binary-based database model, some other operating systems use separate plain-text files for daemon and application configuration, but group these configurations together for ease of management. In Unix-like operating systems (including Linux) that follow the Filesystem Hierarchy Standard, system-wide configuration files (information similar to what would appear in HKEY_LOCAL_MACHINE on Windows) are traditionally stored in files in /etc/ and its subdirectories, or sometimes in /usr/local/etc. Per-user information (information that would be roughly equivalent to that in HKEY_CURRENT_USER) is stored in hidden directories and files (that start with a period/full stop) within the user's home directory. 
However, XDG-compliant applications should refer to the environment variables defined in the Base Directory specification. In macOS, system-wide configuration files are typically stored in the /Library/ folder, whereas per-user configuration files are stored in the corresponding ~/Library/ folder in the user's home directory, and configuration files set by the system are in /System/Library/. Within these respective directories, an application typically stores a property list file in the Preferences/ sub-directory. RISC OS (not to be confused with MIPS RISC/os) uses directories for configuration data, which allows applications to be copied into application directories, as opposed to the separate installation process that typifies Windows applications; this approach is also used on the ROX Desktop for Linux. This directory-based configuration also makes it possible to use different versions of the same application, since the configuration is done "on the fly". If one wishes to remove the application, it is possible to simply delete the folder belonging to the application. This will often not remove configuration settings which are stored independently from the application, usually within the computer's !Boot structure, in !Boot.Choices or potentially anywhere on a network fileserver. It is possible to copy installed programs between computers running RISC OS by copying the application directories belonging to the programs; however, some programs may require re-installing, e.g. when shared files are placed outside an application directory. IBM AIX (a Unix variant) uses a registry component called Object Data Manager (ODM). The ODM is used to store information about system and device configuration. An extensive set of tools and utilities provides users with means of extending, checking, and correcting the ODM database. The ODM stores its information in several files; the default location is /etc/objrepos. The GNOME desktop environment uses a registry-like interface called dconf for storing configuration settings for the desktop and applications. The Elektra Initiative provides alternative back-ends for various different text configuration files. While not an operating system, the Wine compatibility layer, which allows Windows software to run on a Unix-like system, also employs a Windows-like registry as text files in the WINEPREFIX folder: system.reg (HKEY_LOCAL_MACHINE), user.reg (HKEY_CURRENT_USER) and userdef.reg. See also Registry cleaner Application virtualization LogParser – SQL-like querying of various types of log files List of Shell Icon Overlay Identifiers Ransomware attack that uses Registry Notes Footnotes References External links Windows Registry info & reference in the MSDN Library Registry Configuration files
Operating System (OS)
617
Input/Output Supervisor The Input/Output Supervisor (IOS) is that portion of the control program in the IBM mainframe OS/360 and successors operating systems which issues the privileged I/O instructions and supervises the resulting I/O interruptions for any program which requests I/O device operations until the normal or abnormal conclusion of those operations. Purposes IOS has two purposes: To handle I/O requests, which are requests for the execution of channel programs To handle I/O interruptions, which result from the execution of channel programs and from operator intervention Program sections To facilitate the handling of the I/O requests and interruptions, IOS is divided into two primary program sections (CSECTs): Execute channel program supervisor (EXCP in PCP, MFT/MFT-II and MVT; EXCP/EXCPVR, in SVS; STARTIO in MVS/370 and later instances of the OS) Input/output interruption supervisor These primary sections are resident in main storage and provide control program support for the normal execution of channel programs. The secondary program sections (also CSECTs), termed Error Recovery Procedures (ERPs), are, with but one exception, located on external storage, and are brought into main storage for recovery from the abnormal execution of channel programs. In the early instances of the OS, these sections were brought into the Input/Output Supervisor's "transient area", not unlike the OS/360 Control Program's Supervisor Call "transient areas". In post-MVT instances of the OS, these sections are located in the pageable link pack area (PLPA) and are demand-paged. The sole exception is, of course, the ERP for direct access storage devices, which must always remain resident in order to recover from possible I/O errors on the IPL volume and on other volumes which contain datasets which may be concatenated with certain system datasets. Multiprogramming IOS is designed around a multi-programming concept whereby operations on different I/O channels, control units and devices may be managed concurrently and apparently simultaneously. This concurrency and apparent simultaneity is present even in the most basic version of the OS, PCP, which otherwise supports only one user task. The underlying hardware architecture has but one set of I/O instructions and but one I/O interruption, for accessing the devices and for accessing the resulting device status, respectively, available to support all attached I/O devices. Hence, all I/O device operations must be synchronously multiplexed into the half-dozen privileged I/O instructions and asynchronously de-multiplexed out from the single I/O interruption by IOS, yet this entire process, from start to finish, is made to appear synchronous to the application. Essentially, IOS is a hypervising operating system built on top of the OS itself, and entirely within it, not as a separable function. A very specialized hypervisor, to be sure, as the hypervisation is restricted to the several I/O instructions and the one I/O interruption. Multiprocessing In MVS/370 and later instances of the OS, IOS is also designed around a multi-processing concept whereby all available processors, as many as two in MVS/370 and as many as sixteen in later instances of the OS (twelve were supported by IBM; sixteen were supported by Amdahl), are effectively and efficiently utilized. 
And, to best utilize this multi-processing capability, IOS's multi-programming implementation was partitioned into smaller executable units, in particular those which may be executed under the control of an SRB. Initiation/Completion IOS is not invoked directly by the programmer. Rather, IOS is invoked through "branch entries" to start I/O requests and through "interrupt handlers" to complete I/O requests. Notes References IBM mainframe operating systems Computer file formats
Operating System (OS)
618
Fatal system error A fatal system error (also known as a system crash, stop error, kernel error, or bug check) occurs when an operating system halts because it has reached a condition where it can no longer operate safely (i.e. where critical data could be lost or the system damaged in other ways). In Microsoft Windows, a fatal system error can be deliberately caused from a kernel-mode driver with either the KeBugCheck or KeBugCheckEx function. However, this should only be done as a last resort, when a critical driver is corrupted and recovery is impossible. This design parallels that in OpenVMS. The Unix kernel panic concept is very similar. In Windows When a bug check is issued, a crash dump file will be created if the system is configured to create them. This file contains a "snapshot" of useful low-level information about the system that can be used to debug the root cause of the problem and possibly other issues in the background. If the user has enabled it, the system will also write an entry to the system event log. The log entry contains information about the bug check (including the bug check code and its parameters) as well as a link that will report the bug and provide the user with prescriptive suggestions if the cause of the check is definitive and well-known. Next, if a kernel debugger is connected and active when the bug check occurs, the system will break into the debugger, where the cause of the crash can be investigated. If no debugger is attached, then a blue text screen is displayed that contains information about why the error occurred, which is commonly known as a blue screen or bug check screen. The user will only see the blue screen if the system is not configured to Automatically Restart (which became the default setting in Windows XP SP2). Otherwise, it appears as though the system simply rebooted (though a blue screen may be visible briefly). In Windows, bug checks are only supported by the Windows NT kernel. The corresponding system routine in Windows 9x does not halt the system the way bug checks do; instead, it displays the infamous BSoD (Blue Screen of Death) and allows the user to attempt to continue. The Windows DDK and the WinDbg documentation both have reference information about most bug checks. The WinDbg package is available as a free download and can be installed by most users. The Windows DDK is larger and more complicated to install. See also Screen of death References External links Debugging Tools for Windows Bug Check Code Reference at Microsoft Docs Computer errors
Operating System (OS)
619
Rhapsody (operating system) Rhapsody was the code name given to Apple Computer's next-generation operating system during the period of its development between Apple's purchase of NeXT in late 1996 and the announcement of Mac OS X (now called "macOS") in 1998. At first more than an operating system, Rhapsody represented a new strategy for Apple, which intended the operating system to run on x86-based PCs and DEC Alpha workstations as well as on PowerPC-based Macintosh hardware. In addition, the underlying API frameworks would be ported to run natively on Windows NT. Eventually, the non-Apple platforms were dropped, and later versions consisted primarily of the OPENSTEP operating system ported to the Power Macintosh, along with a new GUI to make it appear more Mac-like. Several existing "classic" Mac OS technologies were also ported to Rhapsody, including QuickTime and AppleSearch. Rhapsody could also run Mac OS 8 in a "Blue Box" emulation layer. History Rhapsody was announced at the MacWorld Expo in San Francisco on January 7, 1997 and first demonstrated at the 1997 Worldwide Developers Conference (WWDC). There were two subsequent general Developer Releases for computers with x86 or PowerPC processors. After this, there was to be a "Premier" version somewhat analogous to the Mac OS X Public Beta, followed by the full "Unified" version in the second quarter of 1998. Apple's development schedule in integrating the features of two very different systems made it difficult to forecast the features of upcoming releases. At the 1998 MacWorld Expo in New York, Steve Jobs announced that Rhapsody would be released as Mac OS X Server 1.0 (which shipped in 1999). No home version of Rhapsody would be released. Its code base was forked into Darwin, the open source underpinnings of Mac OS X. In a meeting with Michael Dell, founder of PC maker Dell, Steve Jobs demonstrated an x86 version of Rhapsody that could run on Intel-compatible computers, and offered to license the operating system to Dell for distribution on its PCs. The deal fell through, however, when Jobs insisted that all of Dell's computers ship with both Mac OS and Windows so that consumers could choose the platform they preferred (which would have resulted in Dell having to pay royalties to Apple for every computer it sold), as opposed to Dell's preference that the choice of OS be a factory option. Design Defining features of the Rhapsody operating system included a heavily modified "hybrid" OSFMK 7.3 (Open Software Foundation Mach Kernel) from the OSF, a BSD operating system layer (based on 4.4BSD), the object-oriented Yellow Box API framework, the Blue Box compatibility environment for running "classic" Mac OS applications, and a Java Virtual Machine. The user interface was modeled after Mac OS 8's "Platinum" appearance. The file management functions served by the Finder in previous Mac OS versions were instead handled by a port of OPENSTEP's Workspace Manager. Additional features inherited from OPENSTEP and not found in the classic Mac OS Finder were included, such as the Shelf and column view. Although the Shelf was dropped in favor of Dock functionality, column view would later make its way to macOS's Finder. Rhapsody's Blue Box environment, available only when running on the PowerPC architecture, was responsible for providing runtime compatibility with existing Mac OS applications. 
Compared to the more streamlined and integrated Classic compatibility layer that was later featured in Mac OS X, Blue Box's interface presented users with a distinct barrier between emulated legacy software and native Rhapsody applications. All emulated applications and their associated windows were encapsulated within a single Blue Box emulation window instead of being interspersed with the other applications using the native Yellow Box API. This limited cross-environment interoperability and caused various user interface inconsistencies. To avoid the pitfalls of running within the emulation environment and take full advantage of Rhapsody's features, software needed to be rewritten to use the new Yellow Box API. Inherited from OPENSTEP, Yellow Box used an object-oriented model completely unlike the procedural model used by the Classic APIs. The large difference between the two frameworks meant transition of legacy code required significant changes and effort on the part of the developer. The consequent lack of adoption as well as objections by prominent figures in the Macintosh software market, including Adobe Systems and Microsoft, became major factors in Apple's decision to cancel the Rhapsody project in 1998. However, most of Yellow Box and other Rhapsody technologies went on to be used in macOS's Cocoa API. Bowing to developers' wishes, Apple also ported existing Classic Mac OS technologies into the new operating system and implemented the Carbon API to provide Classic Mac OS API compatibility. Widely used Mac OS libraries like QuickTime and AppleScript were ported and made available to developers. Carbon allowed developers to maintain full compatibility and native functionality using their current codebases, while enabling them to take advantage of new features at their discretion. Name The name Rhapsody followed a pattern of music-related code names that Apple designated for operating system releases during the 1990s. Another next-generation operating system, which was to be the successor to the never-completed Copland operating system, was code-named Gershwin after George Gershwin, composer of Rhapsody in Blue. Copland itself was named after another American composer, Aaron Copland. Other musical code names include Harmony (Mac OS 7.6), Tempo (Mac OS 8), Allegro (Mac OS 8.5), and Sonata (Mac OS 9). Release history See also NeXTSTEP Mac OS X Server 1.0 References External links Shaw's Rhapsody Resource Page Toastytech GUI Gallery — Screenshots of Rhapsody Developer Release 2 GUIdebook > Screenshots > Rhapsody DR2 — Screenshots of Rhapsody (Intel version) and its components. "Apple shows off Rhapsody OS" — An article written shortly after Apple first demonstrated Rhapsody. "Overall summary on Apple Rhapsody: A User Overview" — An overview of Rhapsody's technologies. "Rhapsody" at OSData.com — Technical specifications on the operating system. First Impressions On Apple Rhapsody Blue Box, Beta Version 1 TidBITS: Yellow Box, Blue Box, Rhapsody & WWDC Cocoa and the Death of Yellow Box and Rhapsody, By Daniel Eran Dilger, 2007-02-19, RoughlyDrafted Apple Inc. operating systems Berkeley Software Distribution MacOS Mach (kernel) Discontinued operating systems
Operating System (OS)
620
Winlogon In computing, Winlogon (Windows Logon) is the component of Microsoft Windows operating systems that is responsible for handling the secure attention sequence, loading the user profile on logon, and optionally locking the computer when a screensaver is running (requiring another authentication step). The actual acquisition and verification of user credentials are left to other components. Winlogon is a common target for several threats that could modify its function and memory usage. Increased memory usage for this process might indicate that it has been "hijacked". In Windows Vista and later operating systems, Winlogon's roles and responsibilities have changed significantly. Overview Winlogon handles interface functions that are independent of authentication policy. It creates the desktops for the window station, implements time-out operations, and in versions of Windows prior to Windows Vista, provides a set of support functions for the GINA and takes responsibility for configuring machine and user Group Policy. Starting with Windows XP, Winlogon also checks whether the copy of Windows is legitimately licensed. Winlogon has the following responsibilities: Window station and desktop protection Winlogon sets the protection of the window station and corresponding desktops to ensure that each is properly accessible. In general, this means that the local system will have full access to these objects and that an interactively logged-on user will have read access to the window station object and full access to the application desktop object. Standard SAS recognition Winlogon has special hooks into the User32 server that allow it to monitor Control-Alt-Delete secure attention sequence (SAS) events. Winlogon makes this SAS event information available to GINAs to use as their SAS, or as part of their SAS. In general, GINAs should monitor SASs on their own; however, any GINA that has the standard Ctrl+Alt+Del SAS as one of the SASs it recognizes should use the Winlogon support provided for this purpose. SAS routine dispatching When Winlogon encounters a SAS event or when a SAS is delivered to Winlogon by the GINA, Winlogon sets the state accordingly, changes to the Winlogon desktop, and calls one of the SAS processing functions of the GINA. User profile loading When users log on, their user profiles are loaded into the registry. In this way, the processes of the user can use the special registry key HKEY_CURRENT_USER. Winlogon does this automatically after a successful logon but before activation of the shell for the newly logged-on user. Assignment of security to user shell When a user logs on, the GINA is responsible for creating one or more initial processes for that user. Winlogon provides a support function for the GINA to apply the security of the newly logged-on user to these processes. However, the preferred way to do this is for the GINA to call the Windows function CreateProcessAsUser, and let the system provide the service. Screen saver control Winlogon monitors keyboard and mouse activity to determine when to activate screen savers. After the screen saver is activated, Winlogon continues to monitor keyboard and mouse activity to determine when to terminate the screen saver. If the screen saver is marked as secure, Winlogon treats the workstation as locked. When there is mouse or keyboard activity, Winlogon invokes the WlxDisplayLockedNotice function of the GINA and locked workstation behavior resumes.
If the screen saver is not secure, any keyboard or mouse activity terminates the screen saver without notification to the GINA. Multiple network provider support Multiple networks installed on a Windows system can be included in the authentication process and in password-updating operations. This inclusion lets additional networks gather identification and authentication information all at once during normal logon, using the secure desktop of Winlogon. Some of the parameters required in the Winlogon services available to GINAs explicitly support these additional network providers. In the Windows XP source code leak of September 2020, Winlogon was the only piece missing from the source code, rendering the leaked operating system incomplete. See also List of Microsoft Windows components Architecture of the Windows NT operating system line Vundo, a Trojan that attaches itself to winlogon.exe getty, a similar process in UNIX References External links Customizing GINA - Part 1, Developer tutorial for writing a custom GINA Customizing GINA - Part 2, Developer tutorial for writing a custom GINA MSKB:193361 MSGINA.DLL does not Reset WINLOGON Structure Windows Vista and Windows Server 2008: Understanding, Enhancing and Extending Security End-to-end — Microsoft PowerPoint presentation that includes information on changes to Winlogon in Windows Vista and Windows Server 2008 Windows components Computer security software
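As a hedged sketch of the CreateProcessAsUser approach described under "Assignment of security to user shell": the StartUserShell name is hypothetical, the user token is assumed to come from a prior authentication step, and a real implementation would also build the user's environment block:

```c
#include <windows.h>

/* Launch the shell so that it runs with the logged-on user's security
   context rather than Winlogon's own (which runs as the local system). */
BOOL StartUserShell(HANDLE hUserToken)
{
    STARTUPINFOW si = { sizeof(si) };  /* the cb member must be set */
    PROCESS_INFORMATION pi = { 0 };
    WCHAR cmd[] = L"explorer.exe";     /* mutable buffer, as the API requires */

    BOOL ok = CreateProcessAsUserW(hUserToken,
                                   NULL, cmd,  /* application name, command line */
                                   NULL, NULL, /* default security attributes */
                                   FALSE, 0,   /* no handle inheritance, no flags */
                                   NULL, NULL, /* environment, directory (simplified) */
                                   &si, &pi);
    if (ok) {
        CloseHandle(pi.hThread);   /* the caller does not manage the shell */
        CloseHandle(pi.hProcess);
    }
    return ok;
}
```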
Operating System (OS)
621
System console One meaning of system console, computer console, root console, operator's console, or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, the init system, and the system logger. It is a physical device consisting of a keyboard and a screen, and traditionally is a text terminal, but may also be a graphical terminal. System consoles are generalized to computer terminals, which are abstracted respectively by virtual consoles and terminal emulators. Today, communication with system consoles is generally done abstractly, via the standard streams (stdin, stdout, and stderr), but there may be system-specific interfaces, for example those used by the system kernel. Another, older, meaning of system console, computer console, hardware console, operator's console or simply console is a hardware component used by an operator to control the hardware, typically some combination of front panel, keyboard/printer and keyboard/display. History Prior to the development of alphanumeric CRT system consoles, some computers such as the IBM 1620 had console typewriters and front panels, while the first stored-program computer, the Manchester Baby, used a combination of electromechanical switches and a CRT to provide console functions—the CRT displaying memory contents in binary by mirroring the machine's Williams-Kilburn tube CRT-based RAM. Some early operating systems supported either a single keyboard/printer or keyboard/display device for controlling the OS. Some also supported a single alternate console, and some supported a hardcopy console for retaining a record of commands, responses and other console messages. However, in the late 1960s it became common for operating systems to support more than three consoles, and operating systems began appearing in which the console was simply any terminal with a privileged user logged on. On early minicomputers, the console was a serial console, an RS-232 serial link to a terminal such as an ASR-33 or, later, a DECwriter or DEC VT100. This terminal was usually kept in a secured room since it could be used for certain privileged functions such as halting the system or selecting which media to boot from. Large midrange systems, e.g. those from Sun Microsystems, Hewlett-Packard and IBM, still use serial consoles. In larger installations, the console ports are attached to multiplexers or network-connected multiport serial servers that let an operator connect a terminal to any of the attached servers. Today, serial consoles are often used for accessing headless systems, usually with a terminal emulator running on a laptop. Also, routers, enterprise network switches and other telecommunication equipment have RS-232 serial console ports. On PCs and workstations, the computer's attached keyboard and monitor have the equivalent function. Since the monitor cable carries video signals, it cannot be extended very far. Often, installations with many servers therefore use keyboard/video multiplexers (KVM switches) and possibly video amplifiers to centralize console access. In recent years, KVM/IP devices have become available that allow a remote computer to view the video output and send keyboard input via any TCP/IP network and therefore the Internet. Some PC BIOSes, especially in servers, also support serial consoles, giving access to the BIOS through a serial port so that the simpler and cheaper serial console infrastructure can be used.
Even where BIOS support is lacking, some operating systems, e.g. FreeBSD and Linux, can be configured for serial console operation either during bootup or after startup. Starting with the IBM 9672, IBM large systems have used a Hardware Management Console (HMC), consisting of a PC and a specialized application, instead of a 3270 or serial link. Other IBM product lines also use an HMC, e.g., System p. It is usually possible to log in from the console. Depending on configuration, the operating system may treat a login session from the console as being more trustworthy than a login session from other sources. See also Command-line interface (CLI) Console application Console server Linux console Virtual console Win32 console References External links BIOS Computer systems Computer terminals Out-of-band management Console User interfaces System console
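As an illustration of the serial console configuration mentioned above — a hedged sketch in which the device name, speed, and file paths are examples that vary by system:

```
# Linux: direct the kernel console to the first serial port by adding a
# parameter such as this to the kernel command line (set in the boot loader):
console=ttyS0,115200n8

# FreeBSD: select the serial console in /boot/loader.conf:
console="comconsole"
```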
Operating System (OS)
622
DOSBox DOSBox is a free and open-source emulator which runs software for MS-DOS compatible disk operating systems—primarily video games. It was first released in 2002. Development Before Windows XP, consumer-oriented versions of Windows were based on MS-DOS. Windows 3.0 and its updates were operating environments that ran on top of MS-DOS, and the Windows 9x series consisted of operating systems that were still based on MS-DOS. These versions of Windows could run DOS applications. Conversely, the Windows NT operating systems were not based on DOS. A member of the series is Windows XP, which debuted on October 25, 2001, to become the first consumer-oriented version of Windows to not use DOS. Although Windows XP could emulate DOS, it could not run many of its applications, as those applications ran only in real mode to directly access the computer's hardware, and Windows XP's protected mode prevented such direct access for security reasons. MS-DOS continued to receive support until the end of 2001, and all support for any DOS-based Windows operating system ended on July 11, 2006. The development of DOSBox began around the launch of Windows 2000—a Windows NT system—when its creators, Dutch programmers Peter Veenstra and Sjoerd van der Berg, discovered that the operating system had dropped much of its support for DOS software. The two knew of solutions at the time, but these could not run applications in windowed mode or scale the graphics. The project was first uploaded to SourceForge and released for beta testing on July 22, 2002. Features DOSBox is a command-line program, configured either by a set of command-line arguments or by editing a plain text configuration file. For ease of use, several graphical front ends have been developed by the user community. The DOSBox project aims to be fully compatible with all DOS programs, and tries to replicate the experience as accurately as possible. In the vanilla version, long filenames are not supported, because most versions of DOS do not support them; filenames must follow the 8.3 naming convention, with a maximum of eight characters before the full stop followed by up to three characters for the file extension. Longer names are aliased to follow the convention. There are versions available on the DOSBox website that support long filenames, at the cost of possible incompatibility with some older programs. The focus of the vanilla version is on gaming, and features such as support for Ctrl-Break may be missing. Some of the alternative versions support features not present in the vanilla version such as APM power off, direct parallel port passthrough for printing, and support for East Asian characters. Because DOSBox accesses the host computer's file system, there is a risk of DOS malware exploiting the emulator's security vulnerabilities and causing damage to the host machine, although these vulnerabilities continue to be patched with new DOSBox updates. Users can also capture screenshots and record videos of DOS sessions, although a codec is required to play the videos. It is also possible to record OPL sound card and MIDI commands, as well as save sound output to a WAV file. Keyboard keys and the buttons of a game controller can be mapped to other keys and combinations thereof. OS emulation DOSBox is a full-system emulator that provides BIOS interrupts and contains its own internal DOS-like shell. This means that it can be used without owning a license to any real DOS operating system.
Most commands that are found in COMMAND.COM are supported, but many of the more advanced commands found in the latest MS-DOS versions are not. In addition to its internal shell, it also supports running image files of games and software originally intended to start without any operating system. Besides emulating DOS, users can also run Windows 3.0 and applications designed for it, as well as versions of Windows within the Windows 9x family. When the DOSBox application is opened, it automatically mounts a virtual, permanent Z: drive that stores DOSBox commands and utilities. The virtual drive exists partly for security reasons, but the user can mount a different drive letter in the emulator to a directory, image file, floppy disk drive, or CD-ROM drive on the host to access its data. A configuration file and its AUTOEXEC section can be used to configure DOSBox settings and to execute DOS commands at startup, respectively. Hardware emulation DOSBox is capable of running DOS programs that require the CPU to be in real mode or protected mode. Since DOSBox can emulate its CPU by interpretation, the environment it emulates is completely independent of the host CPU. On systems with x86, ARM, or certain RISC instruction sets, however, DOSBox can use dynamic instruction translation to accelerate execution. The emulated CPU speed of DOSBox is also manually adjustable by the user to accommodate the speed of the systems for which DOS programs were originally written. DOSBox depends on the Simple DirectMedia Layer (SDL) external library, which is required both to build DOSBox from source and to handle graphics, audio, and input devices. Graphically, it can use the DirectDraw or OpenGL APIs, and can also use bilinear interpolation and scale graphics for computers with modern displays. Graphical emulation includes text mode, Hercules, CGA, EGA, VGA, VESA, S3 Trio 64, and Tandy. Sound emulation includes the PC speaker, AdLib, Gravis Ultrasound, Sound Blaster, Disney Sound Source, Tandy, and MPU-401. However, because DOSBox does not come packaged with Gravis Ultrasound drivers, they need to be installed separately for full support. DOSBox can simulate serial null modems using the TCP/IP protocol and IPX network tunneling, which allows DOS multiplayer games that use either of them to be played over local area networks or the Internet. It can also simulate the PC joystick port, with limited options being to emulate one joystick with four axes and four buttons; one gamepad with two axes and six buttons; two joysticks each with two axes and two buttons; a Thrustmaster Flight Control System joystick that has three axes, four buttons, and a hat switch; and a CH Flightstick with four axes, six buttons that can be pressed only one at a time, and a hat switch. Newer joysticks and gamepads will need to use one of these configurations to function. Reception DOSBox has become the de facto standard for running DOS games. Rock, Paper, Shotgun positively remarked on the project's continual reception of updates, its influence on PC gaming, and some front ends designed to facilitate using it. Freelance writer Michael Reed lauded the quality of scaled graphics and the project's overall focus on compatibility and accurate emulation, but criticized the lack of both save states and user-friendly control over the emulator during runtime, even with the front ends available at the time of his review.
DOSBox was named SourceForge's Project of the Month in May 2009 and again in January 2013, making it the first project in the website's history to receive two Project of the Month awards. On the SourceForge website, it reached 10 million downloads on July 21, 2008, and was downloaded more than 25 million times as of October 2015. Usage Since January 2011, the developers of the Wine compatibility layer have integrated DOSBox into Wine to facilitate running DOS programs that are not supported natively by the Wine Virtual DOS machine. Since January 2015, the Internet Archive has added thousands of DOS games to its software library. Its DOSBox fork, Em-DOSBox, uses Emscripten to convert the emulator's C++ code to JavaScript, making the games playable on a web browser. The collection is provided for "scholarship and research purposes only". The DOS library contained 6,934 games. Commercial DOSBox has also been both the most used DOS emulator and, because of the straightforward process of making the games work on modern computers, the most popular emulation software for developers re-releasing legacy versions of their games. id Software has used DOSBox to re-release vintage games such as Wolfenstein 3D and Commander Keen on Valve's Steam. In the process, it was reported that id Software had violated the program's license, the GNU GPL; the breach, which was reported as an oversight, was promptly resolved. Activision Blizzard has also used it to re-release Sierra Entertainment's DOS games. LucasArts used it to re-release Star Wars: Dark Forces and Star Wars: TIE Fighter for modern machines on Steam and GOG.com. 2K Games producer Jason Bergman stated the company used DOSBox for Steam re-releases of certain installments of the XCOM series. Bethesda Softworks has recommended DOSBox and provided a link to the DOSBox website on the downloads page for The Elder Scrolls: Arena and The Elder Scrolls II: Daggerfall. It also included DOSBox with both games in The Elder Scrolls Anthology release. Electronic Arts' Origin client uses DOSBox for the platform's DOS games, including Electronic Arts titles such as Syndicate and SimCity 2000. Notes References External links 2002 software BeOS software Cross-platform free software DOS emulators Free software programmed in C++ Free software that uses SDL MacOS emulation software Linux emulation software Portable software RISC OS emulation software Solaris software Windows emulation software X86 emulators
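To make the configuration file and AUTOEXEC mechanism described earlier concrete, a hypothetical excerpt from a dosbox.conf; the host paths are examples only:

```ini
[cpu]
# "auto" lets DOSBox choose the core and adjust the emulated CPU speed.
core=auto
cycles=auto

[autoexec]
# These commands run at startup: mount a host directory as drive C:,
# attach a CD-ROM image, then switch to the new drive.
mount c /home/user/dosgames
imgmount d /home/user/dosgames/game.iso -t iso
c:
```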
Operating System (OS)
623
Operational database Operational database management systems (also referred to as OLTP databases, for On-Line Transaction Processing) are used to update data in real time. These types of databases allow users to do more than simply view archived data: users can also modify that data (adding, changing or deleting it) in real time. OLTP databases provide transactions as the main abstraction for guaranteeing data consistency, ensuring the so-called ACID properties. Essentially, the consistency of the data is guaranteed in the case of failures and/or concurrent access to the data. Since the early 1990s, the operational database software market has been largely taken over by SQL engines. Today, the operational DBMS market (formerly OLTP) is evolving dramatically, with new, innovative entrants and incumbents supporting the growing use of unstructured data and NoSQL DBMS engines, as well as XML databases and NewSQL databases. NoSQL databases have typically focused on scalability and have given up data consistency by not providing transactions as OLTP systems do. Operational databases increasingly support distributed database architectures that leverage distribution to provide high availability and fault tolerance through replication and scale-out ability. The role of operational databases in the IT industry is growing fast, moving from legacy databases to real-time operational databases capable of handling distributed web and mobile demand and of addressing Big Data challenges. Recognizing this, Gartner started to publish the Magic Quadrant for Operational Database Management Systems in October 2013. List of operational databases Notable operational databases include: Use in business Operational databases are used to store, manage and track real-time business information. For example, a company might have an operational database used to track warehouse/stock quantities. As customers order products from an online web store, an operational database can be used to keep track of how many items have been sold and when the company will need to reorder stock. An operational database stores information about the activities of an organization, for example customer relationship management transactions or financial operations, in a computer database. Operational databases allow a business to enter, gather, and retrieve large quantities of specific information, such as company legal data, financial data, call data records, personal employee information, sales data, customer data, data on assets and much other information. An important feature of storing information in an operational database is the ability to share information across the company and over the Internet. Operational databases can be used to manage mission-critical business data, to monitor activities, to audit suspicious transactions, or to review the history of dealings with a particular customer. They can also be part of the actual process of making and fulfilling a purchase, for example in e-commerce. Data warehouse terminology In data warehousing, the term is even more specific: the operational database is the one which is accessed by an operational system (for example a customer-facing website or the application used by the customer service department) to carry out the regular operations of an organization. Operational databases usually use an online transaction processing database which is optimized for faster transaction processing (create, read, update and delete operations).
An operational database is the source for a data warehouse. See also HTAP databases Document-oriented databases NewSQL databases NoSQL databases XML databases SQL databases Distributed databases References O'Brien, James A., and Marakas, George M. (2008). Management Information Systems, "Computer Software" (p. 185). New York: McGraw-Hill Data warehousing Data management Information technology management Business intelligence Types of databases
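To make the transaction abstraction concrete, a minimal sketch in C using the embedded SQLite engine (chosen here only so the example is self-contained; the shop.db file and table names are illustrative). The stock decrement and the order insert commit together or not at all, which is the consistency guarantee discussed above:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sqlite3.h>

/* Run one SQL string, aborting the program on error. */
static void run(sqlite3 *db, const char *sql)
{
    char *err = NULL;
    if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "SQL error: %s\n", err);
        sqlite3_free(err);
        exit(1);
    }
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("shop.db", &db) != SQLITE_OK) return 1;

    run(db, "CREATE TABLE IF NOT EXISTS stock(item TEXT PRIMARY KEY, qty INTEGER);"
            "CREATE TABLE IF NOT EXISTS orders(id INTEGER PRIMARY KEY, item TEXT);"
            "INSERT OR IGNORE INTO stock VALUES('widget', 10);");

    /* The two updates form one transaction: both commit or neither does. */
    run(db, "BEGIN;");
    run(db, "UPDATE stock SET qty = qty - 1 WHERE item = 'widget' AND qty > 0;");
    run(db, "INSERT INTO orders(item) VALUES('widget');");
    run(db, "COMMIT;");

    sqlite3_close(db);
    return 0;
}
```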
Operating System (OS)
624
Macintosh Performa The Macintosh Performa is a family of personal computers designed, manufactured and sold by Apple Computer, Inc. from 1992 to 1997. The Performa brand re-used models from Apple's Quadra, Centris, LC, and Power Macintosh families with model numbers that denoted included software packages or hard drive sizes. Whereas non-Performa Macintosh computers were sold by Apple Authorized Resellers, the Performa was sold through big-box stores and mass-market retailers such as Good Guys, Circuit City, and Sears. The initial series of models consisted of the Macintosh Classic II-based Performa 200, the LC II-based Performa 400, and the IIvi-based Performa 600. After releasing a total of sixty-four different models, Apple retired the Performa brand in early 1997, shortly after the release of the Power Macintosh 5500, 6500, 8600 and 9600, as well as the return of Steve Jobs to the company. The Performa brand's lifespan coincided with a period of significant financial turmoil at Apple, due in part to low sales of Performa machines. Overview With a strong education market share throughout the 1980s, Apple wanted to push its computers into the home, with the idea that a child would experience the same Macintosh computer both in the home and at school, and later grow to use Macintosh computers at work. In the early 1990s, Apple sold computers through a network of authorized resellers, and through mail order catalogs such as those found in the latter third of Macworld magazine. A typical reseller sold Macintosh computers to professionals, who purchased high-level applications and required performance and expansion capabilities. Consumers, however, purchased computers based on the best value, and were not as concerned about expansion or performance. To reach these customers, Apple wanted to sell its computers through department store chains (such as Sears), but this would conflict with existing authorized reseller agreements, in which a geographic area had only one reseller. To prevent these conflicts, Apple split the Macintosh line into professional and consumer models. The professional line included the Classic, LC, Centris, Quadra, and Power Macintosh lines, and continued to be sold as-is (i.e., no consumer software bundles or limited features). The consumer line was given the name "Performa", and included computers similar to the professional line. Early Performa models were not sold with the "Macintosh" brand in order to get around the authorized reseller agreements. The Performa line was marketed differently from the professional line. To satisfy consumer-level budgets, the computers were sold bundled with home and small business applications. Most models were also bundled with a keyboard, mouse, an external modem, and a shadow-mask CRT monitor with either a .29 or .39 dot pitch. Professional models, in contrast, were sold à la carte with keyboard and mouse bundles chosen by the dealer or sold separately; monitors sold with high-end Macintosh models typically used Trinitron tubes based on aperture grille technology. While the Performa models resembled their professional counterparts at the system software and hardware level, certain features were tweaked or removed. The Performa 600, for instance, lacked the level-2 cache of the Macintosh IIvx it was based on. Unlike the professional Macintosh lines, each individual Performa bundle was given a unique model number, in some cases varying only by the software bundle or the specific retailer that sold that model.
This was intended to accommodate retailers, who could advertise that they could beat their competitors' price on equivalent models while at the same time ensuring that they did not actually carry the same models as their competitors. To help consumers choose between the options available to them, Apple created multiple paid advertisements including "The Martinettis Bring Home a Computer", a thirty-minute "storymercial", aired in December 1994, about a fictional family that purchases a Performa computer. Apple's strategy for selling Performa machines in department and electronics retail stores did not include the sort of specialized training Apple offered to its dealers. This resulted in situations where Performa display models were often poorly taken care of: the demo computers crashed, the self-running demo software was left not running, or the display models were not even powered on. Apple tried to address the training issue by hiring its own salespeople to aid the store sales staff, most of them recruited from Macintosh user groups. Despite this, however, many returned Performa computers could not be serviced properly because the stores were not authorized Apple service centers. The problem was compounded by retailers favoring Microsoft Windows, especially after the introduction of Windows 95. Computers running Windows were generally cheaper, and their sale was encouraged by manufacturer spiffs, advertising co-ops, and other promotion programs. In addition, many stores preferred to sell their own branded white box PCs, something Apple would not allow. As a consequence of these issues, Apple overestimated demand for Performa machines in 1995 while also underestimating demand for high-end Power Macintosh models, leading to significant oversupply. Introduction of new Performa models slowed as a result: whereas Apple had introduced 20 different Performa models around the world from May to December 1995, the number dropped to four in the first seven months of 1996. For the late-1996 holiday period, sales of Performa-branded machines had dropped year-over-year by 15 percent, reflecting a company-wide drop in fourth-quarter revenues of one-third compared with 1995. In February 1997, just days after Steve Jobs returned to the company, Apple refreshed its entire line of desktop computers, retiring a dozen Performa models based on the Power Macintosh 6200 and 6400 with no replacement, and reducing the range of Power Macintosh to six computers (plus a few Apple Workgroup Server variants). The official end of the Performa brand was announced on March 15 as part of sweeping changes at the company that included layoffs of a third of the company's workforce and the cancellation of several software products. By early 1998, Apple's lineup was reduced to four computers: one desktop, one all-in-one, and two minitowers (one of which was sold as a server product). As part of the restructuring of how Apple sold its computers in retail channels, it partnered with CompUSA to implement a "store within a store" concept: Apple and related products were displayed and sold in a physically separate location by specialized employees (a practice currently used at select Best Buy stores). Performa-specific software The Performa versions of the Macintosh System software introduced some features that were not available on non-Performa Macintoshes.
The most notable of these are At Ease (parental controls), the Launcher (an application launcher similar to the macOS Dock), and the Performa Control Panel, which included several unique configuration options. The functionality of all three components was eventually folded into the operating system itself. Versions of System 7 with the additional software had a 'P' appended to the end, such as 7.1.2P, which was included with the Performa 630 in mid-1994. Software bundles usually included ClarisWorks, Quicken, a calendar/contact manager such as Touchbase and Datebook Pro, America Online, educational software such as American Heritage Dictionary, The New Grolier Multimedia Encyclopedia, TIME Almanac (on models equipped with a CD-ROM drive), Mavis Beacon Teaches Typing or Mario Teaches Typing, and a selection of games such as Spectre Challenger, Diamonds, and Monopoly. Another software package bundled only with the Performa was MegaPhone, by Cypress Research, a screen-based telephony (SBT) application for managing telephone calls from the computer desktop. In addition to drag-and-drop dialing, caller ID display, and call logging, MegaPhone included VoiceMail, TouchTone Navigator, and Smart Speed Dial features, and facilitated fax communications via a separate software package included with the Performa (product page: https://www.megaphoneco.com/mpmac.htm). Models See also List of Macintosh models grouped by CPU type List of Macintosh models by case type Timeline of Macintosh models References External links List of Performa and equivalent models Performa
Operating System (OS)
625
Organizational Systems Security Analyst The Organizational Systems Security Analyst (OSSA) is a vendor-neutral technical information security certification programme offered in Asia. It was developed by ThinkSECURE Pte Ltd, an information-security certification body and consultancy. The programme combines a specialized technical information security training and certification course with a hands-on practical examination, through which technical Information Technology professionals can become skilled and effective technical Information Security professionals and prove their level of competence by undergoing the examination. Technical staff enrolling in the programme are taught and trained how to address the technical security issues they encounter in daily operations and how to methodically establish, operate and maintain security for their organization's computer network and computer systems infrastructure. The OSSA programme does not focus on hackers' software, since such tools quickly become obsolete as software patches are released. It first looks at security from a methodological thinking perspective, drawing lessons from Sun Tzu's "The Art of War" to generate a security framework, and then introduces example resources and tools by which the various security aims and objectives, such as defending a server against a hacker's attacks, can be met. Sun Tzu's 'Art of War' treatise is used to provide a guiding philosophy throughout the programme, addressing both offensive threats and the defensive measures needed to overcome them. The philosophy also extends to the sections on incident response methodology (i.e. how to respond to security breaches), computer forensics and the impact of law on security-related activities, such as the recovery of information from a computer crime suspect's hard drive. Under the programme, students complete coursework on, and gain experience of, how to set up and maintain a complete enterprise-class security monitoring and defence infrastructure, which includes firewalls, network intrusion detection systems, file-integrity checkers, honeypots and encryption. A distinctive attacker's methodology is also introduced to help technical staff identify the modus operandi of an attacker and his arsenal, and to conduct audits of computer systems using that methodology. The generic title sections under the programme comprise the following: What is Information Security Network 101 Defending your Turf & Security Policy Formulation Defensive Tools & Lockdown The 5E Attacker Methodology: Attacker Methods & Exploits Wireless (In)Security Incident Response & Computer Forensics The Impact Of Law Under each section are many modules; for example, the defensive section covers the setting up of firewalls, NIDS, HIDS, honeypots, cryptographic software, etc. The OSSA programme consists of both hands-on lab-based coursework and a hands-on lab-based certification examination. According to the ThinkSECURE website, the rationale for this is that only those who prove they can apply their skills and knowledge to a completely new and unknown exam setup will be certified, while those who only know how to cram for exams by memorizing facts and figures and visiting brain-dump sites will not.
Compared to non-practical multiple-choice examination formats, this method of examination benefits the information security industry and employers as a whole in several ways: it makes sure that only candidates who can prove the ability to apply their skills in a practical examination are certified; it stops brain-dumpers from attaining and devaluing the certification as a basis of competency evaluation; it protects the money and time that people and companies invest in getting certified; it helps employers identify more skilled technical staff; and it provides the industry with a pool of competent, qualified technical staff. External links Organizational Systems Security Analyst (OSSA) Definition of OSSA acronym OSSA Programme Outline ThinkSECURE Association of Information Security Professionals list of certifications Information technology qualifications Professional titles and certifications
Operating System (OS)
626
TRIX (operating system) TRIX is a network-oriented research operating system developed in the late 1970s at MIT's Laboratory for Computer Science (LCS) by Professor Steve Ward and his research group. It ran on the NuMachine and had remote procedure call functionality built into its kernel, but was otherwise a Version 7 Unix workalike. Design and implementation On startup, the NuMachine would load the same program on each CPU in the system, passing each instance the numeric ID of the CPU it was running on. TRIX relied on this design to have the first CPU set up global data structures and then set a flag to signal that initialization was complete. After that, each instance of the kernel was able to access global data. The system also supported data private to each CPU. Access to the filesystem was provided by a program in user space. The kernel supported unnamed threads running in domains. A domain was the equivalent of a Unix process without a stack pointer (each thread in a domain had a stack pointer). A thread could change domains, and the system scheduler would migrate threads between CPUs in order to keep all processors busy. Threads had access to a single kind of mutual exclusion primitive, and one of seven priorities. The scheduler was designed to avoid priority inversion. User space programs could create threads through a spawn system call. A garbage collector would periodically identify and free unused domains. The shared memory model used to coordinate work between the various CPUs caused memory bus contention and was known to be a source of inefficiency. The designers were aware of designs that would have alleviated the contention. Indeed, TRIX's original design used a nonblocking message passing mechanism, but "this implementation was found to have deficiencies often overlooked in the literature," including poor performance. Although the TRIX operating system was first implemented on the NuMachine, this was more a result of the availability of the NuMachine at MIT than any characteristic of the architecture. The system was designed to be easily portable. It was implemented largely in C with little assembly code. The mutual exclusion primitive could be ported to any architecture with an atomic test and set instruction. Attempted use by the GNU Project Richard Stallman mentions in the 1985 GNU Manifesto that "an initial kernel exists" for the GNU operating system, "but many more features are needed to emulate Unix." This was a reference to TRIX's kernel, which TRIX's authors had decided to distribute as free software. In a speech in October 1986, Stallman elaborated that "the TRIX kernel runs, and it has a certain limited amount of Unix compatibility, but it needs a lot more. Currently it has a file system that uses the same structure on disk as the ancient Unix file system does. This made it easier to debug the thing, because they could set up the files with Unix, and then they could run TRIX, but that file system doesn't have any of the features that I believe are necessary." The features Stallman wished to add (file versioning, undeletion, information on when and how and where the file was backed up on tape, atomic file updates) were not generally associated with Unix. In December 1986, developers used TRIX's kernel as a base in their first attempt to create a kernel for GNU. 
They eventually decided Trix was unusable as a starting point, primarily because: it only ran on "an obscure, expensive 68000 box", and would therefore require porting to other architectures, and it was decided that the Mach microkernel was a better underlying design for a server-based operating system. This second attempt evolved into the GNU Hurd. See also GNU Mach Comparison of kernels References Further reading Ward, S.A. TRIX: a Network-oriented Operating System. COMPCON, Spring 1980, pp. 344–349. External links TRIX kernel source code (can also be browsed online) Hurd history on the GNU Project web site 1986 software Free software operating systems GNU Project software Massachusetts Institute of Technology software Monolithic kernels Unix variants 68k architecture
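The design section above notes that TRIX's single mutual-exclusion primitive could be ported to any architecture with an atomic test-and-set instruction. A minimal sketch of such a lock in portable C11 — an illustration of the technique, not TRIX's actual code:

```c
#include <stdatomic.h>

/* A spinlock built on an atomic test-and-set flag. */
typedef struct { atomic_flag locked; } spinlock_t;

static void spin_init(spinlock_t *l)
{
    atomic_flag_clear(&l->locked);
}

static void spin_lock(spinlock_t *l)
{
    /* Test-and-set returns the previous value; loop until it was clear,
       i.e. until this thread is the one that set it. */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ; /* busy-wait */
}

static void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```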
Operating System (OS)
627
Version 7 Unix Seventh Edition Unix, also called Version 7 Unix, Version 7 or just V7, was an important early release of the Unix operating system. V7, released in 1979, was the last Bell Laboratories release to see widespread distribution before the commercialization of Unix by AT&T Corporation in the early 1980s. V7 was originally developed for Digital Equipment Corporation's PDP-11 minicomputers and was later ported to other platforms. Overview Unix versions from Bell Labs were designated by the edition of the user's manual with which they were accompanied. Released in 1979, the Seventh Edition was preceded by Sixth Edition, which was the first version licensed to commercial users. Development of the Research Unix line continued with the Eighth Edition, which incorporated development from 4.1BSD, through the Tenth Edition, after which the Bell Labs researchers concentrated on developing Plan 9. V7 was the first readily portable version of Unix. As this was the era of minicomputers, with their many architectural variations, and also the beginning of the market for 16-bit microprocessors, many ports were completed within the first few years of its release. The first Sun workstations (then based on the Motorola 68000) ran a V7 port by UniSoft; the first version of Xenix for the Intel 8086 was derived from V7, and Onyx Systems soon produced a Zilog Z8000 computer running V7. The VAX port of V7, called UNIX/32V, was the direct ancestor of the popular 4BSD family of Unix systems. The group at the University of Wollongong that had ported V6 to the Interdata 7/32 ported V7 to that machine as well. Interdata sold the port as Edition VII, making it the first commercial UNIX offering. DEC distributed its own PDP-11 version of V7, called V7M (for modified). V7M, developed by DEC's original Unix Engineering Group (UEG), contained many enhancements to the kernel for the PDP-11 line of computers, including significantly improved hardware error recovery and many additional device drivers. UEG evolved into the group that later developed Ultrix. Reception Due to its power and elegant simplicity, many old-time Unix users remember V7 as the pinnacle of Unix development and have dubbed it "the last true Unix", an improvement over all preceding and following Unices. At the time of its release, though, its greatly extended feature set came at the expense of a decrease in performance compared to V6, a shortcoming that was largely corrected by the user community. The number of system calls in Version 7 was only around 50, while later Unix and Unix-like systems continued to add many more. Released as free software In 2002, Caldera International released V7 as FOSS under a permissive BSD-like software license. Bootable images for V7 can still be downloaded today, and can be run on modern hosts using PDP-11 emulators such as SIMH. An x86 port has been developed by Nordier & Associates. Paul Allen maintained several publicly accessible historic computer systems, including a PDP-11/70 running Unix Version 7. New features in Version 7 Many new features were introduced in Version 7. Programming tools: lex, lint, and make. The Portable C Compiler (pcc) was provided along with the earlier, PDP-11-specific C compiler by Ritchie. These first appeared in the Research Unix lineage in Version 7, although early versions of some of them had already been picked up by PWB/UNIX.
New commands: the Bourne shell, at, awk, calendar, f77, fortune, tar (replacing the tp command), touch Networking support, in the form of uucp and Datakit New system calls: access, acct, alarm, chroot (originally used to test the V7 distribution during preparation), exece, ioctl, lseek (previously only 24-bit offsets were available), umask, utime New library calls: The new stdio routines, malloc, getenv, popen/system Environment variables A maximum file size of just over one gigabyte, through a system of indirect addressing Multiplexed files A feature that did not survive long was a second way (besides pipes) to do inter-process communication: multiplexed files. A process could create a special type of file with the mpx system call; other processes could then open this file to get a "channel", denoted by a file descriptor, which could be used to communicate with the process that created the multiplexed file. Mpx files were considered experimental, not enabled in the default kernel, and disappeared from later versions, which offered sockets (BSD) or CB UNIX's IPC facilities (System V) instead (although mpx files were still present in 4.1BSD). See also Version 6 Unix Seventh Edition Unix terminal interface Ancient UNIX References External links Unix Seventh Edition manual (Bell Labs) Browsable source code PDP Unix Preservation Society Bell Labs Unices Berkeley Software Distribution Discontinued operating systems Free software operating systems 1979 software
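Many of the interfaces listed above survive essentially unchanged in modern systems. A small C program exercising several of the V7 introductions — written with modern POSIX spellings and headers rather than V7's exact declarations:

```c
#include <stdio.h>      /* the stdio routines were new in V7 */
#include <stdlib.h>     /* malloc, getenv */
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    umask(022);                        /* umask: a new system call in V7 */

    char *home = getenv("HOME");       /* environment variables debuted in V7 */
    printf("HOME=%s\n", home ? home : "(unset)");

    int fd = open("demo.dat", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) return 1;
    lseek(fd, 1L << 20, SEEK_SET);     /* lseek replaced V6's 24-bit seek offsets */
    write(fd, "x", 1);
    close(fd);

    FILE *p = popen("date", "r");      /* popen was also introduced in V7 */
    if (p) {
        char buf[64];
        if (fgets(buf, sizeof buf, p)) fputs(buf, stdout);
        pclose(p);
    }
    return 0;
}
```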
Operating System (OS)
628
TwisterOS Twister OS (Twister for short) is a 32-bit operating system created by Pi Labs, originally for the Raspberry Pi single-board computer, with an x86_64 PC version released a few months later. Twister is based on Raspberry Pi OS Lite and uses the XFCE desktop environment. Twister OS also has a version called "Twister OS Armbian" designed for ARM SBCs with the RK3399 CPU. There are four versions of the operating system: Twister OS Full (for the Raspberry Pi 4), Twister OS Lite (a stripped-down version with only the themes), Twister UI (for x86_64 PCs running Linux Mint or Xubuntu) and Twister OS Armbian (for RK3399 CPUs). Features Twister OS has 7 main desktop themes, 5 of which have dark modes. Twister OS has its own theme, called the "Twister OS theme", which is similar to Ubuntu's desktop theme. The Twister 95, XP, 7, 10, and 11 themes are similar to the themes of the Windows 95, XP, 7, 10 and 11 operating systems. The iTwister and iTwister Sur desktop themes are similar to the themes of macOS. Box86 is an emulator used to run x86 software and games on ARM systems. Wine is a compatibility layer that lets the user run Windows applications on non-Windows systems. CommanderPi is a system monitoring and configuration tool designed to check system information and overclock the CPU. Other Twister versions Twister OS Lite Twister OS Lite is for the Raspberry Pi as well. The Lite version comes with only the themes of Twister OS, as well as Box86 and Wine. Twister UI Twister UI is very similar to Twister OS; the only difference is that Twister UI is used on computers that are not single-board computers. Twister UI is designed to be installed by running a setup script on an existing installation of Linux Mint (XFCE) or Xubuntu. Twister OS Armbian Twister OS Armbian is a version of Twister OS that can run on SBCs with RK3399 CPUs, such as the Rock Pi 4B. Twister OS Armbian also comes preinstalled on eMMC chips inside Rock Pi 4 Plus models. Twister OS Armbian is based on the Armbian Linux operating system. References Computers Linux
Operating System (OS)
629
System time In computer science and computer programming, system time represents a computer system's notion of the passage of time. In this sense, time also includes the passing of days on the calendar. System time is measured by a system clock, which is typically implemented as a simple count of the number of ticks that have transpired since some arbitrary starting date, called the epoch. For example, Unix and POSIX-compliant systems encode system time ("Unix time") as the number of seconds elapsed since the start of the Unix epoch at 1 January 1970 00:00:00 UT, not counting leap seconds. Systems that implement the 32-bit and 64-bit versions of the Windows API, such as Windows 9x and Windows NT, provide the system time as both SYSTEMTIME, represented as a year/month/day/hour/minute/second/milliseconds value, and FILETIME, represented as a count of the number of 100-nanosecond ticks since 1 January 1601 00:00:00 UT as reckoned in the proleptic Gregorian calendar. System time can be converted into calendar time, which is a form more suitable for human comprehension. For example, the Unix system time 1,000,000,000 seconds since the beginning of the epoch translates into the calendar time 9 September 2001 01:46:40 UT. Library subroutines that handle such conversions may also deal with adjustments for time zones, daylight saving time (DST), leap seconds, and the user's locale settings. Library routines are also generally provided that convert calendar times into system times. Other time measurements Closely related to system time is process time, which is a count of the total CPU time consumed by an executing process. It may be split into user and system CPU time, representing the time spent executing user code and system kernel code, respectively. Process times are a tally of CPU instructions or clock cycles and generally have no direct correlation to wall time. File systems keep track of the times that files are created, modified, and/or accessed by storing timestamps in the file control block (or inode) of each file and directory. History Most first-generation personal computers did not keep track of dates and times. These included systems that ran the CP/M operating system, as well as early models of the Apple II, the BBC Micro, and the Commodore PET, among others. Add-on peripheral boards that included real-time clock chips with on-board battery back-up were available for the IBM PC and XT, but the IBM AT was the first widely available PC that came equipped with date/time hardware built into the motherboard. Prior to the widespread availability of computer networks, most personal computer systems that did track system time did so only with respect to local time and did not make allowances for different time zones. With current technology, most modern computers keep track of local civil time, as do many other household and personal devices such as VCRs, DVRs, cable TV receivers, PDAs, pagers, cell phones, fax machines, telephone answering machines, cameras, camcorders, central air conditioners, and microwave ovens. Microcontrollers and small single-board computers operating within embedded systems (such as the Raspberry Pi, Arduino, and other similar systems) do not always have internal hardware to keep track of time. Many such controller systems operate without knowledge of the external time. Those that require such information typically initialize their base time upon rebooting by obtaining the current time from an external source, such as from a time server or external clock, or by prompting the user to manually enter the current time.
Implementation The system clock is typically implemented as a programmable interval timer that periodically interrupts the CPU, which then starts executing a timer interrupt service routine. This routine typically adds one tick to the system clock (a simple counter) and handles other periodic housekeeping tasks (preemption, etc.) before returning to the task the CPU was executing before the interruption. Retrieving system time The following tables illustrate methods for retrieving the system time in various operating systems, programming languages, and applications. Values marked by (*) are system-dependent and may differ across implementations. All dates are given as Gregorian or proleptic Gregorian calendar dates. Note that the resolution of an implementation's measurement of time does not imply the same precision of such measurements. For example, a system might return the current time as a value measured in microseconds, but actually be capable of discerning individual clock ticks with a frequency of only 100 Hz (10 ms). Operating systems Programming languages and applications See also Notes References External links Critical and Significant Dates, J. R. Stockton (retrieved 3 December 2015) The Boost Date/Time Library (C++) The Boost Chrono Library (C++) The Chronos Date/Time Library (Smalltalk) Joda Time, The Joda Date/Time Library (Java) The Perl DateTime Project (Perl) date: Ruby Standard Library Documentation (Ruby) Operating system technology Computer programming Computer real-time clocks
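On a POSIX system, the distinctions drawn above — raw system time, its conversion to human-readable calendar time, and process (CPU) time — can be observed with standard C library calls; a minimal sketch:

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* System time: seconds since the Unix epoch (1 January 1970 00:00:00 UT). */
    time_t now = time(NULL);
    printf("Unix time: %lld\n", (long long)now);

    /* Conversion to calendar time, honoring the local time zone. */
    char buf[64];
    struct tm *cal = localtime(&now);
    if (cal && strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", cal))
        printf("Calendar time: %s\n", buf);

    /* Process time: CPU consumed by this process, unrelated to wall time. */
    clock_t cpu = clock();
    printf("CPU time used: %.3f s\n", (double)cpu / CLOCKS_PER_SEC);
    return 0;
}
```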
Operating System (OS)
630
IOS (disambiguation) iOS is a mobile operating system developed by Apple Inc. IOS or ios may also refer to: Technology Instructor operating station, in flight simulators International operator services, making an international call via a live telephone operator Computing Cisco IOS, router and switch operating system Cisco IOS XE Cisco IOS XR ios (C++), a C++ header file ("input/output stream") I/O System (86-DOS), DOS BIOS in 86-DOS I/O System (MS-DOS), DOS BIOS in MS-DOS Input/Output Supervisor, a part of the control program in the IBM mainframe OS/360 operating system and its successors IOS (Wii firmware), firmware that runs on the Nintendo Wii, used with Wii homebrew Internet operating system, any operating system designed to run all of its applications and services through an Internet client, generally a web browser Interorganizational system, a system between organizations iOS, the operating system by Apple Inc. for the iPhone and other mobile hardware Organizations Illinois Ornithological Society, an American state-based bird club Institute for Objectivist Studies, the former name of The Atlas Society Institute of Oriental Studies of the Russian Academy of Sciences International Organizations for Succulent Plant Research, in Zürich, Switzerland Interorbital Systems, an aerospace design firm in Mojave, California, US Investors Overseas Service, an investment company Places Ios, a Greek island Ilhéus Jorge Amado Airport (IATA airport code IOS), Ilhéus, Brazil Media IOS Press, a Dutch scientific and medical publisher Independent on Sunday, a UK newspaper Ireland on Sunday, an Irish newspaper published from 1996 to 2006 Other uses International Open Series, an amateur snooker tour See also eyeOS I/O System (disambiguation) IO (disambiguation) Input/Output Control System (IOCS) BIOS (disambiguation) XIOS, Extended Input/Output System
OS X Mavericks OS X Mavericks (version 10.9) is the 10th major release of macOS, Apple Inc.'s desktop and server operating system for Macintosh computers. OS X Mavericks was announced on June 10, 2013, at WWDC 2013, and was released on October 22, 2013, worldwide. The update emphasized battery life, Finder improvements, other improvements for power users, and continued iCloud integration, as well as bringing more of Apple's iOS apps to OS X. Mavericks, which was named after the surfing location in Northern California, was the first in the series of OS X releases named for places in Apple's home state; earlier releases used the names of big cats. OS X Mavericks was the first major OS X release to be offered as a free upgrade, and only the second free release overall, after Mac OS X 10.1 "Puma". It is the final Mac operating system to feature the Lucida Grande typeface as the standard system font, which had been in use since the Mac OS X Public Beta in 2000. History Apple announced OS X Mavericks on June 10, 2013, during the company's Apple Worldwide Developers Conference (WWDC) keynote (which also introduced iOS 7, a revised MacBook Air, the sixth-generation AirPort Extreme, the fifth-generation AirPort Time Capsule, and a redesigned Mac Pro). During a keynote on October 22, 2013, Apple announced that the official release of 10.9 on the Mac App Store would be available immediately, and that unlike previous versions of OS X, 10.9 would be available at no charge to all users running Snow Leopard (10.6.8) or later. At the same keynote, Apple also announced that future upgrades to OS X and iWork would be free. System requirements OS X Mavericks can run on any Mac that can run OS X Mountain Lion; as with Mountain Lion, 2 GB of RAM, 8 GB of available storage, and OS X 10.6.8 (Snow Leopard) or later are required. Mavericks and later versions are all available for free. The full list of compatible models: iMac (Mid 2007 or later) MacBook (13-inch Aluminum, Late 2008), (13-inch Polycarbonate, Early 2009 or later) MacBook Pro (13-inch, Mid/Late 2007 or later), (15-inch or 17-inch, Mid/Late 2007 or later) MacBook Air (Late 2008 or later) Mac mini (Early 2009 or later) Mac Pro (Early 2008 or later) Xserve (Early 2009) System features The menu bar and the Dock are available on each display. Additionally, AirPlay compatible displays such as the Apple TV can be used as an external display. Mission Control has been updated to organize and switch between Desktop workspaces independently between multiple displays. OS X Mavericks introduced App Nap, which sleeps apps that are not currently visible. Any app running on Mavericks can be eligible for this feature by default. Compressed Memory is a virtual memory compression system which automatically compresses data from inactive apps when approaching maximum memory capacity. Timer coalescing is a feature that enhances energy efficiency by reducing CPU usage by up to 72 percent. This allows MacBooks to run for longer periods of time and desktop Macs to run cooler. Mavericks supports OpenGL 4.1 Core Profile and OpenCL 1.2. Server Message Block version 2 (SMB2) is now the default protocol for sharing files, rather than AFP. This is to increase performance and cross-platform compatibility. Some skeuomorphs, such as the leather texture in Calendar, the legal pad theme of Notes, and the book-like appearance of Contacts, have been removed from the UI. iCloud Keychain stores a user's usernames, passwords and Wi-Fi passwords to allow the user to fill this information into forms when needed. The system has native LinkedIn sharing integration.
IPoTB (Internet Protocol over Thunderbolt Bridge) Thunderbolt networking is supported in Mavericks. This feature allows the user to quickly transfer a large amount of data between two Macs. Notification Center allows the user to reply to notifications instantly, allows websites to send notifications, and, when the user wakes up a Mac that was in a sleep state, displays a summary of missed notifications before the machine is unlocked. Some system alerts, such as low battery, removal of drives without ejecting, and a failed Time Machine backup, have been moved to Notification Center. The "traffic light" close, minimize, and maximize window buttons appear somewhat brighter than in Mac OS X Lion and OS X Mountain Lion. App features Finder gets enhancements such as tabs, full-screen support, and document tags. Pinch-to-zoom and swipe-to-navigate-history gestures have been removed from Finder, although both remain supported elsewhere in the system. The new iBooks application allows the user to read books purchased through the iBooks Store. The app also allows the user to purchase new content from the iBooks Store, and includes a night mode that makes reading easier in dark environments. The new Maps application offers the same functionality as iOS Maps. The Calendar app has enhancements such as being able to add Facebook events, and an estimate of the travel time to an event. The Safari browser features significantly enhanced JavaScript performance, which Apple claims is faster than that of Chrome and Firefox. A Top Sites view allows the user to quickly access the most viewed sites by default. However, the user can pin or remove websites from the view. The sidebar now allows the user to view their bookmarks, reading list and shared links. Safari can also auto-generate random passwords and remember them through iCloud Keychain. Other applications found in Mavericks AirPort Utility App Store Archive Utility Audio MIDI Setup Automator Bluetooth File Exchange Boot Camp Assistant Calculator Chess ColorSync Utility Console Contacts Dictionary Digital Color Meter Disk Utility DVD Player FaceTime Font Book Game Center GarageBand (may not be pre-installed) Grab Grapher iMovie (may not be pre-installed) iTunes Image Capture Ink (can only be accessed by connecting a graphics tablet to the Mac) Keychain Access Keynote (may not be pre-installed) Mail Messages Migration Assistant Notes Notification Center Numbers (may not be pre-installed) Pages (may not be pre-installed) Photo Booth Preview QuickTime Player Reminders Script Editor Stickies System Information Terminal TextEdit Time Machine VoiceOver Utility X11/XQuartz (may not be pre-installed) Removed functionality The Open Transport API has been removed. USB syncing of calendar, contacts and other information to iOS devices has been removed, instead requiring the use of iCloud. QuickTime 10 no longer supports many older video codecs and converts them to the ProRes format when opened. Older video codecs cannot be viewed in Quick Look. Apple also removed the ability to sync mobile iCloud Notes if iOS devices were upgraded from iOS 8 to iOS 9, effectively forcing all Mavericks users to update or upgrade their computers. Reception OS X Mavericks has received mixed reviews. One complaint is that Apple removed the local sync services, which forces users to use iCloud to sync iOS devices with the desktop OS. However, this feature has since returned in the 10.9.3 and iTunes 11.2 updates. The Verge stated that OS X Mavericks was "a gentle evolution of the Mac operating system".
Release history See also Aqua (user interface) macOS version history List of Macintosh software References X86-64 operating systems 2013 software Computer-related introductions in 2013
Berkeley Software Distribution The Berkeley Software Distribution or Berkeley Standard Distribution (BSD) is a discontinued operating system based on Research Unix, developed and distributed by the Computer Systems Research Group (CSRG) at the University of California, Berkeley. The term "BSD" commonly refers to its descendants, including FreeBSD, OpenBSD, NetBSD, and DragonFly BSD. BSD was initially called Berkeley Unix because it was based on the source code of the original Unix developed at Bell Labs. In the 1980s, BSD was widely adopted by workstation vendors in the form of proprietary Unix variants such as DEC Ultrix and Sun Microsystems SunOS due to its permissive licensing and familiarity to many technology company founders and engineers. Although these proprietary BSD derivatives were largely superseded in the 1990s by UNIX SVR4 and OSF/1, later releases provided the basis for several open-source operating systems including FreeBSD, OpenBSD, NetBSD, DragonFly BSD, Darwin, and TrueOS. These, in turn, have been used by proprietary operating systems, including Apple's macOS and iOS, which derived from them, and Microsoft Windows, which legally incorporated at least part of BSD's TCP/IP code. Code from FreeBSD was also used to create the operating system for the PlayStation 4 and Nintendo Switch. History The earliest distributions of Unix from Bell Labs in the 1970s included the source code to the operating system, allowing researchers at universities to modify and extend Unix. The operating system arrived at Berkeley in 1974, at the request of computer science professor Bob Fabry, who had been on the program committee for the Symposium on Operating Systems Principles where Unix was first presented. A PDP-11/45 was bought to run the system, but for budgetary reasons, this machine was shared with the mathematics and statistics groups at Berkeley, who used RSTS, so that Unix only ran on the machine eight hours per day (sometimes during the day, sometimes during the night). A larger PDP-11/70 was installed at Berkeley the following year, using money from the Ingres database project. Understanding BSD requires delving far back into the history of Unix, the operating system first released by AT&T Bell Labs in 1969. BSD began life as a variant of Unix that programmers at the University of California at Berkeley, initially led by Bill Joy, began developing in the late 1970s. At first, BSD was not a clone of Unix, or even a substantially different version of it. It just included some extra features, which were intertwined with code owned by AT&T. In 1975, Ken Thompson took a sabbatical from Bell Labs and came to Berkeley as a visiting professor. He helped to install Version 6 Unix and started working on a Pascal implementation for the system. Graduate students Chuck Haley and Bill Joy improved Thompson's Pascal and implemented the ex text editor. Other universities became interested in the software at Berkeley, and so in 1977 Joy started compiling the first Berkeley Software Distribution (1BSD), which was released on March 9, 1978. 1BSD was an add-on to Version 6 Unix rather than a complete operating system in its own right. Some thirty copies were sent out. The second Berkeley Software Distribution (2BSD), released in May 1979, included updated versions of the 1BSD software as well as two new programs by Joy that persist on Unix systems to this day: the vi text editor (a visual version of ex) and the C shell. Some 75 copies of 2BSD were sent out by Bill Joy.
A VAX computer was installed at Berkeley in 1978, but the port of Unix to the VAX architecture, UNIX/32V, did not take advantage of the VAX's virtual memory capabilities. The kernel of 32V was largely rewritten to include Berkeley graduate student Ozalp Babaoglu's virtual memory implementation, and a complete operating system including the new kernel, ports of the 2BSD utilities to the VAX, and the utilities from 32V was released as 3BSD at the end of 1979. 3BSD was also alternatively called Virtual VAX/UNIX or VMUNIX (for Virtual Memory Unix), and BSD kernel images were normally called /vmunix until 4.4BSD. After 4.3BSD was released in June 1986, it was determined that BSD would move away from the aging VAX platform. The Power 6/32 platform (codenamed "Tahoe") developed by Computer Consoles Inc. seemed promising at the time, but was abandoned by its developers shortly thereafter. Nonetheless, the 4.3BSD-Tahoe port (June 1988) proved valuable, as it led to a separation of machine-dependent and machine-independent code in BSD which would improve the system's future portability. In addition to portability, the CSRG worked on an implementation of the OSI network protocol stack, improvements to the kernel virtual memory system and (with Van Jacobson of LBL) new TCP/IP algorithms to accommodate the growth of the Internet. Until then, all versions of BSD used proprietary AT&T Unix code, and were therefore subject to an AT&T software license. Source code licenses had become very expensive and several outside parties had expressed interest in a separate release of the networking code, which had been developed entirely outside AT&T and would not be subject to the licensing requirement. This led to Networking Release 1 (Net/1), which was made available to non-licensees of AT&T code and was freely redistributable under the terms of the BSD license. It was released in June 1989. After Net/1, BSD developer Keith Bostic proposed that more non-AT&T sections of the BSD system be released under the same license as Net/1. To this end, he started a project to reimplement most of the standard Unix utilities without using the AT&T code. Within eighteen months, all of the AT&T utilities had been replaced, and it was determined that only a few AT&T files remained in the kernel. These files were removed, and the result was the June 1991 release of Networking Release 2 (Net/2), a nearly complete operating system that was freely distributable. Net/2 was the basis for two separate ports of BSD to the Intel 80386 architecture: the free 386BSD by William Jolitz and the proprietary BSD/386 (later renamed BSD/OS) by Berkeley Software Design (BSDi). 386BSD itself was short-lived, but became the initial code base of the NetBSD and FreeBSD projects that were started shortly thereafter. BSDi soon found itself in legal trouble with AT&T's Unix System Laboratories (USL) subsidiary, then the owners of the System V copyright and the Unix trademark. The USL v. BSDi lawsuit was filed in 1992 and led to an injunction on the distribution of Net/2 until the validity of USL's copyright claims on the source could be determined. The lawsuit slowed development of the free-software descendants of BSD for nearly two years while their legal status was in question, and as a result systems based on the Linux kernel, which did not have such legal ambiguity, gained greater support. The lawsuit was settled in January 1994, largely in Berkeley's favor. 
Of the 18,000 files in the Berkeley distribution, only three had to be removed and 70 modified to show USL copyright notices. A further condition of the settlement was that USL would not file further lawsuits against users and distributors of the Berkeley-owned code in the upcoming 4.4BSD release. The final release from Berkeley was 1995's 4.4BSD-Lite Release 2, after which the CSRG was dissolved and development of BSD at Berkeley ceased. Since then, several variants based directly or indirectly on 4.4BSD-Lite (such as FreeBSD, NetBSD, OpenBSD and DragonFly BSD) have been maintained. The permissive nature of the BSD license has allowed many other operating systems, both open-source and proprietary, to incorporate BSD source code. For example, Microsoft Windows used BSD code in its implementation of TCP/IP and bundles recompiled versions of BSD's command-line networking tools since Windows 2000. Darwin, the basis for Apple's macOS and iOS, is based on 4.4BSD-Lite2 and FreeBSD. Various commercial Unix operating systems, such as Solaris, also incorporate BSD code. Relationship to Research Unix Starting with the 8th Edition, versions of Research Unix at Bell Labs had a close relationship to BSD. This began when 4.1cBSD for the VAX was used as the basis for Research Unix 8th Edition. This continued in subsequent versions, such as the 9th Edition, which incorporated source code and improvements from 4.3BSD. The result was that these later versions of Research Unix were closer to BSD than they were to System V. In a Usenet posting from 2000, Dennis Ritchie described this relationship between BSD and Research Unix: Relationship to System V Eric S. Raymond summarizes the longstanding relationship between System V and BSD, stating, "The divide was roughly between longhairs and shorthairs; programmers and technical people tended to line up with Berkeley and BSD, more business-oriented types with AT&T and System V." In 1989, David A. Curry wrote about the differences between BSD and System V. He characterized System V as being often regarded as the "standard Unix." However, he described BSD as more popular among university and government computer centers, due to its advanced features and performance: Technology Berkeley sockets Berkeley's Unix was the first Unix to include libraries supporting the Internet Protocol stacks: Berkeley sockets. A Unix implementation of IP's predecessor, the ARPAnet's NCP, with FTP and Telnet clients, had been produced at the University of Illinois in 1975, and was available at Berkeley. However, memory scarcity on the PDP-11 forced a complicated design and caused performance problems. By integrating sockets with the Unix operating system's file descriptors, it became almost as easy to read and write data across a network as it was to access a disk. The AT&T laboratory eventually released its own STREAMS library, which incorporated much of the same functionality in a software stack with a different architecture, but the wide distribution of the existing sockets library reduced the impact of the new API. Early versions of BSD were used to form Sun Microsystems' SunOS, founding the first wave of popular Unix workstations.
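The point about descriptor integration can be made concrete with a minimal sketch. It assumes a POSIX/BSD-style system and, purely for illustration, a web server already listening on 127.0.0.1 port 80; once connect() succeeds, the ordinary write() and read() calls are used on the socket exactly as they would be on a disk file:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);   // a socket is a file descriptor
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);                  // assumed local web server
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof addr) == 0) {
            const char request[] = "HEAD / HTTP/1.0\r\n\r\n";
            write(fd, request, sizeof request - 1); // same call used for files
            char buf[512];
            ssize_t n = read(fd, buf, sizeof buf);  // ...and so is read()
            if (n > 0) fwrite(buf, 1, n, stdout);
        }
        close(fd);
        return 0;
    }

This descriptor-based design is precisely what the later STREAMS library competed with, and it survives essentially unchanged in every modern BSD descendant.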
Binary compatibility Some BSD operating systems can run much native software of several other operating systems on the same architecture, using a binary compatibility layer. Much simpler and faster than emulation, this allows, for instance, applications intended for Linux to be run at effectively full speed. This makes BSDs not only suitable for server environments, but also for workstation ones, given the increasing availability of commercial or closed-source software for Linux only. This also allows administrators to migrate legacy commercial applications, which may have only supported commercial Unix variants, to a more modern operating system, retaining the functionality of such applications until they can be replaced by a better alternative. Standards Current BSD operating system variants support many of the common IEEE, ANSI, ISO, and POSIX standards, while retaining most of the traditional BSD behavior. Like AT&T Unix, the BSD kernel is monolithic, meaning that device drivers in the kernel run in privileged mode, as part of the core of the operating system. BSD descendants Several operating systems are based on BSD, including FreeBSD, OpenBSD, NetBSD, MidnightBSD, GhostBSD, Darwin and DragonFly BSD. Both NetBSD and FreeBSD were created in 1993. They were initially derived from 386BSD (also known as "Jolix"), and merged the 4.4BSD-Lite source code in 1994. OpenBSD was forked from NetBSD in 1995, and DragonFly BSD was forked from FreeBSD in 2003. BSD was also used as the basis for several proprietary versions of Unix, such as Sun's SunOS, Sequent's DYNIX, NeXT's NeXTSTEP, DEC's Ultrix and OSF/1 AXP (now Tru64 UNIX). NeXTSTEP later became the foundation for Apple Inc.'s macOS. See also BSD Daemon BSD licenses Comparison of BSD operating systems List of BSD operating systems Unix wars References Bibliography Marshall K. McKusick, Keith Bostic, Michael J. Karels, John S. Quarterman, The Design and Implementation of the 4.4BSD Operating System (Addison Wesley, 1996) Marshall K. McKusick, George V. Neville-Neil, The Design and Implementation of the FreeBSD Operating System (Addison Wesley, August 2, 2004) Samuel J. Leffler, Marshall K. McKusick, Michael J. Karels, John S. Quarterman, The Design and Implementation of the 4.3BSD UNIX Operating System (Addison Wesley, November 1989) Peter H. Salus, The Daemon, the GNU & The Penguin (Reed Media Services, September 1, 2008) Peter H. Salus, A Quarter Century of UNIX (Addison Wesley, June 1, 1994) Peter H. Salus, Casting the Net (Addison-Wesley, March 1995) External links A timeline of BSD and Research UNIX UNIX History – History of UNIX and BSD using diagrams The Design and Implementation of the 4.4BSD Operating System The Unix Tree: Source code and manuals for old versions of Unix EuroBSDCon, an annual event in Europe in September, October or November, founded in 2001 BSDCan, a conference in Ottawa, Ontario, Canada, held annually in May since 2004, in June since 2015 AsiaBSDCon, a conference in Tokyo, held annually in March of each year, since 2007 mdoc.su – short manual page URLs for FreeBSD, OpenBSD, NetBSD and DragonFly BSD, a web-service written in nginx BXR.SU – Super User's BSD Cross Reference, a userland and kernel source code search engine based on OpenGrok and nginx 1977 software Free software operating systems Free software programmed in C Operating system families Science and technology in the San Francisco Bay Area University of California, Berkeley
Dave Cutler David Neil Cutler Sr. (born March 13, 1942) is an American software engineer. He developed several computer operating systems, namely Microsoft's Windows NT and Digital Equipment Corporation's RSX-11M, VAXELN, and VMS. Personal history Cutler was born in Lansing, Michigan and grew up in DeWitt, Michigan. After graduating from Olivet College, Michigan, in 1965, he went to work for DuPont. Cutler holds at least 20 patents, and is affiliate faculty in the Computer Science Department at the University of Washington. Cutler is an avid auto racing driver. He competed in the Atlantic Championship from 1996 to 2002, scoring a career best of 8th on the Milwaukee Mile in 2000. Cutler was elected a member of the National Academy of Engineering in 1994 for the design and engineering of commercially successful operating systems. Cutler is a member of Adelphic Alpha Pi Fraternity at Olivet College, Michigan. DuPont (1965 to 1971) Cutler's first exposure to computers came when he was tasked with building a computer simulation model for one of DuPont's customers using IBM's GPSS-3 language on an IBM model 7044. This work led to an interest in how computers and their operating systems worked. Digital Equipment Corporation (1971 to 1988) Cutler left DuPont to pursue his interest in computer systems, beginning with Digital Equipment Corporation in 1971. He worked at the famous "Mill" facility in Maynard, Massachusetts. RSX-11M See RSX-11. VMS In April 1975, Digital began a hardware project, code-named Star, to design a 32-bit virtual address extension to its PDP-11. In June 1975, Cutler, Dick Hustvedt, and Peter Lipman were appointed the technical project leaders for the software project, code-named Starlet, to develop a totally new operating system for the Star family of processors. These two projects were tightly integrated from the beginning. The three technical leaders of the Starlet project together with three technical leaders of the Star project formed the "Blue Ribbon Committee" at Digital, which produced the fifth design evolution for the programs. The design featured simplifications to the memory management and process scheduling schemes of the earlier proposals, and the architecture was accepted. The Star and Starlet projects culminated in the development of the VAX-11/780 superminicomputer and the VAX/VMS operating system, respectively. PRISM and MICA projects Digital began working on a new CPU using reduced instruction set computer (RISC) design principles in 1986. Cutler, who was working in DEC's DECwest facility in Bellevue, Washington, was selected to head PRISM, a project to develop the company's RISC machine. Its operating system, code-named MICA, was to embody the next generation of design principles and have a compatibility layer for Unix and VMS. The RISC machine was to be based on emitter coupled logic (ECL) technology, and was one of three ECL projects Digital was undertaking at the time. Funding the research and development (R&D) costs of multiple ECL projects yielding products that would ultimately compete against each other was a strain. Of the three ECL projects, the VAX 9000 was the only one that was directly commercialized. Primarily because of the early successes of the PMAX advanced development project and the need for differing business models, PRISM was canceled in 1988 in favor of PMAX. PRISM later surfaced as the basis of Digital's Alpha family of computer systems. Attitude towards Unix Cutler is known for his disdain for Unix.
Said one team member who worked with Cutler: Microsoft (1988 to present) Microsoft Windows NT Cutler left Digital for Microsoft in October 1988 and led the development of Windows NT. Later, he worked on targeting Windows NT to Digital's 64-bit Alpha architecture, then on Windows 2000. After the demise of Windows on Alpha (and the demise of Digital), he was instrumental in porting Windows to AMD's new 64-bit AMD64 architecture. He was officially involved with the Windows XP Pro x64 and Windows Server 2003 SP1 x64 releases. He moved to working on Microsoft's Live Platform in August 2006. Dave Cutler was awarded the status of Senior Technical Fellow at Microsoft. Microsoft Windows Azure At the 2008 Professional Developers Conference, Microsoft announced Azure Services Platform, a cloud-based operating system which Microsoft was developing. During the conference keynote, Cutler was mentioned as a lead developer on the project, along with Amitabh Srivastava. Microsoft Xbox A spokesperson for Microsoft later confirmed that Cutler was no longer working on Windows Azure and had joined the Xbox team. No further information was provided as to what Cutler's role was, nor what he was working on within the team. In May 2013, Microsoft announced the Xbox One console, and Cutler was mentioned as having worked on developing the host OS part of the system running inside the new gaming device. Apparently his work was focused on creating an optimized version of Microsoft's Hyper-V Host OS specifically designed for Xbox One. Awards Recognized as a 2007 National Medal of Technology and Innovation Laureate, awarded on 29 September 2008 at a White House ceremony in Washington, DC. Honored as a Computer History Museum Fellow on 16 April 2016 at the Computer History Museum in Mountain View, California. References Bibliography External links Dave Cutler video on his career as part of his Computer History Museum Fellow award on YouTube Dave Cutler race driving career statistics 1942 births Living people American computer programmers American computer scientists Microsoft technical fellows Microsoft Windows people Digital Equipment Corporation people Kernel programmers Atlantic Championship drivers People from Lansing, Michigan Racing drivers from Michigan Operating system people Olivet Comets football players
BootX (Apple) BootX is a software-based bootloader designed and developed by Apple Inc. for use on the company's Macintosh computer range. BootX prepares the computer for use by loading all required device drivers and then starting up Mac OS X by booting the kernel; it is used on all PowerPC Macintoshes running Mac OS X 10.2 or later. BootROM, a read-only memory (ROM) computer chip containing OpenFirmware, briefly shows a graphical bootsplash on all compatible Macintosh computers: a grey Apple logo with a spinning cursor that appears during the startup sequence. The program is freely available as part of the Darwin operating system under the open-source Apple Public Source License. BootX was superseded by a nearly identical bootloader named boot.efi and an Extensible Firmware Interface ROM on the release of the Intel-based Mac. History Older Macintoshes, dating from 1984 until 1998, utilized a basic bootloader: the bootloader was solely a ROM chip, varying in size up to 4 megabytes (MB), which contained both the computer code to boot the computer and to run the Mac OS operating system. The ROM-resident portion of the Mac OS is the Macintosh Toolbox, and the boot-ROM part of that ROM was retroactively named Old World ROM upon the release of the New World ROM Macs, starting with the first iMac. The ROM-resident Macintosh Toolbox differs greatly from the design of the modern Macintosh, which generally uses a hard drive of large capacity to store the operating system. This bootloader was used in all Macintosh computers until mid-1998. With the advent of the iMac series of Macintoshes, the firmware was updated. The ROM was reduced in size to 1 MB, called BootROM, and the remainder of the ROM was moved to the file Mac OS ROM in the Mac OS System Folder, stored on the hard drive. This ROM used a full implementation of the OpenFirmware standard (contained in BootROM) and was named the New World ROM. In 2001, with the release of Mac OS X 10.0, the Mac OS ROM file was replaced with the BootX bootloader file. In 2002, with the release of Mac OS X 10.2, the historical "Happy Mac" start-up picture was replaced with a grey Apple logo. With the introduction of the Intel Mac in 2006, BootROM was replaced by the nearly identical Extensible Firmware Interface ROM (although Apple still calls it BootROM) and the boot.efi file. Features To make the boot loader appealing to other operating system developers, Apple added features to allow flexibility in the booting process, such as network booting using TFTP and loading Mach-O- and ELF-formatted kernels. BootX can also boot from HFS, HFS+, UFS and ext2 formatted volumes. The boot loader can be manipulated at startup by holding down various key combinations to alter the booting process. Such functions include Verbose Mode, achieved by holding down the Command and V keys at startup, which replaces the default Apple logo with text-based information on the boot process, and Single User Mode, achieved by holding down the Command and S keys, which, depending on the operating system, may boot into a more basic command-line or text-based version of the operating system, to facilitate maintenance and recovery action. The ROM can also be set to require a password to access these technical functions using the OpenFirmware interface. Boot process In PowerPC-based Macintoshes, the boot process starts with the activation of BootROM, the basic Macintosh ROM, which performs a Power On Self Test to test hardware essential to startup.
On the passing of this test, the startup chime is played and control of the computer is passed to OpenFirmware. OpenFirmware initializes the random-access memory, memory management unit and hardware necessary for the ROM's operation. OpenFirmware then checks settings stored in NVRAM and builds a device tree listing all devices, by gathering their stored FCode information. On the completion of this task, BootX takes over the startup process, configuring the keyboard and display, claiming and reserving memory for various purposes, and checking to see if various key combinations are being pressed. After this process has been completed, BootX displays the grey Apple logo, spins the wait cursor, and proceeds to load the kernel and some kernel extensions and start the kernel. References External links Mac OS X at osxbook.com MacOS Boot loaders
Universal Windows Platform Universal Windows Platform (UWP) is a computing platform created by Microsoft and first introduced in Windows 10. The purpose of this platform is to help develop universal apps that run on Windows 10, Windows 10 Mobile, Windows 11, Xbox One, Xbox Series X/S and HoloLens without the need to be rewritten for each. It supports Windows app development using C++, C#, VB.NET, and XAML. The API is implemented in C++, and supported in C++, VB.NET, C#, F# and JavaScript. Designed as an extension to the Windows Runtime (WinRT) platform first introduced in Windows Server 2012 and Windows 8, UWP allows developers to create apps that will potentially run on multiple types of devices. UWP does not target non-Microsoft systems. Microsoft's solution for other platforms is .NET MAUI (previously "Xamarin.Forms"), an open-source API created by Xamarin, a Microsoft subsidiary since 2016. Community solutions also exist for non-targeted platforms, such as the Uno Platform. Compatibility UWP is a part of Windows 10, Windows 10 Mobile and Windows 11. UWP apps do not run on earlier Windows versions. Apps targeting this platform are developed natively using Visual Studio 2015, Visual Studio 2017 or Visual Studio 2019. Older Metro-style apps for Windows 8.1, Windows Phone 8.1 or for both (universal 8.1) need modifications to migrate to UWP. Some Windows platform features in later versions have been exclusive to UWP and software specifically packaged for it, and are not usable in other architectures such as the existing WinAPI, WPF, and Windows Forms. However, as of 2019, Microsoft has taken steps to increase the parity between these application platforms and make UWP features usable inside non-UWP software. Microsoft introduced XAML Islands (a method for embedding UWP controls and widgets into non-UWP software) as part of the Windows 10 May 2019 update, and stated that it would also allow UWP functions and Windows Runtime components to be invoked within non-packaged software. API bridges UWP Bridges translate calls in other application programming interfaces (APIs) to the UWP interface, so that applications written against these APIs can run on UWP. Two bridges were announced during the 2015 Build keynote to allow Android and iOS apps to be ported to Windows 10 Mobile. Microsoft maintains support for bridges for Windows desktop apps, progressive web apps, Microsoft Silverlight, and iOS's Cocoa Touch API. iOS Windows Bridge for iOS (codenamed "Islandwood") is an open-source middleware toolkit that allows iOS apps developed in Objective-C to be ported to Windows 10 by using Visual Studio 2015 to convert the Xcode project into a Visual Studio project. An early build of Windows Bridge for iOS was released as open-source software under the MIT License on August 6, 2015, while the Android version was in closed beta. This "WinObjC" project is open source on GitHub. It contains code from various existing implementations of Cocoa Touch like Cocotron and GNUstep as well as Microsoft's own code that implements iOS frameworks using UWP methods. It uses a version of the LLVM clang compiler. Android Windows Bridge for Android (codenamed "Astoria") was a runtime environment that would allow Android apps written in Java or C++ to run on Windows 10 Mobile and be published to Microsoft Store.
Kevin Gallo, technical lead of Windows Developer Platform, explained that the layer contained some limitations: Google Mobile Services and certain core APIs are not available, and apps that have "deep integration into background tasks", such as messaging software, would not run well in this environment. In February 2016, Microsoft announced that it had ceased development on Windows Bridge for Android, citing redundancies due to iOS already being a primary platform for multi-platform development, and that Windows Bridge for iOS produced native code and did not require an OS-level emulator. Instead, Microsoft encouraged the use of C# for multi-platform app development using tools from Xamarin, which they had acquired prior to the announcement. Deployment UWP provides an application model based upon its CoreApplication class and the Windows Runtime (WinRT). Universal Windows apps that are created using the UWP no longer indicate having been written for a specific OS in their manifest build; instead, they target one or more device families, such as a PC, smartphone, tablet, or Xbox One, using Universal Windows Platform Bridges. These extensions allow the app to automatically utilize the capabilities that are available to the particular device it is currently running on. A universal app may run on either a mobile phone or a tablet and provide suitable experiences on each. A universal app running on a smartphone may start behaving the way it would if it were running on a PC when the phone is connected to a desktop computer or a suitable docking station.
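As an illustration of the device-family model just described, the sketch below (not drawn from the article's sources) uses the C++/WinRT projection to query the family string at run time, which a universal app can branch on; it assumes a Windows 10 system with the C++/WinRT headers from the Windows SDK available:

    #include <winrt/Windows.System.Profile.h>
    #include <cstdio>

    int main() {
        winrt::init_apartment();  // initialize the Windows Runtime on this thread
        // DeviceFamily() yields a string such as "Windows.Desktop",
        // "Windows.Mobile" or "Windows.Xbox".
        winrt::hstring family =
            winrt::Windows::System::Profile::AnalyticsInfo::VersionInfo()
                .DeviceFamily();
        std::printf("running on: %ls\n", family.c_str());
        return 0;
    }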
Reception Games developed for UWP are subject to technical restrictions, including incompatibility with multi-video-card setups, difficulties with modding, overlays for gameplay-oriented chat clients, and key binding managers. UWP will only support DirectX 11.1 or later, so games built on older DirectX versions will not work. During Build 2016, Microsoft Xbox division head Phil Spencer announced that the company was attempting to address issues which would improve the viability of UWP for PC games, stating that Microsoft was "committed to ensuring we meet or exceed the performance expectations of full-screen games as well as the additional features including support for overlays, modding, and more." Support for AMD FreeSync and Nvidia G-Sync technologies, and disabling V-sync, was later added to UWP. Epic Games founder Tim Sweeney criticized UWP for being a walled garden, since by default UWP software may only be published and installed via Windows Store, requiring changes in system settings to enable the installation of external software (similarly to Android). Additionally, certain operating system features are exclusive to UWP and cannot be used in non-UWP software such as most video games. Sweeney characterized these moves as "the most aggressive move Microsoft has ever made" in attempting to transform PCs into a closed platform, and felt that these moves were meant to put third-party games storefronts such as Steam at a disadvantage as Microsoft is "curtailing users' freedom to install full-featured PC software and subverting the rights of developers and publishers to maintain a direct relationship with their customers". As such, Sweeney argued that end-users should be able to download UWP software and install it in the same manner as non-UWP software. Windows VP Kevin Gallo addressed Sweeney's concerns, stating that "in the Windows 10 November Update, we enabled people to easily side-load apps by default, with no UX required. We want to make Windows the best development platform regardless of technologies used, and offer tools to help developers with existing code bases of HTML/JavaScript, .NET and Win32, C++ and Objective-C bring their code to Windows, and integrate UWP capabilities. With Xamarin, UWP developers can not only reach all Windows 10 devices, but they can now use a large percentage of their C# code to deliver a fully native mobile app experience for iOS and Android." In a live interview with Giant Bomb during its E3 2016 coverage, Spencer defended the mixed reception of its UWP-exclusive releases, stating that "they all haven't gone swimmingly. Some of them have gone well", and that "there's still definitely concern that UWP and our store are somehow linked in a way that is nefarious. It's not." He also discussed Microsoft's relationships with third-party developers and distributors such as Steam, considering the service to be "a critical part of gaming's success on Windows" and stating that Microsoft planned to continue releasing games through the platform as well as its own, but that "There's going to be areas where we cooperate and there's going to be areas where we compete. The end result is better for gamers." Spencer also stated that he was a friend of Sweeney and had been in frequent contact with him. On May 30, 2019, Microsoft announced that it would support distribution of Win32 games on Microsoft Store; Spencer (who had since been promoted to head of all games operations at Microsoft, reporting directly to CEO Satya Nadella) explained that developers preferred the architecture, and that it "allow[s] for the customization and control [developers and players] come to expect from the open Windows gaming ecosystem." It was also announced that future Xbox Game Studios releases on Windows would be made available on third-party storefronts such as Steam, rather than be exclusive to Microsoft Store. References External links Guide to Universal Windows Platform (UWP) apps Comparison of UWP, Android, and iOS from a Programmer's point of view .NET Windows APIs Windows technology Xbox One Microsoft application programming interfaces
System deployment The deployment of a mechanical device, electrical system, computer program, etc., is its assembly or transformation from a packaged form to an operational working state. Deployment implies moving a product from a temporary or development state to a permanent or desired state. See also IT infrastructure deployment Development Innovation Software deployment References Systems engineering
Windows Runtime Windows Runtime (WinRT) is a platform-agnostic component and application architecture first introduced in Windows 8 and Windows Server 2012 in 2012. It is implemented in C++ and officially supports development in C++ (via C++/WinRT, C++/CX or WRL), Rust/WinRT, Python/WinRT, JavaScript-TypeScript, and the managed code languages C# and Visual Basic .NET (VB.NET). WinRT is not a runtime in a traditional sense but rather a language-independent application binary interface based on COM that allows APIs to be consumed from multiple languages, though it does provide certain services usually provided by a full-blown runtime, such as type activation. Apps using the Windows Runtime may run inside a sandboxed environment to allow greater security and stability and can natively support both x86 and ARM. WinRT components are designed with interoperability among multiple languages and APIs in mind, including native, managed and scripting languages. Built-in APIs provided by Windows which use the WinRT ABI are commonly known as WinRT APIs; however, anyone can use the WinRT ABI for their own APIs. Technology WinRT is implemented in the programming language C++ and is object-oriented by design. Its underlying technology, the Windows API (Win32 API), is written mostly in the language C. It is an unmanaged application binary interface based on Component Object Model (COM) that allows interfacing from multiple languages, as does COM. However, the API definitions are stored in .winmd files, which are encoded in the ECMA 335 metadata format, which .NET Framework also uses with a few modifications. For WinRT components implemented in native code, the metadata file only contains the definition of methods, classes, interfaces and enumerations; the implementation is provided in a separate DLL. This common metadata format makes it easier to consume WinRT APIs from .NET apps, with simpler syntax than P/Invoke. Windows provides a set of built-in APIs built on the WinRT ABI, covering everything from the XAML-based WinUI library to device access such as the camera and microphone. The previous C++/CX (Component Extensions) language, which borrows some C++/CLI syntax, was introduced for writing and consuming WinRT components with less glue code visible to the programmer, relative to classic COM programming in C++, and imposes fewer restrictions relative to C++/CLI on mixing types. The Component Extensions of C++/CX are recommended for use at the API boundary only, not for other purposes. Regular C++ (with COM-specific discipline) can also be used to program with WinRT components, with the help of the Windows Runtime C++ Template Library (WRL), which is similar in purpose to what Active Template Library provides for COM. In 2019, Microsoft deprecated C++/CX in favor of the C++/WinRT header library. Most WinRT applications run within a sandbox and need explicit user approval to access critical OS features and underlying hardware. By default, file access is restricted to several predetermined locations, such as the directories Documents or Pictures. WinRT applications are packaged in the .appx and later the .msix file format; based upon Open Packaging Conventions, these use a ZIP container with added XML files. WinRT applications are distributed mostly through an application store named Microsoft Store, where Windows apps (termed Windows Store apps) can be purchased and downloaded by users.
WinRT apps can only be sideloaded from outside Windows Store on Windows 8 or RT systems that are part of a Windows domain, or equipped with a special activation key obtained from Microsoft. These restrictions were lifted in the Windows 10 November Update, where users can freely sideload any app signed with a trusted certificate by enabling a setting. In a major departure from Win32, and similarly to .NET Framework 4.5, most APIs which are expected to take significant time to complete are implemented as asynchronous. When an asynchronous WinRT API is called, the task is started on another thread or process and the call returns immediately, freeing the app to perform other tasks while waiting for results. The asynchronous model requires new programming language constructs. Each language provides its own way to consume asynchronous APIs. Parts of the built-in API needing asynchronous access include on-screen messages and dialogs, file access, Internet connectivity, sockets, streams, devices and services, and calendar, contacts and appointments. Services Metadata The metadata describes the APIs written using the WinRT ABI. It defines a programming model that makes it possible to write object-oriented code that can be shared across programming languages, and enables services like reflection. Herb Sutter, C++ expert at Microsoft, explained during his session on C++ at the 2011 Build conference that the WinRT metadata is in the same format as CLI metadata. Native code (i.e., processor-specific machine code) cannot contain metadata, so it is stored in a separate metadata file that can be reflected like ordinary CLI assemblies. Since it is the same format as CLI metadata, WinRT APIs can be used from managed CLI languages as if they were just a .NET API. Type system WinRT has a rich object-oriented class-based type system that is built on the metadata. It supports constructs with corresponding constructs in the .NET framework: classes, methods, properties, delegates, and events. One of the major additions to WinRT relative to COM is support for cross-ABI, .NET-style generics. Only interfaces and delegates can be generic; runtime classes and the methods in them cannot. Generic interfaces are also known as parameterized interfaces. In C++/CX, they are declared using the keyword generic with a syntax very similar to that of the keyword template. WinRT classes (ref classes) can also be genericized using C++ templates, but only template instantiations can be exported to .winmd metadata (with some name mangling), unlike WinRT generics which preserve their genericity in the metadata. WinRT also provides a set of interfaces for generic containers that parallel those in the C++ Standard Library, and languages provide some reciprocal (back-and-forth) conversion functions. The consumption of WinRT collections in .NET languages (e.g., C# and VB) and in JavaScript is more transparent than in C++, with automated mappings into their natural equivalents occurring behind the scenes. When authoring a WinRT component in a managed language, some extra, COM-style rules must be followed, e.g. .NET framework collection types cannot be declared as return types, but only the WinRT interfaces that they implement can be used at the component boundary. WinRT components Classes that are compiled to target the WinRT are called WinRT components. They are classes that can be written in any supported language and for any supported platform. The key is the metadata.
This metadata makes it possible to interface with the component from any other WinRT language. The runtime requires WinRT components that are built with .NET Framework to use the defined interface types or .NET type interfaces, which automatically map to the first named. Inheritance is not yet supported in managed WinRT components, except for XAML classes. Programming interfaces Programs and libraries targeted for the WinRT runtime can be created and consumed from several platforms and programming languages. Notably C/C++ (either with language extensions offering first-class support for WinRT concepts, or with a lower-level template library allowing code to be written in standard C++), .NET (C# and Visual Basic .NET (VB.NET)) and JavaScript. This is made possible by the metadata. In WinRT terminology, a language binding is termed a language projection. C++ (C++/WinRT, Component Extensions, WRL) Standard C++ is a first-class citizen of the WinRT platform. As of Windows 10, version 1803, the Windows SDK contains C++/WinRT. C++/WinRT is an entirely standard modern C++17 language projection for Windows Runtime (WinRT) APIs, implemented as a header-file-based library, and designed to provide first-class access to the modern Windows API. With C++/WinRT, Windows Runtime APIs can be authored and consumed using any standards-compliant C++17 compiler. WinRT is a native platform and supports any native (and standard) C++ code, so that a C++ developer can reuse existing native C/C++ libraries. With C++/WinRT, there are no language extensions. Prior to C++/WinRT being officially released in the Windows SDK, from October 2016, Microsoft offered C++/WinRT on GitHub. It does not rely on C++/CX code, with the result of producing smaller binaries and faster code. There are two other legacy options for using WinRT from C++: the Windows Runtime C++ Template Library (WRL), an ATL-style template library (similar to Windows Template Library or WTL), and C++/CX (C++ with Component Extensions) which resembles C++/CLI. Because of the internal consumption requirements at Microsoft, WRL is exception-free, meaning its return-value discipline is HRESULT-based just like that of COM. C++/CX on the other hand wraps up calls to WinRT with code that does error checking and throws exceptions as appropriate. C++/CX has several extensions that enable integration with the platform and its type system. The syntax resembles that of C++/CLI, although it produces native (though not standard) code and metadata that integrates with the runtime. For example, WinRT objects may be allocated with ref new, which is the counterpart of gcnew from C++/CLI. The hat operator ^ retains its meaning; however, in the case where both the caller and callee are written in C++ and living in the same process, a hat reference is simply a pointer to a vptr to a virtual method table (vtable, VMT). Along with C++/CX, relative to traditional C++ COM programming, are partial classes, again inspired by .NET. These allow, for instance, WinRT XAML code to be translated into C++ code by tools, and then combined with human-written code to produce the complete class, while allowing clean separation of the machine-generated and human-edited parts of a class implementation into different files.
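To give a feel for the C++ projection just described, here is the customary minimal C++/WinRT example: activating and calling the Windows.Globalization.Calendar runtime class with nothing but standard C++17 syntax. It is a sketch and assumes the C++/WinRT headers from the Windows SDK:

    #include <winrt/Windows.Foundation.h>
    #include <winrt/Windows.Globalization.h>
    #include <cstdio>

    using namespace winrt;
    using namespace winrt::Windows::Globalization;

    int main() {
        init_apartment();   // per-thread Windows Runtime initialization
        Calendar calendar;  // activates the runtime class behind the scenes
        calendar.SetToNow();
        std::printf("hour: %d\n", static_cast<int>(calendar.Hour()));
        return 0;
    }

Note that Calendar here is an ordinary-looking C++ class generated from the .winmd metadata, which is the whole point of a header-file-based projection.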
.NET The .NET Framework and the Common Language Runtime (CLR) are integrated into WinRT as a subplatform. They have influenced and set the standards for the ecosystem through the metadata format and libraries. The CLR provides services like just-in-time (JIT) compilation and garbage collection. WinRT applications using .NET languages use the XAML-based WinUI and are primarily written in C# or VB.NET and, for the first time for XAML, in native code using C++/CX. Although not yet officially supported, programs can also be written in other .NET languages. With .NET 5, Microsoft removed the built-in WinRT support and instead created CsWinRT, a tool that generates interop code for accessing Windows Runtime APIs, similar to how C++/WinRT works. Limitations Classes defined in WinRT components that are built in managed .NET languages must be declared as sealed, so they cannot be derived from. However, non-sealed WinRT classes defined elsewhere can be inherited from in .NET, their virtual methods overridden, and so on; but the inherited managed class must still be sealed. Members that interface with another language must have a signature with WinRT types or a managed type that is convertible to these. JavaScript WinRT applications can also be coded using HTML with JavaScript in code-behind, which are run using the Trident rendering engine and Chakra JavaScript engine, both of which are also used by Internet Explorer. When coding a WinRT app in JavaScript, its features are adapted to follow JavaScript naming conventions, and namespaces are also mapped to JavaScript objects. Other languages Microsoft is in the process of projecting WinRT APIs to languages other than C++. One example is Rust/WinRT, an interface for programs written in Rust to consume and author WinRT APIs. Rust/WinRT is part of Windows App SDK (formerly Project Reunion), a Microsoft effort to reconcile traditional Windows desktop and the UWP app model. Bridges With the introduction of the Universal Windows Platform (UWP), the platform has received many API bridges that allow programs originally developed for other platforms to be ported easily while taking advantage of UWP features. Microsoft has provided bridges for Android (defunct since 2016), iOS (Cocoa Touch), Progressive Web Apps, Silverlight, as well as traditional Windows desktop apps (using MSIX packaging from the Windows App SDK). API WinRT comes with an application programming interface (API) in the form of a class library that exposes the features of Windows 8 for the developer, like its immersive interface API. It is accessible and consumable from any supported language. Runtime classes The Windows Runtime classes are a set of SDKs that provide access to all functionality, from the XAML parser to the camera. The SDKs are implemented as native C/C++ libraries (unmanaged). Naming conventions The naming conventions for the components (classes and other members) in the API are heavily influenced by the .NET naming conventions, which use camel case (specifically PascalCase). Microsoft recommends users to follow these rules in cases where no others are given. These conventions are projected differently in some languages, like JavaScript, which converts them to its conventions and the other way around. This is to give a native and consistent experience regardless of the programming language. Restrictions and rules Since Windows Runtime is projected to various languages, some restrictions on fundamental data types exist so as to host all such languages. Programmers must be careful with the behavior of those types when used with public access (for method parameters, method return values, properties, etc.). Basic types In .NET languages and C++, a rich set of data types exists, representing various numerals.
In JavaScript, a Number can only represent up to 53 bits of precision. Relative to .NET and C++, the only numeric data type that WinRT lacks is the 8-bit signed integer. JavaScript developers must be careful when dealing with big numbers while coding for WinRT. Strings Strings are immutable in .NET and JavaScript, but mutable in C++. A null pointer passed as a string to WinRT by C++ is converted to an empty string. In .NET, null being passed as a string to WinRT is converted to an empty string. In JavaScript, null being passed as a string to WinRT is converted to a string with the word null. This is due to JavaScript's keyword null being represented as a null object. Similar results occur when passing undefined to WinRT from JavaScript. Structs In .NET and C++, structs are value types, and such a struct can contain any type in it. JavaScript does not directly support structs. In WinRT, use of structs is allowed only for containing types that have value semantics, including numerals, strings, and other structs. Pointers or interface references are disallowed. References In .NET, objects are passed by reference, whereas numerals and structs are passed by value. In C++, all types can be passed by reference or value. In WinRT, interfaces are passed by reference; all other types are passed by value. Arrays In .NET, C++, and JavaScript arrays are reference types. In WinRT, arrays are value types. Events In .NET and C++, clients subscribe to events using the += operator. In JavaScript, the addEventListener function or setting an on<EventName> property is used to subscribe to events. In WinRT, all languages can use their own way to subscribe to events. Collections Some .NET collections map directly to WinRT collections. WinRT's Vector type resembles arrays, and the array syntax is used to consume them. WinRT's Map type is a key/value pair collection, and is projected as Dictionary in .NET languages. Method overloading All WinRT languages (.NET, C++, JavaScript) support overloading on parameters. .NET and C++ also support overloading on type. In WinRT, only the parameter count is used for overloading. Asynchrony All WinRT methods are designed such that any method taking longer than 50 milliseconds is an async method. The established naming pattern to distinguish asynchronous methods is <Verb>[<Noun>]Async. For the full runtime library, all methods that have a chance to last longer than 50 ms are implemented as asynchronous methods only.
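The 50 ms rule above is why each language projection provides first-class asynchrony support; in C++/WinRT, for example, async methods are consumed with coroutines. The following sketch is illustrative only: the file name demo.txt is hypothetical, and the code assumes a packaged app (so that ApplicationData has a local folder):

    #include <winrt/Windows.Foundation.h>
    #include <winrt/Windows.Storage.h>

    using namespace winrt;
    using namespace winrt::Windows::Storage;

    // Both GetFileAsync and ReadTextAsync follow the <Verb><Noun>Async
    // naming pattern; co_await resumes this coroutine when each operation
    // completes, without blocking the calling thread.
    Windows::Foundation::IAsyncAction ReadDemoAsync() {
        StorageFolder folder = ApplicationData::Current().LocalFolder();
        StorageFile file = co_await folder.GetFileAsync(L"demo.txt");
        hstring text = co_await FileIO::ReadTextAsync(file);
        // ... use 'text' here ...
    }

    int main() {
        init_apartment();
        ReadDemoAsync().get();  // block at top level until the coroutine finishes
        return 0;
    }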
Windows Phone Runtime Windows Phone 8.1 uses a version of the Windows Runtime named the Windows Phone Runtime. It enables developing applications in C# and VB.NET, and Windows Runtime components in C++/CX. Although Windows Phone 8 brought limited support, the platform did eventually converge with Windows 8.1 in Windows Phone 8.1. Windows Phone 8 Windows Phone 8 has limited support for developing and consuming Windows Runtime components through Windows Phone Runtime. Many of the Windows Runtime APIs in Windows 8 that handle core operating system functions have been ported to Windows Phone 8. Support for developing native games using C++/CX and DirectX has been added, by request from the game development industry. However, the Windows Phone XAML Framework is still based on the same Microsoft Silverlight framework as in Windows Phone 7, for backward compatibility. Thus, XAML development is impossible in C++/CX. Development using either HTML5 or WinJS is unsupported on Windows Phone 8. Windows Phone 8.1 Windows Runtime support on Windows Phone 8.1 converges with Windows 8.1. The release brings a full Windows Runtime API to the platform, including support for WinRT XAML and language bindings for C++/CX and HTML5/JavaScript. There is also a project type called Universal apps to enable apps to share code across the 8.1 versions of Windows Phone and Windows. The Windows Phone 8 Silverlight Framework has been updated. It can exploit some of the new features in the Windows Runtime. Windows Phone Runtime uses the AppX package format from Windows 8, after formerly using Silverlight XAP. References External links .NET Framework implementations Computer-related introductions in 2012 Windows APIs Windows technology Application programming interfaces
Operating System (OS)
638
Wintel Wintel is the partnership of Microsoft Windows and Intel producing personal computers using Intel x86-compatible processors running Microsoft Windows. The word Wintel is a portmanteau of Windows and Intel. Background By the early 1980s, the chaos and incompatibility that was rife in the early microcomputer market had given way to a smaller number of de facto industry standards, including the S-100 bus, CP/M, the Apple II, Microsoft BASIC in read-only memory (ROM), and the 5¼-inch floppy drive. No single firm controlled the industry, and fierce competition ensured that innovation in both hardware and software was the rule rather than the exception. Microsoft Windows and Intel processors gained ascendance and their ongoing alliance gave them market dominance. Intel claimed that this partnership has enabled the two companies to give customers the benefit of "a seemingly unending spiral of falling prices and rising performance". In addition, they claim a "history of innovation" and "a shared vision of flexible computing for the agile business". IBM In 1981 IBM entered the microcomputer market. The IBM PC was created by a small subdivision of the firm. It was unusual for an IBM product because it was largely sourced from outside component suppliers and was intended to run third-party operating systems and software. IBM published the technical specifications and schematics of the PC, which allowed third-party companies to produce compatible hardware, the so-called open architecture. The IBM PC became one of the most successful computers of all time. The key feature of the IBM PC was that it had IBM's enormous public respect behind it. It was an accident of history that the IBM PC happened to have an Intel CPU (instead of the technically superior Motorola 68000 that had been tipped for it, or an IBM in-house design), and that it shipped with IBM PC DOS (a licensed version of Microsoft's MS-DOS) rather than the CP/M-86 operating system, but these accidents were to have enormous significance in later years. Because the IBM PC was an IBM product with the IBM badge, personal computers became respectable. It became easier for a business to justify buying a microcomputer than it had been even a year or two before, and easiest of all to justify buying the IBM Personal Computer. Since the PC architecture was well documented in IBM's manuals, and PC DOS was designed to be similar to the earlier CP/M operating system, the PC soon had thousands of different third-party add-in cards and software packages available. This made the PC the preferred option for many, since the PC supported the hardware and software they needed. Competitors Industry competitors took one of several approaches to the changing market. Some (such as Apple, Amiga, Atari, and Acorn) persevered with their independent and quite different systems. Of those systems, Apple's Macintosh is the only one remaining on the market. Others (such as Digital, then the world's second-largest computer company, Hewlett-Packard, and Apricot) concentrated on making similar but technically superior models. Other early market leaders (such as Tandy-Radio Shack or Texas Instruments) stayed with outdated architectures and proprietary operating systems for some time before belatedly realizing which way market trends were going and switching to the most successful long-term business strategy: building a machine that duplicated the IBM PC as closely as possible and selling it for a slightly lower price, or with higher performance.
Given the very conservative engineering of the early IBM personal computers and their higher-than-average prices, this was not a terribly difficult task at first, bar only the great technical challenge of crafting a BIOS that duplicated the function of the IBM BIOS exactly but did not infringe on copyrights. The two early leaders in this last strategy were both start-up companies: Columbia Data Products and Compaq. They were the first to achieve reputations for very close compatibility with the IBM machines, which meant that they could run software written for the IBM machine without recompilation. Before long, IBM had the best-selling personal computer in the world, and at least two of the next-best sellers were, for practical purposes, identical. For the software industry, the effect was profound. First, it meant that it was rational to write for the IBM PC and its clones as a high priority, and port versions for less common systems at leisure. Second (and even more importantly), whereas a software writer in pre-IBM days had to be careful to use as plain a subset of the possible techniques as practicable (so as to be able to run on any hardware that ran CP/M), with a major part of the market now all using the same exact hardware (or a very similar clone of it), it was practical to take advantage of any and every hardware-specific feature offered by the IBM. Independent BIOS companies like Award, Chips and Technologies, and Phoenix began to market a clean-room BIOS that was 100% compatible with IBM's, and from that time on any competent computer manufacturer could achieve IBM compatibility as a matter of routine. From around 1984, the market was fast growing but relatively stable. There was as yet no sign of the "Win" half of "Wintel," though Microsoft was achieving enormous revenues from DOS sales both to IBM and to an ever-growing list of other manufacturers who had agreed to buy an MS-DOS license for every machine they made, even those that shipped with competing products. As for Intel, every PC made either had an Intel processor or one made by a second-source supplier under license from Intel. Intel and Microsoft had enormous revenues, and Compaq and many other makers between them made far more machines than IBM, but the power to decide the shape of the personal computer rested firmly in IBM's hands. In 1987, IBM introduced the PS/2 computer line. Although the open architecture of the PC and its successors had been a great success for them, and they were the biggest single manufacturer, most of the market was buying faster and cheaper IBM-compatible machines made by other firms. The PS/2s remained software compatible, but the hardware was quite different. It introduced the technically superior Micro Channel architecture bus for higher-speed communication within the system, but failed to maintain the open AT bus (later called the ISA bus), which meant that none of the millions of existing add-in cards would function. In other words, the new IBM machines were not IBM-compatible. Further, IBM planned the PS/2 in such a way that for both technical and legal reasons it would be very difficult to clone. Instead, IBM offered to sell a PS/2 licence to anyone who could afford the royalty. They would not only require a royalty for every PS/2-compatible machine sold, but also a payment for every IBM-compatible machine the particular maker had ever made in the past. Many PC manufacturers signed up as PS/2 licensees.
(Apricot, who had lost badly by persevering with their "better PC than IBM" strategy up until this time, was one of them, but there were many others.) Many others decided to hold off before committing themselves. Some major manufacturers, known as the Gang of Nine, decided to band together and agree on a bus type that would be open to all manufacturers, as fast as or faster than IBM's Micro Channel, and yet still retain backward compatibility with ISA. This was the crucial turning point: the industry as a whole was no longer content to let IBM make all the major decisions about technical direction. In the event, the new EISA bus was itself a commercial failure beyond the high end: by the time the cost of implementing EISA was reduced to the extent that it would be implemented in most desktop PCs, the much cheaper VESA Local Bus had removed most of the need for it there (though EISA remained common in servers due to, for example, the possibility of data corruption on hard disk drives attached to VLB controllers), and Intel's PCI bus was just around the corner. But although very few EISA systems were sold, it had achieved its purpose: IBM no longer controlled the computer industry. IBM would belatedly amend the PS/2 series with the PS/ValuePoint line, which tracked the features of the emerging ad hoc platform. At around this same time, the end of the 1980s and the beginning of the 1990s, Microsoft's Windows operating environment started to become popular, and Microsoft's competitor Digital Research started to recover a share of the DOS press and DOS market with DR-DOS. IBM planned to replace DOS with the vastly superior OS/2 (originally an IBM/Microsoft joint venture, and, unlike the PS/2 hardware, highly backward compatible), but Microsoft preferred to push the industry in the direction of its own product, Windows. With IBM suffering its greatest ever public humiliation in the wake of the PS/2 disaster, massive financial losses, and a marked lack of company unity or direction, Microsoft's combination of a soft marketing voice and a big financial stick was effective: Windows became the de facto standard. For the competing computer manufacturers, large or small, the only common factors to provide joint technical leadership were operating software from Microsoft, and CPUs from Intel. Dominance Over the following years, both firms in the Wintel partnership would attempt to extend their monopolies. Intel made a successful major push into the motherboard and chipset markets—becoming the largest motherboard manufacturer in the world and, at one stage, almost the only chipset manufacturer—but badly fumbled its attempt to move into the graphics chip market, and (from 1991) faced sharp competition in its core CPU territory from AMD, Cyrix, VIA and Transmeta. Microsoft fared better. In 1990, Microsoft had two competitors in its core market (Digital Research and IBM); Intel had none. By 1996, Intel had two competitors in its core market (CPUs), while Microsoft had none. Microsoft had pursued a policy of insisting on per-processor royalties, thus making competing operating systems unattractive to computer manufacturers and provoking regulatory scrutiny from the European Commission and US authorities, leading to an undertaking by Microsoft to cease such practices.
However, the integration of DOS into Windows 95 was the masterstroke: not only were the other operating system vendors frozen out, but Microsoft could now require computer manufacturers to comply with its demands on pain of higher prices (as when it required IBM to stop actively marketing OS/2 or else pay more than twice as much for Windows 95 as its competitor Compaq) or by withholding "Designed for Windows 95" endorsement (which was regarded as an essential hardware marketing tool). Microsoft was also able to require that free publicity be given over to them by hardware makers. (For example, the Windows key advertising symbols on nearly all modern keyboards, or the strict license restrictions on what may or may not be displayed during system boot and on the Windows desktop.) Also, Microsoft was able to take over most of the networking market (formerly the domain of Artisoft's LANtastic and Novell's NetWare) with Windows NT, and the business application market (formerly led by Lotus and WordPerfect) with Microsoft Office. Although Microsoft is by far the dominant player in the Wintel partnership now, Intel's continuing influence should not be underestimated. Intel and Microsoft, once the closest of partners, have operated at an uneasy distance from one another since their first major dispute, which had to do with Intel's heavy investment in the 32-bit optimized Pentium Pro and Microsoft's delivery of an unexpectedly high proportion of 16-bit code in Windows 95. Both firms talk with one another's competitors from time to time, most notably with Microsoft's close relationship with AMD and the development of Windows XP Professional x64 Edition utilizing AMD-designed 64-bit extensions to the x86 architecture, and Intel's decision to sell its processors to Apple Inc. The Wintel platform is still the dominant desktop and laptop computer architecture. Some have opined that Microsoft Windows, through its natural software bloat, has eaten up much of the "hardware progress" that Intel processors gave to the "Wintel platform" via Moore's law. After the rise of smartphones and netbooks, some media outlets speculated about a possible end of Wintel dominance, with more and more cheap devices employing other technologies. Intel is investing in Linux, and Microsoft has ported Windows to the ARM architecture with Windows 8. Modern usage of the term In the strictest sense, "Wintel" refers only to computers that run Windows on an Intel processor. However, Wintel is now commonly used to refer to a system running a modern Microsoft operating system on any modern x86-compatible CPU, manufactured by either Intel or AMD. That is because the PC applications that can run on an x86 Intel processor usually can run on an x86 AMD processor too. In mid-October 2017, Microsoft announced that Windows 10 on Qualcomm Snapdragon was at the final stage of testing. Such systems would not be considered "Wintel". Systems running a Microsoft operating system on an Intel processor based on the Itanium or ARM architecture, despite the fact that the processor is manufactured by Intel, are also not considered to be Wintel systems. See also Apple–Intel architecture AIM alliance Mac transition to Intel processors IBM PC compatible Pocket PC Windows Mobile Network effect Computing platform Commodity computing PowerPC De facto standard Dominant design Notes References External links Intel and Microsoft Alliance Computing platforms IBM PC compatibles X86 architecture
Operating System (OS)
639
OMF OMF may refer to: Object Module Format, an object-file format of the VME operating system or of the IBM personal computer Open Media Framework, a file format that aids in the exchange of digital media across applications and platforms Relocatable Object Module Format, an object-file format used primarily on Intel 80x86 microprocessors or the Apple IIGS Offshoring Management Framework Ohmefentanyl, a potent piperidine narcotic One Must Fall, a fighting computer game Open Source Metadata Framework, a Document Type Definition based on Dublin Core used for describing document metadata Opposing Military Force Oracle-managed files, a feature controlling datafiles in Oracle databases Ostmecklenburgische Flugzeugbau, a former (1998–2003) manufacturer of light aircraft OMF International, formerly Overseas Missionary Fellowship, a Christian missionary society Omnipresent Music Festival, an American music festival based in New York City Options Market France, a regulated stock index futures and options market with integrated clearing Gerry Wright Operations and Maintenance Facility, a light rail transit maintenance facility in Edmonton, Alberta, Canada
Operating System (OS)
640
IRIX IRIX is a discontinued operating system developed by Silicon Graphics (SGI) to run on the company's proprietary MIPS workstations and servers. It is based on UNIX System V with BSD extensions. In IRIX, SGI originated the XFS file system and the industry-standard OpenGL graphics system. History SGI originated the IRIX name in the 1988 release 3.0 of the operating system for the SGI IRIS 4D series of workstations and servers. Previous releases are identified only by the release number prefixed by "4D1-", such as "4D1-2.2". The "4D1-" prefix continued to be used in official documentation to prefix IRIX release numbers. IRIX 3.x is based on UNIX System V Release 3 with 4.3BSD enhancements, and incorporates the 4Sight windowing system, based on NeWS and IRIS GL. SGI's own Extent File System (EFS) replaces the System V filesystem. IRIX 4.0, released in 1991, replaces 4Sight with the X Window System (X11R4), the 4Dwm window manager providing a similar look and feel to 4Sight. IRIX 5.0, released in 1993, incorporates certain features of UNIX System V Release 4, including ELF executables. IRIX 5.3 introduced the XFS journaling file system. In 1994, IRIX 6.0 added support for the 64-bit MIPS R8000 processor, but is otherwise similar to IRIX 5.2. Later 6.x releases support other members of the MIPS processor family in 64-bit mode. IRIX 6.3 was released for the SGI O2 workstation only. IRIX 6.4 improved multiprocessor scalability for the Octane, Origin 2000, and Onyx2 systems. On the Origin 2000 and Onyx2, IRIX 6.4 was marketed as "Cellular IRIX", although it only incorporates some features from the original Cellular IRIX distributed operating system project. The last major version of IRIX is IRIX 6.5, released in May 1998. New minor versions of IRIX 6.5 were released every quarter until 2005; after that, there were four further minor releases. Through version 6.5.22, there are two branches of each release: a maintenance release (identified by an "m" suffix) that includes only fixes to the original IRIX 6.5 code, and a feature release (with an "f" suffix) that includes improvements and enhancements. An overlay upgrade from 6.5.x to the 6.5.22 maintenance release was available as a free download, whereas versions 6.5.23 and higher required an active Silicon Graphics support contract. A 2001 Computerworld review found IRIX in a "critical" state. SGI had been moving its efforts to Linux and the Windows-based SGI Visual Workstation, but MIPS and IRIX customers convinced SGI to continue to support its platform through 2006. On September 6, 2001, an SGI press release announced the end of the MIPS and IRIX product lines. Production ended on December 29, 2006, with final deliveries in March 2007, except by special arrangement. Support for these products ended in December 2013 and they will receive no further updates. Much of IRIX's core technology has been open sourced and ported by SGI to Linux, including XFS. In 2009, SGI filed for bankruptcy and was then purchased by Rackable Systems, which was later purchased by Hewlett Packard Enterprise in 2016. All SGI hardware produced after 2007 is based on either the IA-64 or x86-64 architecture, so it is incapable of running IRIX and is instead intended for Red Hat Enterprise Linux or SUSE Linux Enterprise Server. HPE has not stated any plans for IRIX development or source code release. Features IRIX 6.5 is compliant with UNIX System V Release 4, UNIX 95, and POSIX (including 1e/2c draft 15 ACLs and Capabilities).
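As a brief, generic illustration of the kind of POSIX programming interface this compliance covers, the sketch below starts and joins four POSIX threads; it is illustrative only and assumes any pthreads-capable C++ compiler rather than IRIX-specific tooling.

    #include <pthread.h>
    #include <cstdio>

    // Spawn four workers and wait for all of them to finish.
    void* worker(void* arg)
    {
        int id = *static_cast<int*>(arg);
        std::printf("worker %d running\n", id);
        return nullptr;
    }

    int main()
    {
        pthread_t threads[4];
        int ids[4];
        for (int i = 0; i < 4; ++i) {
            ids[i] = i;
            pthread_create(&threads[i], nullptr, worker, &ids[i]);
        }
        for (int i = 0; i < 4; ++i) {
            pthread_join(threads[i], nullptr);
        }
        return 0;
    }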
In the early 1990s, IRIX was a leader in symmetric multiprocessing (SMP), scalable from 1 to more than 1024 processors with a single system image. IRIX has strong support for real-time disk and graphics I/O. IRIX was widely used during the 1990s and 2000s in the computer animation and scientific visualization industries, due to its large application base and high performance. It is still relevant in a few legacy applications. IRIX is one of the first Unix versions to feature a graphical user interface for the main desktop environment. The IRIX Interactive Desktop uses the 4Dwm X window manager with a custom look designed using the Motif widget toolkit. IRIX is the originator of the industry-standard OpenGL for graphics chips and of image processing libraries. IRIX uses the MIPSPro Compiler for both its front end and back end. The compiler, also known in earlier versions as IDO (IRIS Development Option), was released in many versions, many of which are coupled to the OS version. The last version was 7.4.4m, designed for 6.5.19 or later. The compiler is designed to support parallel POSIX programming in C/C++, Fortran 77/90, and Ada. The Workshop GUI IDE is used for development. Other tools include SpeedShop for performance tuning, and Performance Co-Pilot. See also Cray IRIX software Silicon Graphics Image format (.iris) SGI Indy References External links Technical Publications Mirror Silicon Bunny - IRIX software and information Irix Network - IRIX software, information, forums, and archive IRIX Admin: Backup, Security, and Accounting Document Number: 007-2862-004 February 1999 Silicon Graphics User Group Discontinued operating systems MIPS operating systems UNIX System V
Operating System (OS)
641
One A110 The A110 is a netbook computer by One. It is built on a reference design by Quanta Computer and was announced to run Linpus Linux. However, some or all of the first batch have actually been delivered with a modified Ubuntu Linux installed, using SquashFS to fit the system in the 2 GB flash memory. Hardware specifications VIA C7-M-ULV Processor (1.0 GHz, 400-MHz FSB, max. 3.5 Watt) 7-inch display 800×480 (with external VGA port) 512 MB DDR2 PC400 RAM 64 MB VX800 S3 integrated graphics card 2 GB Flash Memory 2× USB 2.0 ports 1× Microphone-in jack 1× Speaker jack 56 kbit/s Modem 10/100 Mbit/s LAN WLAN 3-in-1 Cardreader, SD/MMC/MS Height: 2.8 cm Width: 24.3 cm Depth: 17.1 cm Weight: 950 g A second model called the A120 is available with 4 GB of flash memory (compared to the 2 GB of the A110), a webcam and Windows XP. References External links One Official Site Official site Subnotebooks Linux-based devices Netbooks
Operating System (OS)
642
Xubuntu Xubuntu is a Canonical Ltd.–recognized, community-maintained derivative of the Ubuntu operating system. The name Xubuntu is a portmanteau of Xfce and Ubuntu, as it uses the Xfce desktop environment instead of Ubuntu's GNOME desktop. Xubuntu seeks to provide "a light, stable and configurable desktop environment with conservative workflows" using Xfce components. Xubuntu is intended for both new and experienced Linux users. Rather than explicitly targeting low-powered machines, it attempts to provide "extra responsiveness and speed" on existing hardware. History Xubuntu was originally intended to be released at the same time as Ubuntu 5.10 Breezy Badger, 13 October 2005, but the work was not complete by that date. Instead the Xubuntu name was used for the xubuntu-desktop metapackage available through the Synaptic Package Manager, which installed the Xfce desktop. The first official Xubuntu release, led by Jani Monoses, appeared on 1 June 2006, as part of the Ubuntu 6.06 Dapper Drake line, which also included Kubuntu and Edubuntu. Cody A.W. Somerville developed a comprehensive strategy for the Xubuntu project named the Xubuntu Strategy Document. This document was approved by the Ubuntu Community Council in 2008. In February 2009 Mark Shuttleworth agreed that an official LXDE version of Ubuntu, Lubuntu, would be developed. The LXDE desktop uses the Openbox window manager and, like Xubuntu, is intended to be a low-system-requirement, low-RAM environment for netbooks, mobile devices and older PCs, and would compete with Xubuntu in that niche. In November 2009, Cody A.W. Somerville stepped down as the project leader and made a call for nominations to help find a successor. Lionel Le Folgoc was confirmed by the Xubuntu community as the new project leader on 10 January 2010 and requested the formation of an official Xubuntu council. Discussions regarding the future of Xubuntu's governance and the role a council might play in it were still ongoing. In March 2012 Charlie Kravetz, a former Xubuntu project leader, formally resigned from the project. Despite this, the project members indicated that Xubuntu 12.04 would go ahead as scheduled. In the beginning of 2016, the Xubuntu team began the process to transition the project to become council-run rather than having a single project leader. On 1 January 2017, an official post on the Xubuntu site's blog announced the official formation of the Xubuntu Council. The purpose of the council is not just to make decisions about the future of the project, but to make sure the direction of the project adheres to the guidelines established in the Strategy Document. Performance The Xfce desktop environment is intended to use fewer system resources than the default Ubuntu GNOME desktop. In September 2010, the Xubuntu developers claimed that the minimum RAM Xubuntu could be run on was 128 MB, with 256 MB of RAM strongly recommended at that time. Testing conducted by Martyn Honeyford at IBM in January 2007 on Xubuntu 6.10 concluded that it "uses approximately 25 MB less application memory, and also eats significantly less into buffers and cache (which may imply that there is less file activity) than Ubuntu." Later testing showed that Xubuntu was at a disadvantage compared to Debian equipped with the Xfce desktop. In April 2009, DistroWatch conducted tests on a Dell Dimension 4500 desktop machine with a 2 GHz Intel processor and 384 MB of memory, comparing Xubuntu 9.04 against an Xfce desktop version of Debian 5.0.1.
These showed that Xubuntu used more than twice the RAM of Debian in simple tasks. Xubuntu also ran out of RAM doing everyday tasks, indicating that 384 MB of RAM was inadequate. The review concluded "It was obvious I had already run out of RAM and was starting to use swap space. Considering I wasn't doing very much, this was rather disappointing". Subsequent experimentation by DistroWatch concluded that the performance advantages observed in Debian were due to Xubuntu's inclusion of memory-hungry software not present in Debian's implementation of Xfce. In a September 2009 assessment in Linux Magazine, Christopher Smart noted, "the Xfce desktop is very lightweight and well suited to machines with small amounts of memory and processing power, but Xubuntu's implementation has essentially massacred it. They've taken the beautifully lightweight desktop and strangled it with various heavyweight components from GNOME. In all fairness to the project however, they do not claim that Xubuntu is designed for older machines - that's just something the community has assumed on their own. It might be more lightweight than Ubuntu itself, but if so it's not by much." Subsequent reviewers emphasized Xubuntu's perceived deficiencies in performance to highlight Lubuntu, a project with similar goals but using the LXDE desktop environment (now LXQt) as opposed to Xfce. For instance, Damien Oh of Make Tech Easier noted in May 2010, "So what about Xubuntu? Isn't it supposed to be the lightweight equivalent of Ubuntu? Sadly, that is a thing of the past. The truth is, the supposed lightweight equivalent is not lightweight at all. While Xubuntu is using the lightweight XFCE desktop environment, it had been bogged down by several heavyweight applications and also the integration with GNOME desktop also makes it lose its advantage." However, another reviewer, Laura Tucker, also from Make Tech Easier, in her 2016 article What OS Are You Using and Why?, a survey of her writing team's computers, noted that Xubuntu is the favourite OS of one member of her team for her older desktop computer, as the writer reported, "because it is lightweight and works great." She also noted that it is easy to customize. In July 2019, Jeff Mitchell of Make Tech Easier recommended Xubuntu as one option to speed up a Linux PC. Releases Xubuntu 6.06 LTS The first official stand-alone release of Xubuntu was version 6.06 Long Term Support (LTS), which was made available on 1 June 2006. It was introduced with the statement: The version used Linux kernel 2.6.15.7 and Xfce 4.4 beta 1. Applications included the Thunar file manager, GDM desktop manager, AbiWord word processor and Gnumeric spreadsheet, Evince PDF document viewer, Xarchiver archive manager, Xfburn CD burner, Firefox 1.5.0.3 web browser, Thunderbird 1.5.0.2 email client and the GDebi package manager. Caitlyn Martin reviewed Xubuntu 6.06 in June 2006. She singled out its "bare bones approach" to the applications included, indicating that she would rather add applications she wanted than clean out ones she didn't want. On her aging laptop Xubuntu 6.06 proved faster than Fedora Core 5. She stated that: "Under Fedora when I opened a couple of rather resource intensive applications, for example Open Office and Seamonkey, the system would begin to drag. While these apps still take a moment to get started on Xubuntu they are crisp and responsive and don't seem to slow anything else down. I never expected this sort of performance and that alone made Xubuntu an instant favorite of mine."
She had praise for the Thunar file manager as light and fast. She concluded: "Overall I am impressed and Xubuntu, for the moment anyway, is my favorite Linux distribution despite a few rough edges, probably largely due to the use of a beta desktop." Xubuntu 6.10 Xubuntu 6.10 was released on 26 October 2006. This version used Xfce 4.4 beta 2 and included Upstart, the Firefox 2.0 web browser, and the Gaim 2.0.0 beta 3.1 instant messaging client, along with new versions of AbiWord and Gnumeric. The media player was gxine, which replaced Xfmedia. The previous xffm4 file manager was replaced by Thunar. It introduced redesigned artwork for the bootup splash screen, the login window and the desktop theme. The developers claimed that this version of Xubuntu could run on 64 MB of RAM, with 128 MB "strongly recommended". Reviewer Caitlyn Martin tested Xubuntu on a four-year-old Toshiba Satellite 1805-S204 laptop, with a 1 GHz Celeron processor and 512 MB of RAM, in December 2006. She noted that Xubuntu ran faster than GNOME or KDE, which she described as "sluggish", and rated it as one of the two fastest distributions on her limited test hardware, tying with Vector Linux. She found the graphical installer to be less than acceptable and the text-based installer better. She concluded: Xubuntu 7.04 Xubuntu 7.04 was released on 19 April 2007. This release was based on Xfce 4.4. Michael Larabel of Phoronix carried out detailed benchmark testing of betas for Ubuntu 7.04, Kubuntu 7.04 and Xubuntu 7.04 in February 2007 on two different computers, one with dual Intel Clovertown processors and the other with an AMD Sempron. After a series of gzip compression, LAME compilation, and LAME encoding tasks he concluded, "in these tests with the dual Clovertown setup we found the results to be indistinguishable. However, with the AMD Sempron, Ubuntu 7.04 Feisty Fawn Herd 4 had outperformed both Kubuntu and the lighter-weight Xubuntu. Granted on a slower system the lightweight Xubuntu should have a greater performance advantage." In one Review Linux look at Xubuntu 7.04, it was faulted for not including OpenOffice.org. The reviewer noted: "If you do decide to keep the default software, it will cover your basic needs. Xubuntu comes with light weight desktop in XFCE 4.4 and also less tasking programs. If you are thinking that OpenOffice will come pre-installed on your desktop, you will be greatly surprised as AbiWord and Gnumeric are your default processor and spreadsheet program." He indicated though that installing applications from the repositories was easy and made for simple customization of an installation. Xubuntu 7.10 Xubuntu 7.10 was released on 18 October 2007. It was based upon Xfce 4.4.1 and added updated translations along with a new theme, MurrinaStormCloud, using the Murrine Engine. Application updates included Pidgin 2.2.0 (Gaim was renamed Pidgin) and GIMP 2.4. This Xubuntu version allowed the installation of Firefox extensions and plug-ins through the Add/Remove Software interface. The developers claimed that this version of Xubuntu could run on 64 MB of RAM, with 128 MB "strongly recommended". In a review of the release candidate for Xubuntu 7.10, installed on a Pentium II 300 Celeron with 256 MB of RAM and equipped with an nVidia GeForce 4 64 MB video card, Review Linux noted that "the system was very fast". Review Linux positioned Xubuntu and its role: "The main difference between Xubuntu and Ubuntu is the fact that Xubuntu is a little lighter on system requirements and it uses Xfce as desktop.
Xubuntu is perfect for that old computer just lying around in your basement." Xubuntu 8.04 LTS Xubuntu 8.04 Long Term Support (LTS) was made available on 24 April 2008. This version of Xubuntu used Xfce 4.4.2, Xorg 7.3 and Linux kernel 2.6.24. It introduced PolicyKit for permissions control, PulseAudio and a new printing manager. It also introduced Wubi, which allowed Windows users to install Xubuntu as a program on Windows. Applications included were Firefox 3 Beta 5, the Brasero CD/DVD burning application, the Transmission BitTorrent client, the Mousepad text editor, the AbiWord word processor and the Ristretto image viewer. Reviewer Christopher Dawson of ZDNet installed Xubuntu 8.04 on a Dell Latitude C400 with 512 MB of RAM, a 30 GB hard drive and a 1 GHz Pentium III-M processor. He noted it provided better performance than the Windows XP Pro it replaced. He concluded: "This is where Xubuntu really shines... What it will do is take some very moderate hardware and provide a solid, reliable, and relatively snappy machine for a user with productivity needs or who accesses terminal services." Xubuntu 8.10 Xubuntu 8.10 was released on 30 October 2008. This version of Xubuntu brought a new version of AbiWord, version 2.6.4, and the Listen Multimedia Player, and introduced the Catfish desktop search application. It used Linux kernel 2.6.27 and X.Org 7.4. There was an installation option of an encrypted private directory using ecryptfs-utils. The Totem media player was included. Darren Yates, an Australian IT journalist, was very positive about Xubuntu 8.10, particularly for netbooks, which were then at their peak of popularity, while dismissing "ubuntu itself is nothing flash". He said, "One of the disappointing things about the arrival of netbooks in Australia has been the decline of Linux in the face of an onslaught by Microsoft to push Windows XP Home Edition back into the market. It's sad because Xubuntu is the ideal Linux distro for these devices. While the latest Xubuntu 8.10 distro lacks drivers for WiFi wireless networking and in many cases also the built-in webcams, those drivers do exist and incorporating them inside Xubuntu would neither be difficult or take up much space." Xubuntu 9.04 Version 9.04 was released on 23 April 2009. The development team advertised this release as giving improved boot-up times, "benefiting from the Ubuntu core developer team's improvements to boot-time code, the Xubuntu 9.04 desktop boots more quickly than ever. This means you can spend less time waiting, and more time being productive with your Xubuntu desktop." Xubuntu 9.04 used Xfce 4.6, which included a new Xfce Settings Manager dialog, the new Xconf configuration system, an improved desktop menu and clock, new notifications, and the remote file system application Gigolo. This release also brought all-new artwork and incorporated the Murrina Storm Cloud GTK+ theme and a new XFWM4 window manager theme. 9.04 also introduced new versions of many applications, including the AbiWord word processor, Brasero CD/DVD burner and Mozilla Thunderbird e-mail client. It used X.Org server 1.6. The default file system was ext3, but ext4 was an option at installation. In testing Xubuntu 9.04, DistroWatch determined that Xubuntu used more than twice the system memory of Debian 5.0.1 Xfce and that while loading the desktop the memory usage was ten times higher.
DistroWatch attributed this to Xubuntu's use of Ubuntu desktop environment services, including the graphical package manager and updater, network manager, power manager, and proprietary driver manager. They provided a plan to strip it down and reduce its memory footprint. DistroWatch concluded "Xubuntu is a great distribution, but its default selection of packages does not necessarily suit itself to low-memory systems." In reviewing Xubuntu in May 2009, Linux.com writer Rob Reilly said, "The latest Xubuntu distribution has just about the right mix of speed and power" and concluded "for the new Linux user, Xubuntu is an easy to use version of Ubuntu that is fast, simple, and reliable. Experienced or "get it done" types will appreciate the minimalist approach, that can be beefed up to whatever degree that is needed." Xubuntu 9.10 29 October 2009 saw the release of Xubuntu 9.10, which utilized Xfce 4.6.1, Linux kernel 2.6.31 and, by default, the ext4 file system and GRUB 2. This release included the Exaile 0.3.0 music player; the Xfce4 power manager replaced the GNOME Power Manager; and desktop notifications were improved using notify-osd. Upstart boot-up speed was improved. The release promised "faster application load times and reduced memory footprint for a number of your favorite Xfce4 applications thanks to improvements in library linking provided by ld's --as-needed flag." Dedoimedo gave Xubuntu a negative review, saying "When it comes to usability, Xubuntu has a lot to desire. While Xubuntu is based on Ubuntu, which is definitely one of the friendlier, simpler and more intuitive distros around, a core elements that has led to Ubuntu stardom, the integration of the Xfce desktop makes for a drastic change compared to stock edition. The usability is seriously marred, in several critical categories. And it gets worse. Losing functionality is one thing. Trying to restore it and ending with an unusable desktop is another." The review concluded "Sadly, Xubuntu is a no go. It's not what it ought to be. What more, it does injustice to the Ubuntu family, which usually delivers useful solutions, mainly to new Linux users. There were horrendous, glaring problems with Xubuntu that kicked me back to Linux not so usable 2005. I was taken by surprise, totally not expecting that an ultra-modern distro would pull such dirty, antiquated tricks up its sleeve." Xubuntu 10.04 LTS Xubuntu 10.04 Long Term Support (LTS) was released on 29 April 2010. It moved to PulseAudio and replaced the Xsane scanner utilities with Simple Scan. It also incorporated the Ubuntu Software Center, which had been introduced in Ubuntu 9.10, to replace the old Add/Remove Software utility. The included spreadsheet application, Gnumeric, was updated to version 1.10.1 and new games were introduced. Because of incompatibilities in the gnome-screensaver screensaver application, it was replaced by xscreensaver. The default theme was an updated version of Albatross, designed by the Shimmer Team. This version of Xubuntu officially required a 700 MHz x86 processor, 128 MB of RAM, with 256 MB of RAM "strongly recommended", and 3 GB of disk space. In reviewing Xubuntu 10.04 beta 1 in April 2010, Joey Sneddon of OMG Ubuntu declared it "borderline irrelevant". He noted that it provided few performance advantages over Ubuntu.
In testing it against Ubuntu and Lubuntu on a computer with 1 GB of RAM, a 2 GHz single-core processor and a 128 MB video card, RAM usage with three tabs open in Firefox, one playing an HTML5 YouTube video, was: Ubuntu Beta 1: 222 MB; Xubuntu Beta 1: 215.8 MiB; Lubuntu Beta 1: 137 MB. Sneddon pointed out from this testing that Xubuntu is barely more "lean" than Ubuntu and concluded "Xubuntu, whilst of interest to those who prefer the XFCE environment, remains an unremarkable spin from the Ubuntu canon that, for most users, is largely irrelevant." Jim Lynch of Desktop Linux Reviews praised Xubuntu 10.04's fast boot time and its incorporation of the Ubuntu Software Center, but criticized the lack of inclusion of Ubuntu One. Xubuntu 10.10 Xubuntu 10.10 was released on 10 October 2010. It included Parole, the Xfce4 media player, and the XFBurn CD/DVD writer in place of Brasero, while Xfce4-taskmanager replaced Gnome-Task-Manager. These changes were all to lighten the release's memory footprint. AbiWord was updated to version 2.8.6 and Gnumeric to 1.10.8. This release also introduced the Bluebird theme from the Shimmer Team. This version of Xubuntu required 192 MB of RAM to run the standard live CD or to install it. The alternate installation CD required 64 MB of RAM to install Xubuntu. Either CD required 2.0 GB of free hard disk space. Once installed, Xubuntu 10.10 could run with as little as 128 MB of RAM, but the developers strongly recommended a minimum of 256 MB of RAM. In reviewing Xubuntu 10.10 in October 2010, just after it was released, Jim Lynch of Eye on Linux said, "I had no problems using Xubuntu 10.10. My system was very stable; I didn't notice any application crashes or system burps. Xubuntu 10.10 is also very fast; applications opened and close very quickly. There was no noticeable system lag or sluggishness. The new theme Bluebird is attractive without being garish; it fits in well with Xubuntu's minimalist mission." Christopher Tozzi, writing about the Xubuntu 10.10 beta in August 2010, noted that the distribution was shedding its GNOME dependencies and adopting lighter-weight alternatives. He noted "it's encouraging to see more uniqueness in the distribution, especially given the uncertain future of the Gnome-Ubuntu relationship as the release of Gnome 3.0 approaches." Xubuntu 11.04 Xubuntu 11.04 was released on 28 April 2011. This version was based upon Xfce 4.8 and introduced editable menus using any menu editor that meets the freedesktop.org standards. This version also introduced a new Elementary Xubuntu icon theme, the Droid font by default and an updated installation slide show. While Ubuntu 11.04 introduced the new default Unity interface, Xubuntu did not adopt Unity and instead retained its existing Xfce interface. Although the developers decided to retain a minimalist interface, Xubuntu 11.04 gained a new dock-like application launcher to achieve a more modern look. Xubuntu 11.04 could be installed with one of two CDs. The Xubuntu 11.04 standard CD required 4.4 GB of hard disk space and 256 MB of RAM to install, while the alternate CD, which used a text-based installer, required 64 MB of RAM and 2 GB of disk space for installation and provided additional options. Once installed, Xubuntu 11.04 could run with 256 MB of RAM, but 512 MB was "strongly recommended". In reviewing Xubuntu 11.04, Jim Lynch of Desktop Linux Reviews faulted the release for its lack of LibreOffice, its dull default wallpaper and the default automatic hiding of the bottom panel.
In praising the release he said "Xubuntu 11.04 is a good choice for minimalists who prefer a desktop environment not bogged down with pointless eye-candy. It should work well on older or slower hardware. It's also a good option for those who dislike Unity and want a different desktop environment. Xfce is simple, fast and doesn't get in your way when you are trying to quickly launch an application or otherwise find something. And those who decide to use Xubuntu still remain in the Ubuntu family without the headache of dealing with Unity. So if you're a Unity resister, you should definitely check out Xubuntu 11.04." Joe Brockmeier of Linux.com, in reviewing Xubuntu 11.04, praised the inclusion of AbiWord and Gnumeric over LibreOffice, as well as the Catfish file search utility. He added, "Though I've usually used the mainline Ubuntu release when I use Ubuntu, I have to say that I really like the latest iteration of Xubuntu. It does a great job of showcasing Xfce while providing a unique desktop that gives all the pluses of Ubuntu while still being a bit more like a traditional Linux desktop." As of the Xubuntu 11.04 release, the developers "strongly recommend" 512 MB of RAM to use Xubuntu. However, at least 1 GB of memory is recommended to "get a smooth experience when running multiple applications parallel on the desktop". Xubuntu 11.10 Xubuntu 11.10 was released on 13 October 2011, the same day that Ubuntu 11.10 was released. In this release, gThumb became the new image viewer/organizer, Leafpad replaced Mousepad as the default text editor and LightDM was introduced as the log-in manager. The release also incorporated pastebinit for cut and paste actions. In reviewing Xubuntu 11.10 on the Acer eM350 netbook, Michael Reed of Linux Journal noted the extensive hardware support out of the box, attractive theme and good performance on 1 GB of RAM. He did remark on the inferior Adobe Flash performance compared to the Windows version of Flash, particularly in full-screen mode (something common to all Linux Flash installations), as well as the lack of native support for Samba networking, although he was quickly able to install it. Reed concluded "my overall assessment was that Xubuntu 11.10 was a better fit than Windows XP on this netbook. Being fair, one has to remember that XP is now ten years old. Xfce is going to get better and better, and it's already very comprehensive. There is a growing contingent of users for whom the direction that KDE4 and Gnome 3 have taken doesn't ring true, and increasingly, Xfce is going to be the first choice for them." In reviewing 11.10, Brian Masinick of IT Toolbox praised its low RAM usage and said the "Xubuntu 11.10 release is a fresh relief for those who simply want a nice, functional system. Xubuntu 11.10 delivers, and excels in providing a functional, no frills or surprises, responsive, and usable desktop system." Xubuntu 12.04 LTS Xubuntu 12.04 was released on 26 April 2012. It was a Long Term Support release and was supported for three years, until April 2015. This contrasts with Edubuntu, Kubuntu and Ubuntu 12.04 which, while also LTS releases, were all supported for five years. Xubuntu 12.04 incorporated many changes, including some default shortcuts which were altered and new ones added, plus many appearance changes, including a new logo and wallpaper. Fixes were included for the Greybird, Ubiquity, Plymouth, LightDM, and Terminal themes. The release shipped with version 3.2.14 of the Linux kernel.
Pavucontrol was introduced to replace xfce4-mixer, as the latter did not support PulseAudio. The Alacarte menu editor was used by default. The minimum system requirements for this release were 512 MiB of RAM, 5 GB of hard disk space, and a graphics card and monitor capable of at least 800×600 pixel resolution. Whisker Menu, a new application launcher for Xubuntu, was introduced via a Personal Package Archive for Xubuntu 12.04 LTS. It proved a popular option and later became the default launcher in Xubuntu 14.04 LTS. Xubuntu 12.10 Xubuntu 12.10 was released on 18 October 2012. This release introduced the use of Xfce 4.10, as well as new versions of Catfish, Parole, LightDM, Greybird and the Ubiquity slideshow. The application menu was slightly reorganized, with settings-related launchers moved to the Settings Manager. The release also included updated artwork, new desktop wallpaper, a new look for the documentation and completely rewritten offline documentation. On 32-bit systems, hardware supporting PAE is required. The release included one notable bug fix: "No more window traces or 'black on black' in installer". This release of Xubuntu does not support UEFI Secure Boot, unlike Ubuntu 12.10, which allows Ubuntu to run on hardware designed for Windows 8. It was expected that this feature would be included in the next release of Xubuntu. Xubuntu 12.10 included Linux kernel 3.5.5, Python 3.2 and OpenJDK 7 as the default Java implementation. The minimum system requirements for this release of Xubuntu are 512 MB of system memory (RAM), 5 GB of disk space and a graphics card and monitor capable of at least 800×600 pixels resolution. Xubuntu 13.04 Xubuntu 13.04 was released on 25 April 2013. It was intended as a maintenance release with few new features. It incorporated updated documentation, a new version of Catfish (0.6.1), updates to the Greybird theme, the reintroduction of GIMP and Gnumeric, a new version of Parole (0.5.0), and a fix so that duplicate partitions are no longer shown on the desktop or in the Thunar file manager. This was the first version of Xubuntu with a support period of 9 months for the interim (non-LTS) releases, instead of 18 months. Starting with this release, the Xubuntu ISO images no longer fit on a CD, as they average 800 MB. The new image target media is at least a 1.0 GB USB device or DVD. The decision to change the ISO image size was based upon the amount of developer time spent trying to shrink the files to fit them on a standard-size CD. This ISO size change also allowed the inclusion of two applications that had previously been dropped due to space constraints, Gnumeric and GIMP. Xubuntu 13.10 Xubuntu 13.10 was released on 17 October 2013. This release included some improvements over the previous release, including a new version of xfce4-settings and a new dialog box for display settings. There was also a new color theme tool, and gtk-theme-config was added as a default. This release also included new wallpaper, new GTK+ themes with GTK 3.10 support, and the LightDM greeter. The official Xubuntu documentation was also updated. In reviewing Xubuntu 13.10, Jim Lynch stated: "Xubuntu 13.10, like its cousin Lubuntu 13.10, is a great choice if you're a minimalist. It's fast, stable and offers many of the advantages of Ubuntu 13.10 without the Unity experience (or torture, depending on your perspective)." Xubuntu 14.04 LTS Xubuntu 14.04 LTS was released on 17 April 2014 and, being an LTS, featured three years of support.
It incorporated Xfdesktop 4.11, the Mugshot user account profile editor, the MenuLibre menu editor in place of Alacarte and the Light-locker screen lock to replace Xscreensaver. The Whisker Menu was introduced as the default application-launching menu, having formerly been a Personal Package Archive option introduced in Xubuntu 12.04 LTS. It replaced the previous default menu system. Xfdesktop also supported using different wallpapers on each workspace. Jim Lynch reviewed Xubuntu 14.04 LTS and concluded: "I've always been a fan of Xubuntu as I tend to go for lightweight desktops versus ones with a lot more glitz and features. So I was quite pleased with Xubuntu 14.04. It's true that you aren't going to find tons of earth shattering features in this release, and that's fine because it's a long term support release anyway. I never expect new feature overload in LTS releases since the emphasis is on stability and polish. But Xubuntu 14.04 LTS is a definite improvement from the last version. The overall experience has been polished up significantly, and there are some small but useful features added like Mugshot, Light Locker and MenuLibre, and of course Whiskermenu." Xubuntu 14.10 Xubuntu 14.10 was released on 23 October 2014. This release incorporated very few new features. Changes included a new Xfce Power Manager plugin added to the panel, and items in the new alt-tab dialog could now be clicked with the mouse. To illustrate the customization of the operating system, 14.10 featured pink highlight colours, something that could easily be changed by users if desired. Silviu Stahie, writing for Softpedia, stated: "Xubuntu releases are usually very quiet and we rarely see them overshadowing the Ubuntu base, but this is exactly what happened this time around. The devs have made a number of very important modifications and improvements, but they have also changed a very important aspect of the desktop, at least for the duration of the support of the distribution...The devs figured that it might be a good idea to show just how easy is to change things in the distribution...To be fair, this is the kind of change that you either love or hate, but fortunately for the users, it's very easy to return to default." Xubuntu 15.04 Xubuntu 15.04 was released on 23 April 2015. This release featured Xfce 4.12 and included new colour schemes, with redundant File Manager (Settings) menu entries removed. Otherwise this release was predominantly a bug-fix and package-upgrade release, with very few significant changes. Marius Nestor of Softpedia noted, "The biggest feature of the newly announced Xubuntu 15.04 distro is the integration of the Xfce 4.12 desktop environment...Among other highlights...we can mention new and updated Xubuntu Light and Dark color schemes in the Mousepad and Terminal applications, but the former is now using the Xubuntu Light theme by default...Additionally, the distribution now offers better appearance for Qt applications, which will work out of the box using Xubuntu's GTK+ theme by default and removes the redundant File Manager (Settings) menu entry." Xubuntu 15.10 Xubuntu 15.10 was released on 22 October 2015. This release had only minimal changes over 15.04. It incorporated the Xfce4 Panel Switch for the backup and restoration of panels and included five preset panel layouts. Greybird accessibility icons were used for the window manager.
Gnumeric and AbiWord were replaced with LibreOffice Calc and LibreOffice Writer, and a new default LibreOffice theme, libreoffice-style-elementary, was provided. Joey Sneddon of OMG Ubuntu described Xubuntu 15.10 as incorporating only "a modest set of changes." Xubuntu 16.04 LTS Released on 21 April 2016, Xubuntu 16.04 is an LTS version, supported for three years until April 2019. This release offered few new features. It included a new package of wallpapers and the replacement of the Ubuntu Software Center with GNOME Software, the same as in Ubuntu 16.04 LTS. Reviewer Jack Wallen said, "The truth of the matter is, the Ubuntu Software Center has been a horrible tool for a very long time. Making this move will greatly improve the Ubuntu experience for every user." The selection of wallpapers available in Xubuntu 16.04 LTS was singled out by OMG Ubuntu as "all breathtakingly beautiful". The first point release, 16.04.1, was released on 21 July 2016. The release of Xubuntu 16.04.2 was delayed a number of times, but it was eventually released on 17 February 2017. Xubuntu 16.04.3 was released on 3 August 2017. Xubuntu 16.04.4 was delayed from 15 February 2018 and released on 1 March 2018. Xubuntu 16.04.5 was scheduled for release on 2 August 2018. Xubuntu 16.10 Xubuntu 16.10 was released on 13 October 2016. This version of Xubuntu introduced very few new features. The official release notice stated, "This release has seen little visible change since April's 16.04, however much has been done towards supplying Xubuntu with Xfce packages built with GTK3, including the porting of many plugins and Xfce Terminal to GTK3." Reviewer Joey Sneddon of OMG Ubuntu! noted, "Xubuntu 16.10 has only a modest change log version [over] its April LTS release". In reviewing Xubuntu 16.10, Gary Newell of Everyday Linux said, "Xubuntu has always been one of my favourite distributions. It doesn't look as glamourous as some of the other Linux offerings out there and it certainly doesn't come with all the software you need pre-installed. The thing that Xubuntu gives you is a great base to start from...The truth is that nothing much really changes with Xubuntu. It is solid, steady and it doesn't need to change..." He did fault the installation of some software packages, which don't appear in the graphical software tools but can be installed from the command line. Xubuntu 17.04 Xubuntu 17.04 was released on 13 April 2017. Joey Sneddon of OMG Ubuntu indicated that this release consisted mostly of bug fixes and had little in the way of new features. Xubuntu 17.10 Xubuntu 17.10 was released on 19 October 2017. This release included only minor changes, such as the GNOME Font Viewer being included by default and the client-side decorations consuming less space within the Greybird GTK+ theme. DistroWatch noted that Xubuntu 17.10 "includes significant improvements to accelerated video playback on Intel video cards. The distribution also includes support for driverless printing and includes the GNOME Font Viewer by default." Reviewer Joey Sneddon of OMG Ubuntu said of this release, "Xubuntu 17.10 is another iterative release, featuring only modest changes (if it ain't broke, don't fix it)." Xubuntu 18.04 LTS Xubuntu 18.04 is a long-term support version, released on 26 April 2018. This version removed the GTK Theme Configuration tool and upgraded the Greybird GTK+ theme to version 3.22.8, bringing HiDPI support, Google Chrome GTK+ 3 styles and a new dark theme. Sound Indicator was replaced by the Xfce PulseAudio Plugin.
The release introduced a new plugin for the panel, xfce4-notifyd. Also, Evince was replaced by Atril, GNOME File Roller by Engrampa, and GNOME Calculator by MATE Calculator. The recommended system requirements for this release are at least 1 GB of RAM and at least 20 GB of free hard disk space. Igor Ljubuncic from Dedoimedo wrote a review of Xubuntu 18.04: Xubuntu 18.10 Xubuntu 18.10 was released on 18 October 2018. This release included Xfce components at version 4.13, as the project moved towards a GTK+ 3-only desktop, along with Xfce Icon Theme 0.13, Greybird 3.22.9, which improved the window manager appearance, and a new purple wallpaper. The recommended system requirements for this release remained at least 1 GB of RAM and at least 20 GB of free hard disk space. Igor Ljubuncic from Dedoimedo wrote a review in which he stated, "Xubuntu 18.10 Cosmic Cuttlefish is a pretty standard, run-of-the-mill distro, without any superb features or amazing wow effect. It kind of works, the defaults are somewhat boring, and you need to manually tweak things to get a lively, upbeat feel" Xubuntu 19.04 Xubuntu 19.04 was released on 18 April 2019. Starting with this version, Xubuntu no longer offered 32-bit ISOs. In this release, new default applications were included, such as GIMP, LibreOffice Impress, LibreOffice Draw and AptURL, while Orage was removed. This release was predominantly a bug-fix release with few changes, but it also included new screenshot tools and updated Xfce 4.13 components, using components from the development branch for Xfce 4.14. A review in Full Circle magazine concluded: "Xubuntu 19.04 is a strong release. It is pretty much flawless as a desktop OS, which really is to be expected for a 27th release. It provides a simple and elegant experience for users that allows them to get work done. No flash or splash, just a very mature distribution that gets incrementally better with each release" Igor Ljubuncic wrote a review in Dedoimedo, in which he concluded, "Xubuntu 19.04 Disco Dingo is a fairly decent release for the bi-annual non-LTS testbed. It's stable enough, sort of mature bordering on boring, fast when it needs to be, pretty when you make it, and gives a relatively rounded overall experience. But then, it also falls short in quite a few areas. These look more like nuggets of apathy than deliberate omissions - networking woes with Samba and Bluetooth, customization struggle, less than adequate battery life, some odd niggles here and there. It just feels like a tickbox exercise rather than a beautiful fruit of labor, passion and fun. It is somewhat better than Cosmic, but it's nowhere near as exciting as Xfce (or rather Xubuntu) used to be three years back. 7/10, and worth testing, but don't crank your adrenaline pump too high." Xubuntu 19.10 This standard release was the last one before the next LTS release and arrived on 17 October 2019. This release included Xfce 4.14, which was completed in August 2019 after nearly four and a half years of development work. Other changes included the Xfce Screensaver replacing Light Locker for screen locking, new desktop keyboard shortcuts, and the ZFS file system and logical volume manager included on an experimental basis for the root file system. In a lengthy review on DistroWatch, Jesse Smith wrote, "I feel I do not get to say this often enough: this distribution is boring in the best possible way. Even with new, experimental filesystem support and a complete shift in the libraries used to power the Xfce desktop, Xubuntu is beautifully stable, fast, and easy to navigate ...
Perhaps what I appreciated most about Xubuntu was that it did not distract me or get in the way at all. I did not see a notification or a pop-up or welcome screen during my trial. The distribution just installed and got out of my way so I could start working ... On the whole I am impressed with Xubuntu 19.10. I found myself wishing this was an LTS release as I would like to put this version on several computers, particularly those of family members who run Linux on laptops. Xubuntu is providing a great balance between new features, stability, performance, and options and I highly recommend it for almost any desktop scenario." A review in Full Circle magazine in November 2019 criticized the window theming and the default wallpaper, terming it "dull and uninspired". The review also noted, "with 28 releases, Xubuntu is a very mature operating system. It provides users with a solid, stable, elegant desktop experience that is quick to learn and very easy to use. Mostly it lacks unnecessary flash and bling, and instead stays out of the way and lets users get work done and very efficiently, too. Xubuntu 19.10 is a release that brings small, incremental changes, with updates and polish that all bode very well for a good, solid spring 2020 LTS release." Xubuntu 20.04 LTS This is a long-term support version and was released on 23 April 2020. Xubuntu 20.04.1 LTS was released on 6 August 2020. As is common with LTS releases, this one introduced very few new features. A new dark-colored windowing theme was included, Greybird-dark, as were six new community-submitted wallpaper designs. The applications apt-offline and pidgin-libnotify were not included and Python 2 support was removed. A review of this release in Debug Point concluded, "this release is another significant milestone for Xubuntu 20.04 LTS among other popular Ubuntu-based distributions. Xubuntu 20.04 managed to bring the latest to its users who are still using older hardware and devices with its offerings." DistroWatch reviewer Jeff Siegel gave this release a positive review, noting that Xubuntu is often undervalued. He wrote, "Xubuntu has always been the quiet middle child in the Ubuntu family, the one that was always overlooked in favour of its older siblings, the glitzy Kubuntu and the rock star Ubuntu - and even for the younger ones, like the oh so retro Ubuntu MATE. All Xubuntu has ever done is offer a solid, dependable, mostly error-free, long-term release every two years. Given a world of Linux distro hoppers, Plasma desktop, and extras like the GNOME and MX Linux tweak tools, and the Zorin browser chooser, who needs something like Xubuntu? A lot of us. We value the distro's dependability and continuity, its lack of controversy, and that it just works, almost and always, straight out of the box. In this, Xubuntu 20.04, Focal Fossa, continues the distro's tradition." Igor Ljubuncic from Dedoimedo reviewed the release in May 2020, writing: "Xubuntu 20.04 Focal Fossa is not a release worth its long-term support badge. It's not exciting, it has ergonomic problems, it has bugs, and it offers a lethargic experience. There's really no sense of pride. Inertia only. If we look at dry facts, you get an average score across the board. Some problems in pretty much every aspect. Things work, but it's a bare minimum. The sweet momentum that was, back in 2017 or so, gone. Well, there you go. Hopefully, the results will improve over time, but I'm doubtful. I've not seen anything really cool or fresh in the Xfce desktop per se for a while now.
Xubuntu could work for those looking for a very spartan XP-like experience." A Full Circle review concluded, "Xubuntu 20.04 LTS is the Long Term Support release that Xubuntu fans have been waiting for. This 29th Xubuntu release is graceful in design, stable, and simple to use. New users will find that it comes with most of the software needed to get straight to productive work. Experienced Xubuntu users will find this LTS release very familiar, just an update without any unwelcome surprises, but with three years' worth of support. If it had some better default window themes it would be just about perfect." Xubuntu 20.10 This standard release came out on 22 October 2020. The Xubuntu developers transitioned their code base to GitHub for this release, and otherwise there were no changes over Xubuntu 20.04 LTS. On 23 October 2020, reviewer Sarvottam Kumar of FOSS Bytes noted of this release, "out of all Ubuntu flavors, Xubuntu 20.10 seems the least updated variant containing the same Xfce 4.14 desktop environment as long-term Xubuntu 20.04 has. This is because the next Xfce 4.16 is still under development, with the first preview released last month." A Full Circle Magazine review concluded: "even though Xubuntu 20.10 is a solid release and works very well, it is difficult to recommend it to users already running Xubuntu 20.04 LTS due to the lack of any changes. Unless the user needs support for new hardware from the new Linux kernel or has a hot, burning desire to use LibreOffice 7 instead of 6, there really is no compelling reason to upgrade from Xubuntu 20.04 LTS. Comparing the support periods of nine months (until July 2021) for 20.10 versus three years (to April 2023) for 20.04 LTS, again, there isn't much to entice users to switch." Igor Ljubuncic of Dedoimedo panned the release, writing, "Xubuntu 20.10 simply does not radiate pride, quality and attention to detail that would warrant investment from the user ... this feels like a system trapped in time and lethargy." Xubuntu 21.04 Xubuntu 21.04 is a standard release, released on 22 April 2021. This release introduced Xfce 4.16, which exclusively uses GTK3. A new minimal installation option was available. It also included two new applications, the HexChat IRC client and the Synaptic package manager, as well as some general user interface changes. A review in Full Circle magazine concluded, "after making no changes in Xubuntu 20.10, it seems that the Xubuntu developers are not going to sit out this entire development cycle. Starting with 21.04, they have introduced some minor refinements. When you have a loyal user following, you need to proceed cautiously. Most Xubuntu users I know love the OS and don't want to see big changes. The result here, in Xubuntu 21.04, is a good solid release that will keep users happy on the road to the next LTS version." Xubuntu 21.10 Xubuntu 21.10 is a standard release, released on 14 October 2021. This release included the addition of GNOME Disk Analyzer, GNOME Disk Utility, and the media playback software Rhythmbox. In a review in Softpedia, Vladimir Ciobica praised the new selection of applications included in Xubuntu 21.10. Applications As of the 20.04 LTS release, Xubuntu ships with a standard set of default applications. It also includes the GNOME Software storefront, which allows users to download additional applications from the Ubuntu repositories. Table of releases Xubuntu versions are released twice a year, coinciding with Ubuntu releases.
Xubuntu uses the same version numbers and code names as Ubuntu, using the year and month of the release as the version number. The first Xubuntu release, for example, was 6.06, indicating June 2006. Xubuntu releases are also given code names, using an adjective and an animal with the same first letter, e.g., "Dapper Drake" and "Intrepid Ibex". These are the same as the respective Ubuntu code names. Xubuntu code names are in alphabetical order, allowing a quick determination of which release is newer, although the sequence started at "D" (Dapper Drake) and later wrapped around to the start of the alphabet with releases such as Artful Aardvark (17.10), Bionic Beaver (18.04) and Cosmic Cuttlefish (18.10). Commonly, Xubuntu releases are referred to by developers and users by only the adjective portion of the code name, for example Intrepid Ibex is often called just Intrepid. Long Term Support (LTS) releases are supported for three years, while standard releases are supported for nine months. Derivatives Xubuntu has been developed into several new versions by third-party developers: Element OS A distribution for home theater PCs — discontinued in 2011. Emmabuntüs A distribution designed to facilitate the repacking of computers donated to Emmaüs Communities. GalliumOS A Linux distribution for Chrome OS devices. OzOS A now-defunct Linux distribution based on a severely stripped-down version of Xubuntu. It focused on Enlightenment (e17), compiled directly from SVN source; e17 could be updated easily from SVN by a click on an icon or from the CLI using morlenxus' script. Black Lab Linux (previously OS4 and PC/OS) A derivative of Xubuntu whose interface was made to look like BeOS. A 64-bit version was released in May 2009. In 2010 PC/OS adopted a look more unified with its parent distribution, and a GNOME version was released on 3 March 2010. It was renamed Black Lab Linux on 19 November 2013. UberStudent Linux A discontinued education-use derivative of Xubuntu LTS releases. UserOS Ultra A minimal Xubuntu variant produced for Australia's PC User magazine. Voyager A French distribution which comes with the Avant Window Navigator. ChaletOS An English distribution similar to the Windows operating system in appearance. See also List of Ubuntu-based distributions Open-source software References External links 2006 software IA-32 Linux distributions Operating system distributions bootable from read-only media Ubuntu derivatives X86-64 Linux distributions Xfce Linux distributions
Operating System (OS)
643
Machine-dependent software Machine-dependent software is software that runs only on a specific computer. Applications that run on multiple computer architectures are called machine-independent, or cross-platform. Many organisations opt for machine-dependent software because they believe it is an asset that will attract more buyers. Organizations that want application software to work on heterogeneous computers may port that software to the other machines. Deploying machine-dependent applications on other architectures requires porting. This procedure involves writing, or rewriting, the application's code to suit the target platform. Porting Porting is the process of converting an application from one architecture to another. Software languages such as Java are designed so that applications can migrate across architectures without source code modifications. The term is applied when programming or equipment is changed to make it usable in a different architecture. Code that does not operate properly on a specific system must be ported to another system. Porting effort depends on a few variables, including the degree to which the original environment (the source platform) differs from the new environment (the target platform) and the experience of the developers with platform-specific programming languages. Many languages offer a machine-independent intermediate code that can be processed by platform-specific interpreters to address incompatibilities. The intermediate representation defines a virtual machine that can execute all modules written in the intermediate language. The intermediate code instructions are translated into distinct machine code sequences by a code generator to create executable code. The intermediate code may also be executed directly, without static conversion into platform-specific code (a minimal sketch of this approach appears at the end of this entry). Approaches Port the translator. This can be coded in portable code. Adapt the source code to the new machine. Execute the adapted source using the translator, with the code generator source as data. This will produce the machine code for the code generator. See also Virtual machine Java (programming language) Hardware-dependent software References External links
Software architecture
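To make the intermediate-code idea above concrete, here is a minimal sketch in Python of a toy stack-based intermediate language and its portable interpreter. Everything in it (the opcode names and the module format) is invented for illustration; the point is only that the module itself is machine-independent while all platform detail lives in the interpreter.

# Toy illustration of machine-independent intermediate code: the module
# below can run unchanged on any host that has this (portable) interpreter.
def run(module):
    stack = []
    for op, *args in module:
        if op == "PUSH":              # push an immediate operand
            stack.append(args[0])
        elif op == "ADD":             # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":             # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":           # host-specific I/O stays inside the interpreter
            print(stack.pop())
        else:
            raise ValueError("unknown opcode: " + op)

module = [("PUSH", 6), ("PUSH", 7), ("MUL",), ("PRINT",)]
run(module)                           # prints 42 on any platform

A code generator, by contrast, would translate the same module into native machine instructions ahead of time instead of dispatching on each opcode at run time.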
Operating System (OS)
644
Linux-powered device Linux-based devices or Linux devices are computer appliances that are powered by the Linux kernel and possibly parts of the GNU operating system. Device manufacturers' reasons to use Linux vary: low cost, security, stability, scalability or customizability. Many original equipment manufacturers use free and open source software to brand their products. Community-maintained Linux devices are also available. Community-maintained devices These devices were not intended to run Linux at the time of their production, but a community effort made possible either full or partial Linux support. Because the Linux kernel's source code is free and open, many people have ported it to run on devices other than a typical desktop, laptop or server computer. Some ports are performed by committed individuals or groups to provide alternative software on their favorite hardware. Examples include iPods, PlayStations, Xbox, TiVo, and WRT54G. The original hardware vendors are in some cases supportive of these efforts (Linksys with the WRT54G) or at least tolerate the use of such software by end users (see TiVo hacking). Others go to great lengths to try to stop these alternative implementations. Android Android is a Linux-based operating system optimised for mobile touchscreen environments—smartphones, tablets, e-readers, and the like. Developed, published, and maintained by Google's Android Open Source Project (in consultation with the Open Handset Alliance), Android relieves smartphone manufacturers of the costs of developing or licensing proprietary handset operating systems. First unveiled in 2007, Android became the world's most widely deployed smartphone platform in Q4 2010. By September 2012, 500 million Android devices had been activated, with a further 1.3 million devices being activated per day. Google Nexus developer phones are the flagship brand of Android handsets, with features and capabilities that represent the state of the art at the time of launch (typically every eleven months). License violations In most of these cases the OEMs are open about their use of such software and fulfil the requirements of their free software licenses, such as the GNU General Public License (GPL), but in a small number of cases this use is masked, either deliberately or through professed ignorance or misunderstanding. Violators are usually found through public records, where they may be forced to declare their implementations, or through their own advertising, for example "Embedded Software Engineers with Mandatory Linux Experience Required" on their careers pages, while their site or product documentation offers no source download or offer to supply the software source as required by the GPL. Organizations such as gpl-violations.org, the Free Software Foundation (FSF) and the Software Freedom Law Center (SFLC) are now more organized at pursuing such violators and obtaining compliance. Usually, they seek voluntary compliance as a first step and only enter legal proceedings when blocked. When notified of violations they confirm them by asking the supplier, examining available product samples, or even going so far as to make blind purchases of the product through front companies.
See also Open-design movement Open-source robotics References External links a website on devices that can run embedded Linux LinuxDevices Archive a complete LinuxDevices.com archive (15,000 articles) LinuxDevices Forum LinuxGizmos a reborn implementation of the LinuxDevices website Linux-based devices Linux Consumer electronics
Operating System (OS)
645
Kickstart (Linux) The Red Hat Kickstart installation method is used by Fedora, Red Hat Enterprise Linux and related Linux distributions to automatically perform unattended operating system installation and configuration. Red Hat publishes Cobbler as a tool to automate the Kickstart configuration process. Usage Kickstart is normally used at sites with many such Linux systems, to allow easy installation and consistent configuration of new computer systems. Kickstart configuration files can be built in three ways: By hand. By using the GUI system-config-kickstart tool. By using the standard Red Hat installation program Anaconda. Anaconda will produce an anaconda-ks.cfg configuration file at the end of any manual installation. This file can be used to automatically reproduce the same installation, or can be edited (manually or with system-config-kickstart). Structure The kickstart file is a simple text file, containing a list of items, each identified by a keyword. While not strictly required, there is a natural order for sections that should be followed. Items within the sections do not have to be in a specific order unless otherwise noted. The section order is: Command section – single line general purpose commands. The %packages section – listing of software packages to be installed & related options. The %pre, %pre-install, %post, %onerror, and %traceback sections – can contain scripts that will be executed at the appropriate time during the installation. The %packages, %pre, %pre-install, %post, %onerror, and %traceback sections are all required to be closed with %end. Items that are not required for the given installation run can be omitted. Lines starting with a pound sign (#) are treated as comments and are ignored. If deprecated commands, options, or syntax are used during a kickstart installation, a warning message will be logged to the anaconda log. Since deprecated items are usually removed within a release or two, it makes sense to check the installation log to make sure you haven't used any of them. When using ksvalidator, deprecated items will cause an error (a programmatic validation sketch appears at the end of this entry). Example A simple Kickstart for a fully automated Fedora installation:

# use Fedora mirror as installation source, set Fedora version and target architecture
url --mirrorlist=http://mirrors.fedoraproject.org/mirrorlist?repo=fedora-33&arch=x86_64
# set installation language
lang en_US.UTF-8
# set keyboard
keyboard us
# set root password
rootpw 12345
# create a sudo capable user
user --name wikipedia-user --password 12345 --groups=wheel
# set timezone
timezone America/New_York
# clear all existing storage (!)
zerombr
clearpart --all --initlabel
# automatically create default storage layout
autopart
%packages
# install the Fedora Workstation environment
@^Fedora Workstation
# install some package groups
@3D Printing
@C Development Tools and Libraries
@System Tools
# install some packages
vim
git
mc
%end

External links Kickstart commands in Fedora - Kickstart command reference for Fedora Kickstart command and options reference - Kickstart command reference for RHEL 8 See also Digital Rebar Provision (Debian, Ubuntu, RHEL, CentOS, SuSE, vSphere ESXi, CoreOS, Alpine, Rancher, and others) Cobbler Jumpstart (Solaris) Preseed (Debian, Ubuntu) FAI (Debian, Ubuntu, CentOS, SUSE) AutoYaST (SUSE) System Installer Network Installation Manager References Linux package management-related software
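As a hedged illustration of the validation step mentioned above, the pykickstart library (the package that provides ksvalidator) can parse a kickstart file programmatically. This sketch assumes pykickstart's commonly documented API; "ks.cfg" is a placeholder name for a file like the one in the Example section.

# Parse and sanity-check a kickstart file with pykickstart.
from pykickstart.parser import KickstartParser
from pykickstart.version import makeVersion
from pykickstart.errors import KickstartError

parser = KickstartParser(makeVersion())   # command handler for the default syntax version
try:
    parser.readKickstart("ks.cfg")        # reads the commands and %sections
    print("ks.cfg parsed cleanly")
except KickstartError as err:
    print("kickstart problem:", err)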
Operating System (OS)
646
Peopleware Peopleware is a term used to refer to one of the three core aspects of computer technology, the other two being hardware and software. Peopleware can refer to anything that has to do with the role of people in the development or use of computer software and hardware systems, including such issues as developer productivity, teamwork, group dynamics, the psychology of programming, project management, organizational factors, human interface design, and human–machine interaction. Overview The concept of peopleware in the software community covers a variety of aspects: Development of productive persons Organizational culture Organizational learning Development of productive teams, and Modeling of human competencies. History The neologism, first used by Peter G. Neumann in 1977 and independently coined by Meilir Page-Jones in 1980, was popularized in the 1987 book Peopleware: Productive Projects and Teams by Tom DeMarco and Timothy Lister. The term Peopleware also became the title and subject matter of a long-running series of columns by Larry Constantine in Software Development magazine, later compiled in book form. References Consulting Software project management
Operating System (OS)
647
IBM System/7 The IBM System/7 was a computer system designed for industrial control, announced on October 28, 1970 and first shipped in 1971. It was a 16-bit machine and one of the first made by IBM to use novel semiconductor memory, instead of the magnetic core memory that was conventional at that date. IBM had earlier products in the industrial control market, notably the IBM 1800, which appeared in 1964. However, there was minimal resemblance in architecture or software between the 1800 series and the System/7. System/7 was designed and assembled in Boca Raton, Florida. Hardware architecture The processor designation for the system was IBM 5010. There were 8 registers, which were mostly general purpose (capable of being used equally in instructions), although R0 had some extra capabilities for indexed memory access or system I/O. Later models may have been faster, but the versions existing in 1973 had register-to-register operation times of 400 ns, memory read operations at 800 ns, memory write operations at 1.2 µs, and direct IO operations were generally 2.2 μs. The instruction set would be familiar to a modern RISC programmer, with the emphasis on register operations and few memory operations or fancy addressing modes. For example, the multiply and divide instructions were done in software and needed to be specifically built into the operating system to be used (a sketch of the shift-and-add technique typically used for this appears in the Software discussion below). The machine was physically compact for its day, designed around chassis/gate configurations shared with other IBM machines such as the 3705 communications controller, and a typical configuration would take up one or two racks about high; the smallest System/7s were only about high. The usual console device was a Teletype Model 33 ASR (designated as the IBM 5028), which was also how the machine would generally read its boot loader sequence. Since the semiconductor memory emptied when it lost power (in those days, losing memory when you switched off the power was regarded as a novelty) and the S/7 didn't have ROM, the machine had minimal capabilities at startup. It typically would read a tiny bootloader from the Teletype, and then that program would in turn read in the full program from another computer, from a high speed paper tape reader, or from an RPQ interface to a tape cassette player. Although many of the external devices used on the system used the ASCII character set, the internal operation of the system used the EBCDIC character set which IBM used on most systems. Specialization There were various specializations for process control. The CPU had 4 banks of registers, each of a different priority, and it could respond to interrupts within one instruction cycle by switching to the higher-priority set. Many specialized I/O devices could be configured for things such as analog measurement or signal generation, solid state or relay switching, or TTL digital input and output lines. The machine could be installed in an industrial environment without air conditioning, although there were feature codes available for safe operation in extreme environments. Standard Hardware Units A System/7 is typically a combination of the following: IBM 5010: Processing Module. This module is always present in a System/7. Effectively this is the controller for the System/7, performing arithmetic and logical functions as well as providing control functions. IBM 5012: Multifunction Module. This module handles both digital and analog I/O. It can also be used to control an IBM 2790. IBM 5013: Digital Input/Output Module.
This module handles digital I/O as well as the attachment for custom products. It can also be used to control an IBM 2790. IBM 5014: Analog Input Module. This module could take voltage signals and turn them into data inputs. IBM 5022: Disk Storage Unit. Announced in 1971, it could hold either 1.23 million or 2.46 million 16-bit words. IBM 5025: Enclosure. This is effectively the rack into which the power supplies and I/O modules are installed. IBM 5028: Operator Station. This is a stand-alone station that includes a keyboard and a printer. It also includes a paper tape punch and a paper tape reader. When first announced in 1970, one Operator Station was mandatory for each System/7, but in 1971 IBM announced that one 5028 could be shared by several System/7s. Maritime Application/Bridge System This is a solution specifically for on-board ship navigation. It consists of the following hardware: 5010E Processing Module. This module is always present. 5022 Disk Storage Unit. 5026 C03 Enclosure. This has been modified to handle extended heavy vibrations and tilting. 5028 Operator Station. 5090: N01 Radar Navigation Interface Module (RNIM). Interfaces with OEM equipment such as radar, gyros, and navigation equipment. 5090: N02 Bridge Console. This provides a radar plan position indicator (PPI) that allows the navigator to communicate with and control the system. There are also RPQs to ruggedize the hardware, provide interfaces to various navigation equipment, and provide spares for use on board ship. Software The operating system would more properly be called a monitor. IBM provided a wide variety of subroutines, mostly written in assembler, that could be configured into a minimum set to support the peripherals and the application. The application-specific code was then written on top of the monitor stack. A minimal useful configuration would run with 8 kilobytes of memory, though in practice the size of the monitor and application program was usually 12 kB and upwards. The maximum configuration had 64 kB of memory. The advanced (for the time) semiconductor memory made the machine fast but also expensive, so a lot of work went into minimizing the typical memory footprint of an application before deployment. The development tools normally ran on IBM's 360 computer system and the program image was then downloaded to a System/7 in a development lab by serial link. Up until 1975 at least it was rare to use disk overlays for the programs, with no support for that in the software tools. Hard disks, in the IBM Dolphin line of sealed cartridges, were available but expensive and were generally used as file systems storing data and executable programs (thereby eliminating the need to rely on the paper tape reader for system boot-up). Most work was done in a macro assembly language, with a fairly powerful macro language facility allowing great flexibility in code configuration and generation. Static variable binding, as in Fortran, was the norm and the use of arbitrary subroutine call patterns was rare. The machines were usually deployed for very fixed jobs with a rigidly planned set of software. This often extended to the real-time interrupt latency, using the 4 levels of priority and the carefully crafted software paths to ensure guaranteed latencies.
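The software multiply mentioned under Hardware architecture can be illustrated with the classic shift-and-add technique. The sketch below is Python standing in for 5010 assembler; it is an illustration of the general method, not actual System/7 monitor code.

# Shift-and-add multiplication: how a machine with no hardware multiply
# instruction (such as the System/7's 5010 processor) can form a product
# using only add, shift, and bit-test operations.
def mul16(a, b):
    product = 0
    while b:
        if b & 1:                        # low multiplier bit set?
            product = (product + a) & 0xFFFFFFFF
        a = (a << 1) & 0xFFFFFFFF        # multiplicand doubles each step
        b >>= 1                          # move to the next multiplier bit
    return product

assert mul16(300, 25) == 7500            # 16-bit operands, 32-bit product

At most 16 iterations of the loop are needed for 16-bit operands, which is why such routines were practical to bundle into the monitor.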
Compilers for Fortran and a PL/I subset (PL/7) were available no later than 1976, as larger configurations became more affordable and more complex data processing was required. System/7 programmers still needed to be aware of the actual instructions that were available for use. Much development work was done on S/360 or S/370 using a variation of the HLASM program geared to the MSP/7 macro language. To provide more flexibility in programming the System/7, a group in the IBM San Jose Research Laboratory in San Jose, California developed the LABS/7 operating environment, which, with its language Event Driven Language (EDL), was ported to the Series/1 environment as the very successful Event Driven Executive (EDX). Uses The System/7 was designed to address the needs of specific "real-time" markets which required collecting and reacting to input and output (I/O) from analog devices (e.g. temperature sensors, industrial control devices). This was a very limited market at the time. Specific commercial uses included factory control systems and air conditioning energy control systems. AT&T was also a large customer. However, the main use may have been for what were, at the time, classified military uses. Example customers This is an eclectic list of customers intended to show the variety of use cases for which the System/7 could be employed: In 1971 IBM claimed their first customer delivery of a System/7, made to American Motors Corporation (AMC) in Kenosha, Wisconsin. The system was delivered on September 16, 1971, and installed 24 hours later. It was the first of two that were to be used to measure the emissions of new production automobiles. In 1972 it was reported that the University of Pennsylvania was using remote terminals with card readers, attached to an IBM System/7, to reduce the incidence of meal contract abuse among 2000 students. It helped ensure students did not exceed their meal limits or eat meals in multiple dining rooms in the same meal period. In 1978 it was reported that Pfizer Corporation was using a System/7 equipped with audio-response to allow around 1,300 sales representatives to remotely enter orders through a mini-terminal that could send touch-tone signals via a telephone. They called the system "Joanne". Maritime Application/Bridge System This solution was the first navigational aid that the Control Engineering Department of Lloyd's Register added to their list of Approved Control and Electrical Equipment. The System/7 Maritime Application/Bridge System was designed to make the navigation of large ships safer and more efficient, by reducing the amount of data that bridge personnel needed to correlate while improving how it was presented. It provides five programmed functions: Collision assessment: This uses the ship's radar as well as the speed log and gyrocompass to determine where collision risks exist in up to a 16.5 nautical mile radius. Position fixing: This uses various inputs, including the satellite navigation receiver, Decca Navigator, gyrocompass and ship's speed log, to show the ship's position. Adaptive autopilot: This constantly adapts the ship's steering in response to sea conditions. Route planning: This allows forecasting for navigational changes, based on the ship's current position and then either the entered destination or the next turning point. Routes could be stored and retrieved. Route tracking: This uses boundaries entered by the navigator and position fixing data. It then uses the PPI to display channels or lanes. It could sound an alarm if a boundary was being approached.
Withdrawal The product line was withdrawn from marketing on March 20, 1984. IBM's subsequent product in industrial control was the Series/1, also designed at Boca Raton. References System 7 16-bit computers
Operating System (OS)
648
Amiga 1000 The Commodore Amiga 1000, also known as the A1000 and originally marketed as the Amiga, is the first personal computer released by Commodore International in the Amiga line. It combines the 16/32-bit Motorola 68000 CPU, which was powerful by 1985 standards, with one of the most advanced graphics and sound systems in its class, and runs a preemptive multitasking operating system that fits into 256 KB of read-only memory and shipped with 256 KB of RAM. The primary memory can be expanded internally with a manufacturer-supplied 256 KB module for a total of 512 KB of RAM. Using the external slot, the primary memory can be expanded further, to a practical limit of about 9 MB. Design The A1000 has a number of characteristics that distinguish it from later Amiga models: It is the only model to feature the short-lived Amiga check-mark logo on its case; the majority of the case is elevated slightly to give a storage area for the keyboard when not in use (a "keyboard garage"); and the inside of the case is engraved with the signatures of the Amiga designers (similar to the Macintosh), including Jay Miner and the paw print of his dog Mitchy. The A1000's case was designed by Howard Stolz. As Senior Industrial Designer at Commodore, Stolz was the mechanical lead and primary interface with Sanyo in Japan, the contract manufacturer for the A1000 casing. The Amiga 1000 was manufactured in two variations: One uses the NTSC television standard and the other uses the PAL television standard. The NTSC variant was the initial model manufactured and sold in North America. The later PAL model was manufactured in Germany and sold in countries using the PAL television standard. The first NTSC systems lack the EHB video mode, which is present in all later Amiga models. Because AmigaOS was rather buggy at the time of the A1000's release, the OS was not placed in ROM then. Instead, the A1000 includes a daughterboard with 256 KB of RAM, dubbed the "writable control store" (WCS), into which the core of the operating system is loaded from floppy disk (this portion of the operating system is known as the "Kickstart"). The WCS is write-protected after loading, and system resets do not require a reload of the WCS. In Europe, the WCS was often referred to as WOM (Write Once Memory), a play on the more conventional term "ROM" (read-only memory). Technical information The preproduction Amiga (which was codenamed "Velvet"), released to developers in early 1985, contained less RAM than the production machine; Commodore later increased the system memory to 256 KB due to objections by the Amiga development team. The names of the custom chips were different; Denise and Paula were called Daphne and Portia respectively. The casing of the preproduction Amiga was almost identical to the production version, the main difference being an embossed Commodore logo in the top left corner. It did not have the developer signatures. The Amiga 1000 has a Motorola 68000 CPU running at 7.15909 MHz (on NTSC systems) or 7.09379 MHz (PAL systems), precisely double the video color carrier frequency for NTSC or 1.6 times the color carrier frequency for PAL. The system clock timings are derived from the video frequency, which simplifies glue logic and allows the Amiga 1000 to make do with a single crystal. In keeping with its video game heritage, the chipset was designed to synchronize CPU memory access and chipset DMA so the hardware runs in real time without wait-state delays.
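As a check on the clock figures above, the derivation can be written out explicitly. The carrier values used here (NTSC colour carrier 315/88 MHz and PAL subcarrier approximately 4.433619 MHz) are standard broadcast constants rather than figures quoted in this article:

\[
f_{\text{NTSC}} = 2 \times 3.579545\ \text{MHz} = 7.15909\ \text{MHz}, \qquad
f_{\text{PAL}} = 1.6 \times 4.433619\ \text{MHz} \approx 7.09379\ \text{MHz}
\]

Deriving both CPU clocks from the video carrier in this way is what allows the machine to run from a single crystal, as noted above.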
Though most units were sold with an analog RGB monitor, the A1000 also has a built-in composite video output, which allows the computer to be connected directly to some monitors other than their standard RGB monitor. The A1000 also has a "TV MOD" output, into which an RF Modulator can be plugged, allowing connection to a TV that was old enough not to even have a composite video input. The original 68000 CPU can be directly replaced with a Motorola 68010, which can execute instructions slightly faster than the 68000 but also introduces a small degree of software incompatibility. Third-party CPU upgrades, which mostly fit in the CPU socket, use the faster successor 68020/68881 or 68030/68882 microprocessors and integrated memory. Such upgrades often have the option to revert to 68000 mode for full compatibility. Some boards have a socket to seat the original 68000, whereas the 68030 cards typically come with an on-board 68000. The original Amiga 1000 is the only model to have 256 KB of Amiga Chip RAM, which can be expanded to 512 KB with the addition of a daughterboard under a cover in the center front of the machine. RAM may also be upgraded via official and third-party upgrades, with a practical upper limit of about 9 MB of "fast RAM" due to the 68000's 24-bit address bus. This memory is accessible only by the CPU, permitting faster code execution, as DMA cycles are not shared with the chipset. The Amiga 1000 features an 86-pin expansion port (electrically identical to the later Amiga 500 expansion port, though the A500's connector is inverted). This port is used by third-party expansions such as memory upgrades and SCSI adapters. These resources are handled by the Amiga Autoconfig standard. Other expansion options are available, including a bus expander which provides two Zorro-II slots. Specifications Retail Introduced on July 23, 1985, during a star-studded gala featuring Andy Warhol and Debbie Harry held at the Vivian Beaumont Theater at Lincoln Center in New York City, machines began shipping in September with a base configuration of 256 KB of RAM at the retail price of US$1,295. An analog RGB monitor was available for around US$300, bringing the price of a complete Amiga system to US$1,595. Before the release of the Amiga 500 and Amiga 2000 models in 1987, the A1000 was marketed as simply the Amiga, although the model number was there from the beginning, as the original box indicates. In the US, the A1000 was marketed as The Amiga from Commodore, with the Commodore logo omitted from the case. The Commodore branding was retained for the international versions. Additionally, the Amiga 1000 was sold exclusively in computer stores in the US rather than the various non-computer-dedicated department and toy stores through which the VIC-20 and Commodore 64 were retailed. These measures were an effort to avoid Commodore's "toy-store" computer image created during the Tramiel era. Along with the operating system, the machine came bundled with a version of AmigaBASIC developed by Microsoft and a speech synthesis library developed by Softvoice, Inc. Aftermarket upgrades Many A1000 owners remained attached to their machines long after newer models rendered the units technically obsolete, and the A1000 attracted numerous aftermarket upgrades. Many CPU upgrades that plugged into the Motorola 68000 socket functioned in the A1000.
Additionally, a line of products called the Rejuvenator series allowed the use of newer chipsets in the A1000, and an Australian-designed replacement A1000 motherboard called The Phoenix utilized the same chipset as the A3000 and added an A2000-compatible video slot and on-board SCSI controller. Impact In 1994, as Commodore filed for bankruptcy, Byte magazine called the Amiga 1000 "the first multimedia computer... so far ahead of its time that almost nobody—including Commodore's marketing department—could fully articulate what it was all about". In 2006, PC World rated the Amiga 1000 as the 7th greatest PC of all time. In 2007, it was rated by the same magazine as the 37th best tech product of all time. Joe Pillow "Joe Pillow" was the name given on the ticket for the extra airline seat purchased to hold the first Amiga prototype while on the way to the January 1984 Consumer Electronics Show. The airlines required a name for the airline ticket and Joe Pillow was born. The engineers (RJ Mical and Dale Luck) who flew with the Amiga prototype (codenamed Lorraine) drew a happy face on the front of the pillowcase and even added a tie. Joe Pillow extended his fifteen minutes of fame when the Amiga went to production. All fifty-three Amiga team members who worked on the project signed the Amiga case. This included Joe Pillow and Jay Miner's dog Mitchy, who each got to "sign" the case in their own unique way. See also Amiga models and variants Amiga Sidecar, for using MS-DOS with an Intel 8088 @ 4.77 MHz with 256 KB RAM References External links The Commodore Amiga A1000 at OLD-COMPUTERS.COM Who was Joe Pillow? Amiga computers Computer-related introductions in 1985
Operating System (OS)
649
Michigan Terminal System The Michigan Terminal System (MTS) is one of the first time-sharing computer operating systems. Developed in 1967 at the University of Michigan for use on IBM S/360-67, S/370 and compatible mainframe computers, it was maintained and used by a consortium of eight universities in the United States, Canada, and the United Kingdom over a period of 33 years (1967 to 1999). Overview The University of Michigan Multiprogramming Supervisor (UMMPS) was developed by the staff of the academic computing center at the University of Michigan for operation of the IBM S/360-67, S/370 and compatible computers. The software may be described as a multiprogramming, multiprocessing, virtual memory, time-sharing supervisor that runs multiple resident, reentrant programs. Among these programs is the Michigan Terminal System (MTS) for command interpretation, execution control, file management, and accounting. End-users interact with the computing resources through MTS using terminal-, batch-, and server-oriented facilities. The name MTS refers to: The UMMPS Job Program with which most end-users interact; The software system, including UMMPS, the MTS and other Job Programs, Command Language Subsystems (CLSs), public files (programs), and documentation; and The time-sharing service offered at a particular site, including the MTS software system, the hardware used to run MTS, the staff that supported MTS and assisted end-users, and the associated administrative policies and procedures. MTS was used on a production basis at about 13 sites in the United States, Canada, the United Kingdom, Brazil, and possibly in Yugoslavia, and at several more sites on a trial or benchmarking basis. MTS was developed and maintained by a core group of eight universities that made up the MTS Consortium. The University of Michigan announced in 1988 that "Reliable MTS service will be provided as long as there are users requiring it ... MTS may be phased out after alternatives are able to meet users' computing requirements". It ceased operating MTS for end-users on June 30, 1996. By that time, most services had moved to client/server-based computing systems, typically Unix for servers and various Mac, PC, and Unix flavors for clients. The University of Michigan shut down its MTS system for the last time on May 30, 1997. Rensselaer Polytechnic Institute (RPI) is believed to be the last site to use MTS in a production environment. RPI retired MTS in June 1999. Today, MTS still runs using IBM S/370 emulators such as Hercules, Sim390, and FLEX-ES. Origins In the mid-1960s, the University of Michigan was providing batch processing services on IBM 7090 hardware under the control of the University of Michigan Executive System (UMES), but was interested in offering interactive services using time-sharing. At that time the work that computers could perform was limited by their small real memory capacity. When IBM introduced its System/360 family of computers in the mid-1960s, it did not provide a solution for this limitation, and within IBM there were conflicting views about the importance of and need to support time-sharing. A paper titled Program and Addressing Structure in a Time-Sharing Environment by Bruce Arden, Bernard Galler, Frank Westervelt (all associate directors at UM's academic Computing Center), and Tom O'Brian, building upon some basic ideas developed at the Massachusetts Institute of Technology (MIT), was published in January 1966.
The paper outlined a virtual memory architecture using dynamic address translation (DAT) that could be used to implement time-sharing. After a year of negotiations and design studies, IBM agreed to make a one-of-a-kind version of its S/360-65 mainframe computer with dynamic address translation (DAT) features that would support virtual memory and accommodate UM's desire to support time-sharing. The computer was dubbed the Model S/360-65M. The "M" stood for Michigan. But IBM initially decided not to supply a time-sharing operating system for the machine. Meanwhile, a number of other institutions heard about the project, including General Motors, the Massachusetts Institute of Technology's (MIT) Lincoln Laboratory, Princeton University, and Carnegie Institute of Technology (later Carnegie Mellon University). They were all intrigued by the time-sharing idea and expressed interest in ordering the modified IBM S/360 series machines. With this demonstrated interest IBM changed the computer's model number to S/360-67 and made it a supported product. With requests for over 100 new model S/360-67s IBM realized there was a market for time-sharing, and agreed to develop a new time-sharing operating system called TSS/360 (TSS stood for Time-sharing System) for delivery at roughly the same time as the first model S/360-67. While waiting for the Model 65M to arrive, UM Computing Center personnel were able to perform early time-sharing experiments using an IBM System/360 Model 50 that was funded by the ARPA CONCOMP (Conversational Use of Computers) Project. The time-sharing experiment began as a "half-page of code written out on a kitchen table" combined with a small multi-programming system, LLMPS from MIT's Lincoln Laboratory, which was modified and became the UM Multi-Programming Supervisor (UMMPS) which in turn ran the MTS job program. This earliest incarnation of MTS was intended as a throw-away system used to gain experience with the new IBM S/360 hardware and which would be discarded when IBM's TSS/360 operating system became available. Development of TSS took longer than anticipated, its delivery date was delayed, and it was not yet available when the S/360-67 (serial number 2) arrived at the Computing Center in January 1967. At this time UM had to decide whether to return the Model 67 and select another mainframe or to develop MTS as an interim system for use until TSS was ready. The decision was to continue development of MTS and the staff moved their initial development work from the Model 50 to the Model 67. TSS development was eventually canceled by IBM, then reinstated, and then canceled again. But by this time UM liked the system they had developed, it was no longer considered interim, and MTS would be used at UM and other sites for 33 years. 
MTS Consortium MTS was developed, maintained, and used by a consortium of eight universities in the US, Canada, and the United Kingdom: University of Michigan (UM), 1967 to 1997, US University of British Columbia (UBC), 1968 to 1998, Canada NUMAC (University of Newcastle upon Tyne, University of Durham, and Newcastle Polytechnic), 1969 to 1992, United Kingdom University of Alberta (UQV), 1971 to 1994, Canada Wayne State University (WSU), 1971 to 1998, US Rensselaer Polytechnic Institute (RPI), 1976 to 1999, US Simon Fraser University (SFU), 1977 to 1992, Canada University of Durham (NUMAC), 1982 to 1992, United Kingdom Several sites ran more than one MTS system: NUMAC ran two (first at Newcastle and later at Durham), Michigan ran three in the mid-1980s (UM for Maize, UB for Blue, and HG at Human Genetics), UBC ran three or four at different times (MTS-G, MTS-L, MTS-A, and MTS-I for general, library, administration, and instruction). Each of the MTS sites made contributions to the development of MTS, sometimes by taking the lead in the design and implementation of a new feature and at other times by refining, enhancing, and critiquing work done elsewhere. Many MTS components are the work of multiple people at multiple sites. In the early days collaboration between the MTS sites was accomplished through a combination of face-to-face site visits, phone calls, the exchange of documents and magnetic tapes by snail mail, and informal get-togethers at SHARE or other meetings. Later, e-mail, computer conferencing using CONFER and *Forum, network file transfer, and e-mail attachments supplemented and eventually largely replaced the earlier methods. The members of the MTS Consortium produced a series of 82 MTS Newsletters between 1971 and 1982 to help coordinate MTS development. Starting at UBC in 1974 the MTS Consortium held annual MTS Workshops at one of the member sites. The workshops were informal, but included papers submitted in advance and Proceedings published after-the-fact that included session summaries. In the mid-1980s several Western Workshops were held with participation by a subset of the MTS sites (UBC, SFU, UQV, UM, and possibly RPI). The annual workshops continued even after MTS development work began to taper off. Called simply the "community workshop", they continued until the mid-1990s to share expertise and common experiences in providing computing services, even though MTS was no longer the primary source for computing on their campuses and some had stopped running MTS entirely. MTS sites In addition to the eight MTS Consortium sites that were involved in its development, MTS was run at a number of other sites, including: Centro Brasileiro de Pesquisas Fisicas (CBPF) within the Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Brazil Empresa Brasileira de Pesquisa Agropecuária (EMBRAPA), Brazil Hewlett-Packard (HP), US Michigan State University (MSU), US Goddard Space Flight Center, National Aeronautics and Space Administration (NASA), US A copy of MTS was also sent to the University of Sarajevo, Yugoslavia, though whether or not it was ever installed is not known. 
INRIA, the French national institute for research in computer science and control in Grenoble, France, ran MTS on a trial basis, as did the University of Waterloo in Ontario, Canada, Southern Illinois University, the Naval Postgraduate School, Amdahl Corporation, ST Systems for McGill University Hospitals, Stanford University, and University of Illinois in the United States, and a few other sites. Hardware In theory MTS will run on the IBM S/360-67, any of the IBM S/370 series which include virtual memory, and their successors. MTS has been run on the following computers in production, benchmarking, or trial configurations: IBM: S/360-67, S/370-148, S/370-168, 3033U, 4341, 4361, 4381, 3081D, 3081GX, 3083B, 3090–200, 3090–400, 3090–600, and ES/9000-720 Amdahl: 470V/6, 470V/7, 470V/8, 5860, 5870, 5990 Hitachi: NAS 9060 Various S/370 emulators The University of Michigan installed and ran MTS on the first IBM S/360-67 outside of IBM (serial number 2) in 1967, the second Amdahl 470V/6 (serial number 2) in 1975, the first Amdahl 5860 (serial number 1) in 1982, and the first factory-shipped IBM 3090–400 in 1986. NUMAC ran MTS on the first S/360-67 in the UK and very likely the first in Europe. The University of British Columbia (UBC) took the lead in converting MTS to run on the IBM S/370 series (an IBM S/370-168) in 1974. The University of Alberta installed the first Amdahl 470V/6 in Canada (serial number P5) in 1975. By 1978 NUMAC (at University of Newcastle upon Tyne and University of Durham) had moved main MTS activity on to its IBM S/370 series (an IBM S/370-168). MTS was designed to support up to four processors on the IBM S/360-67, although IBM only produced one (simplex and half-duplex) and two (duplex) processor configurations of the Model 67. In 1984 RPI updated MTS to support up to 32 processors in the IBM S/370-XA (Extended Addressing) hardware series, although 6 processors is likely the largest configuration actually used. MTS supports the IBM Vector Facility, available as an option on the IBM 3090 and ES/9000 systems. In early 1967, running on the single-processor IBM S/360-67 at UM without virtual memory support, MTS was typically supporting 5 simultaneous terminal sessions and one batch job. In November 1967, after virtual memory support was added, MTS running on the same IBM S/360-67 was simultaneously supporting 50 terminal sessions and up to 5 batch jobs. In August 1968 a dual-processor IBM S/360-67 replaced the single-processor system, supporting roughly 70 terminal and up to 8 batch jobs. By late 1991 MTS at UM was running on an IBM ES/9000-720 supporting over 600 simultaneous terminal sessions and from 3 to 8 batch jobs. MTS can be IPL-ed under VM/370, and some MTS sites did so, but most ran MTS on native hardware without using a virtual machine. Features Some of the notable features of MTS include: The use of Virtual memory and Dynamic Address Translation (DAT) on the IBM S/360-67 in 1967. The use of multiprocessing on an IBM S/360-67 with two CPUs in 1968. Programs with access to (for the time) very large virtual address spaces. A straightforward command language that is the same for both terminal and batch jobs. A strong device-independent input/output model that allows the same commands and programs to access terminals, disk files, printers, magnetic and paper tapes, card readers and punches, floppy disks, network hosts, and an audio response unit (ARU).
A file system with support for "line files" where the line numbers and length of individual lines are stored as metadata separate from the data contents of the line, and the ability to read, insert, replace, and delete individual lines anywhere in the file without the need to read or write the entire file (a toy sketch of this structure appears at the end of this article). A file editor ($EDIT) with both command line and "visual" interfaces and pattern matching based on SNOBOL4 patterns. The ability to share files in controlled ways (read, write-change, write-expand, destroy, permit). The ability to permit files, not just to other user IDs and projects (aka groups), but to specific commands or programs and combinations of user IDs, projects, commands and programs. The ability for multiple users to manage simultaneous access to files with the ability to implicitly and explicitly lock and unlock files and to detect deadlocks. Network host-to-host access from commands and programs as well as access to or from remote network printers, card readers and punches. An e-mail system ($MESSAGESYSTEM) that supports local and network mail with the ability to send to groups, to recall messages that haven't already been read, to add recipients to messages after they have been sent, and to display a history of messages in an e-mail chain without the need to include the text from older messages in each new message. The ability to access tapes remotely, and to handle data sets that extend across multiple tapes efficiently. The availability of a rich collection of well-documented subroutine libraries. The ability for multiple users to quickly load and use a collection of common reentrant subroutines, which are available in shared virtual memory. The availability of compilers, assemblers, and a Symbolic Debugging System (SDS) that allow users to debug programs written in high-level languages such as FORTRAN, Pascal, PL/I, ... as well as in assembly language. A strong protection model that uses the virtual memory hardware and the S/360 and S/370 hardware's supervisor and problem states, and via software divides problem-state execution into system (privileged or unprotected) and user (protected or unprivileged) modes. Relatively little code runs in supervisor state. For example, Device Support Routines (DSRs, aka device drivers) are not part of the supervisor and run in system mode in problem state rather than in supervisor state. A simulated Branch on Program Interrupt (BPI) instruction. Programs developed for MTS The following are some of the notable programs developed for MTS: Awit, a computer chess program written in Algol W by Tony Marsland. Chaos, one of the leading computer chess programs from 1973 through 1985. Written in FORTRAN, Chaos started at the RCA Systems Programming division in Cinnaminson, NJ with Fred Swartz and Victor Berman as first authors; Mike Alexander and others joined the team later and moved development to MTS at the UM Computing Center. CONFER II, one of the first computer conferencing systems. CONFER was developed by Robert Parnes starting in 1975 while he was a graduate student and with support from the University of Michigan's Center for Research on Learning and Teaching (CRLT) and School of Education. FakeOS, a simulator that allows object modules containing OS/360 SVCs, control blocks, and references to OS/360 access methods to execute under MTS. Forum, a computer conferencing system developed by staff of the Computing Centre at the University of British Columbia (UBC).
GOM (Good Old Mad), a compiler for the 7090 MAD language converted to run under MTS by Don Boettner of the UM's Computing Center. IF (Interactive Fortran), developed by the University of British Columbia Computing Centre. MICRO Information Management System, one of the earliest relational database management systems, implemented in 1970 by the Institute for Labor and Industrial Relations (ILIR) at the University of Michigan. MIDAS (Michigan Interactive Data Analysis System), an interactive statistical analysis package developed by Dan Fox and others at UM's Statistical Research Laboratory. Plus, a programming language developed by Alan Ballard and Paul Whaley of the Computing Centre at the University of British Columbia (UBC). TAXIR, an information storage and retrieval system designed for taxonomic data at the University of Colorado by David Rogers, Henry Fleming, Robert Brill, and George Estabrook, and ported to MTS and enhanced by Brill at the University of Michigan. Textform, a text-processing program developed at the University of Alberta's Computing Centre to support device-independent output to a wide range of devices from line printers, to the Xerox 9700 page printers, to advanced phototypesetting equipment using fixed-width and proportional fonts. VSS, a simulator developed at the University of British Columbia's Computing Centre that makes it possible to run OS/MFT, OS/MVT, VS1, and MVS application programs under MTS. Programs that run under MTS The following are some of the notable programs ported to MTS from other systems: APL VS, IBM's APL VS compiler program product. ASMH, a version of IBM's 370 assembler with enhancements from SLAC and MTS. COBOL VS, IBM's COBOL VS compiler program product. CSMP, IBM's Continuous System Modeling Program. Fortran, the G, H, and VS compilers from IBM. GASP, a FORTRAN-based discrete simulation package. Kermit, Columbia University's communications software and protocol. MPS, IBM's Mathematical Programming System/360. NASTRAN, a finite element analysis program originally developed by and for NASA. OSIRIS (Organized Set of Integrated Routines for Investigations with Statistics), a collection of statistical analysis programs developed at the University of Michigan's Institute for Social Research (ISR). PascalSB, the Stony Brook Pascal compiler. Pascal/SLAC, the Pascal compiler from the Stanford Linear Accelerator Center. Pascal VS, IBM's Pascal VS compiler program product. PL/I Optimizing Compiler from IBM. REDUCE2, an algebraic language implemented in LISP. SAS (Statistical Analysis System). SHAZAM, a package for estimating, testing, simulating, and forecasting econometric and statistical models. SIMSCRIPT II.5, a free-form, English-like, general-purpose discrete event simulation language. SPIRES (Stanford Public Information Retrieval System), a database management system. SPSS (Statistical Package for the Social Sciences). TELL-A-GRAPH, a proprietary conversational graphics program from ISSCO of San Diego, CA. TeX, Don Knuth's text-processing program. 
TROLL, a package for econometric modeling and statistical analysis. Programming languages available under MTS MTS supports a rich set of programming languages, some developed for MTS and others ported from other systems: ALGOL W ALGOL 68 APL (IBM's VS APL) Assembler (360/370: G, H, Assist; DEC PDP-11) BASIC (BASICUM), WBASIC BCPL (Basic Combined Programming Language) C COBOL (ANSI, VS, WATBOL) EXPL (Extended XPL) FORTRAN (G, H, VS, WATFOR, WATFIV) GASP (A FORTRAN-based discrete simulation language) GOM (Good Old Mad, the 7090 Michigan Algorithm Decoder ported to the S/370 architecture) GPSS/H (General Purpose Simulation System V) ICON IF (Interactive FORTRAN, an incremental compiler and environment for executing and debugging FORTRAN programs, developed at the University of British Columbia) MAD/I (an expanded version of the Michigan Algorithm Decoder for the IBM S/360 architecture that is not compatible with the original 7090 version of MAD, see also GOM above) MPS, IBM's Mathematical Programming System/360 MTS LISP 1.5 (a new implementation of LISP 1.5 developed at the UM's Mental Health Research Institute, MHRI) Pascal (VS, JB) PIL, PIL/2 (Pitt Interpretive Language) PL/I (F and OPT from IBM, PL/C from Cornell University) PL/M PL360 Plus (A "Pascal-like" system implementation language from the University of British Columbia (UBC) based on the SUE system language developed at the University of Toronto, c. 1971) Prolog Simula SUE SNOBOL4 (String Oriented Symbolic Language) SPITBOL (Speedy Implementation of SNOBOL) UMIST (University of Michigan Interpretive String Translator, based on TRAC) System architecture UMMPS, the supervisor, has complete control of the hardware and manages a collection of job programs. One of the job programs is MTS, the job program with which most users interact. MTS operates as a collection of command language subsystems (CLSs). One of the CLSs allows for the execution of user programs. MTS provides a collection of system subroutines that are available to CLSs, user programs, and MTS itself. Among other things, these system subroutines provide standard access to Device Support Routines (DSRs), the components that perform device-dependent input/output. Manuals and documentation The lists that follow are quite University of Michigan-centric. Most other MTS sites used some of this material, but they also produced their own manuals, memos, reports, and newsletters tailored to the needs of their site. End-user documentation The manual series MTS: The Michigan Terminal System was published from 1967 through 1991, in volumes 1 through 23, which were updated and reissued irregularly. Initial releases of the volumes did not always occur in numeric order, and volumes occasionally changed names when they were updated or republished. In general, the higher the number, the more specialized the volume. The earliest versions of MTS Volumes I and II had a different organization and content from the MTS volumes that followed and included some internal as well as end-user documentation. 
The second edition from December 1967 covered: MTS Volume I: Introduction; Concepts and facilities; Calling conventions; Batch, Terminal, Tape, and Data Concentrator user's guides; Description of UMMPS and MTS; Files and devices; Command language; User Programs; Subroutine and macro library descriptions; Public or library file descriptions; and Internal specifications: Dynamic loader (UMLOAD), File and Device Management (DSRI prefix and postfix), Device Support Routines (DSRs), and File routines MTS Volume II: Language processor descriptions: F-level assembler; FORTRAN G; IOH/360; PIL; SNOBOL4; UMIST; WATFOR; and 8ASS (PDP-8 assembler) The following MTS Volumes were published by the University of Michigan Computing Center and are available as PDFs: MTS Volume 1: The Michigan Terminal System, 1991 MTS Volume 2: Public File Descriptions, 1990 MTS Volume 3: Subroutine and Macro Descriptions, 1989 MTS Volume 4: Terminals and Networks in MTS, 1988 (earlier Terminals and Tapes) MTS Volume 5: System Services, 1985 MTS Volume 6: FORTRAN in MTS, 1988 MTS Volume 7: PL/I in MTS, 1985 MTS Volume 8: LISP and SLIP in MTS, 1983 MTS Volume 9: SNOBOL4 in MTS, 1983 MTS Volume 10: BASIC in MTS, 1980 MTS Volume 11: Plot Description System, 1985 MTS Volume 12: PIL/2 in MTS, 1974 MTS Volume 13: The Symbolic Debugging System, 1985 (earlier Data Concentrator User's Guide) MTS Volume 14: 360/370 Assemblers in MTS, 1986 MTS Volume 15: FORMAT and TEXT360, 1988 MTS Volume 16: ALGOL W in MTS, 1980 MTS Volume 17: Integrated Graphics System, 1984 MTS Volume 18: MTS File Editor, 1988 MTS Volume 19: Tapes and Floppy Disks, 1993 MTS Volume 20: PASCAL in MTS, 1989 MTS Volume 21: MTS Command Extensions and Macros, 1991 MTS Volume 22: Utilisp in MTS, 1988 MTS Volume 23: Messaging and Conferencing in MTS, 1991 MTS Reference Summary, a ~60-page, 3" x 7.5" pocket guide to MTS, Computing Center, University of Michigan The Taxir primer: MTS version, Brill, Robert C., Computing Center, University of Michigan Fundamental Use of the Michigan Terminal System, Thomas J. Schriber, 5th Edition (revised), Ulrich's Books, Inc., Ann Arbor, MI, 1983, 376 pp. Digital computing, FORTRAN IV, WATFIV, and MTS (with *FTN and *WATFIV), Brice Carnahan and James O Wilkes, University of Michigan, Ann Arbor, MI, 1968–1979; 1976 edition, 538 pp. Documentation for MIDAS, Michigan Interactive Data Analysis System, Statistical Research Laboratory, University of Michigan OSIRIS III MTS Supplement, Center for Political Studies, University of Michigan Various aspects of MTS at the University of Michigan were documented in a series of Computing Center Memos (CCMemos) which were published irregularly from 1967 through 1987, numbered 2 through 924, though not necessarily in chronological order. Numbers 2 through 599 are general memos about various software and hardware; the 600 series are the Consultant's Notes series—short memos for beginning to intermediate users; the 800 series covers issues relating to the Xerox 9700 printer, text processing, and typesetting; and the 900 series covers microcomputers. There was no 700 series. In 1989 this series continued as Reference Memos with less of a focus on MTS. A long run of newsletters targeted to end-users at the University of Michigan, with the titles Computing Center News, Computing Center Newsletter, U-M Computing News, and the Information Technology Digest, was published starting in 1971. 
There was also introductory material presented in the User Guide, MTS User Guide, and Tutorial series, including: Getting connected—Introduction to Terminals and Microcomputers Introduction to the Computing Center Introduction to Computing Center services Introduction to Database Management Systems on MTS Introduction to FORMAT Introduction to Magnetic Tapes Introduction to MTS Introduction to the MTS File Editor Introduction to Programming and Debugging in MTS Introduction to Terminals Introduction to Terminals and Microcomputers Internals documentation The following materials were not widely distributed, but were included in MTS Distributions: MTS Operators Manual MTS Message Manual MTS Volume n: Systems Edition MTS Volume 99: Internals Documentation Supervisor Call Descriptions Disk Disaster Recovery Procedures A series of lectures describing the architecture and internal organization of the Michigan Terminal System given by Mike Alexander, Don Boettner, Jim Hamilton, and Doug Smith (4 audio tapes, lecture notes, and transcriptions) Distribution The University of Michigan released MTS on magnetic tape on an irregular basis. There were full and partial distributions, where full distributions (D1.0, D2.0, ...) included all of the MTS components and partial distributions (D1.1, D1.2, D2.1, D2.2, ...) included just the components that had changed since the last full or partial distribution. Distributions 1.0 through 3.1 supported the IBM S/360 Model 67, distribution 3.2 supported both the IBM S/360-67 and the IBM S/370 architecture, and distributions D4.0 through D6.0 supported just the IBM S/370 architecture and its extensions. MTS distributions included the updates needed to run licensed program products and other proprietary software under MTS, but not the base proprietary software itself, which had to be obtained separately from the owners. Except for IBM's Assembler H, none of the licensed programs were required to run MTS. The last MTS distribution was D6.0, released in April 1988. It consisted of 10,003 files on six 6250 bpi magnetic tapes. After 1988, distribution of MTS components was done in an ad hoc fashion using network file transfer. To allow new sites to get started from scratch, two additional magnetic tapes were made available: an IPLable boot tape that contained a minimalist version of MTS, plus the DASDI and DISKCOPY utilities that could be used to initialize and restore a one-disk-pack starter version of MTS from the second magnetic tape. In the earliest days of MTS, the standalone TSS DASDI and DUMP/RESTORE utilities, rather than MTS itself, were used to create the one-disk starter system. There were also less formal redistributions where individual sites would send magnetic tapes containing new or updated work to a coordinating site. That site would copy the material to a common magnetic tape (RD1, RD2, ...), and send copies of the tape out to all of the sites. The contents of most of the redistribution tapes seem to have been lost. Today, complete materials from the six full and the ten partial MTS distributions as well as from two redistributions created between 1968 and 1988 are available from the Bitsavers Software archive and from the University of Michigan's Deep Blue digital archive. Working with the D6.0 distribution materials, it is possible to create an IPLable version of MTS. A new D6.0A distribution of MTS makes this easier. D6.0A is based on the D6.0 version of MTS from 1988 with various fixes and updates to make operation under Hercules in 2012 smoother. 
In the future, an IPLable version of MTS will be made available based upon the version of MTS that was in use at the University of Michigan in 1996 shortly before MTS was shut down. Licensing As of December 22, 2011, the MTS Distribution materials are freely available under the terms of the Creative Commons Attribution 3.0 Unported License (CC BY 3.0). In its earliest days MTS was made available for free without the need for a license to sites that were interested in running MTS and which seemed to have the knowledgeable staff required to support it. In the mid-1980s licensing arrangements were formalized with the University of Michigan acting as agent for and granting licenses on behalf of the MTS Consortium. MTS licenses were available to academic organizations for an annual fee of $5,000, to other non-profit organizations for $10,000, and to commercial organizations for $25,000. The license restricted MTS from being used to provide commercial computing services. The licensees received a copy of the full set of MTS distribution tapes, any incremental distributions prepared during the year, written installation instructions, two copies of the current user documentation, and a very limited amount of assistance. Only a few organizations licensed MTS. Several licensed MTS in order to run a single program such as CONFER. The fees collected were used to offset some of the common expenses of the MTS Consortium. See also Merit Network Time-sharing system evolution References External links Archives MTS Archive, a collection of documents, photographs, movies, and other materials related to MTS and the organizations and people that developed and used it MTS distribution archive at Bitsavers' MTS distribution archive at the University of Michigan's Deep Blue digital archive MTS D6.0A - A pre-built version of MTS for use with the Hercules S/370 emulator, available from the MTS Archive MTS PDF Document Archive at Bitsavers' The UM Computing Center Public Collection at the Hathi Trust Digital Library contains full text versions of over 250 documents related to MTS that are available for online viewing. The Computing Center collection in the University of Michigan's Deep Blue digital archive contains over 50 items, mostly PDFs, but also a few videos, related to MTS and the U-M Computing Center. Papers A Comparative Study of the Michigan Terminal System (MTS) with Other Time Sharing Systems for the IBM 360/67 Computer, Elvert F. Hinson, Master's thesis, Naval Postgraduate School, Monterey, CA., December 1971 "Measurement and Performance of a Multiprogramming System", B. Arden and D. Boettner, Proceedings of the 2nd ACM Symposium on Operating Systems Principles, pp. 130–46, October 1969 Merit Network History MTS Bibliography, a list of published literature about MTS "MTS - Michigan Terminal System", Donald W. Boettner and Michael T. Alexander, ACM SIGOPS Operating Systems Review, Volume 4, Issue 4 (December 1970) "The Michigan Terminal System", Donald W. Boettner and Michael T. Alexander, Proceedings of the IEEE, Volume 63, Issue 6 (June 1975), pp. 912–918 "A Faster Cratchit - The History of Computing at Michigan", Vol. XXVII, No. 
1 (January 1976), U-M Research News, 24 pages Web sites MTS History, collected by former University of Michigan Computing Center staff member Tom Valerio Personal perspective on MTS by Dan Boulet, a student and later Computing Services staff member at the University of Alberta Personal reflections on MTS by Mark Riordan of Michigan State University's Computer Laboratory Several articles from the May 13, 1996 issue of the University of Michigan Information Technology Digest, Volume 5, No. 5, giving the history of and reminiscences about MTS, Merit, and UMnet on the eve of MTS's retirement at the University of Michigan, preserved on Web pages created by Josh Simon Try-MTS.com, a web site showing how to run MTS under the Hercules emulator, with tutorials on using the system and on several of the programming languages available on MTS Public MTS Terminal, where one can log on and look around as a student would have in the 1990s Time-sharing operating systems IBM mainframe operating systems Discontinued operating systems Formerly proprietary software History of software University of Michigan 1967 software
Operating System (OS)
650
Microsoft Venus Microsoft Venus was an aborted venture by Microsoft into the low-end personal computing market in the People's Republic of China. The product, a set-top operating system designed to work with low-end televisions (somewhat similar to MSN TV in the United States), was announced by then-Microsoft chairman Bill Gates on March 10, 1999 in Shenzhen, and was to be made available by January 2000; however, it never made it out of Microsoft's lab, slowly dying less than a year after its announcement. History Relatively little is known about Microsoft Venus, since the project never made it beyond the prototype stage and was designed to be exclusive to the People's Republic of China. What is known of the project appears to show that Microsoft designed Venus in response to a largely untapped Chinese computing market; with their low-end set-top box, designed to be a combination of Internet accessibility and the basic features of a personal computer (such as a rudimentary word processor), they hoped to tap into this market, gaining market share and profit in the world's fastest-growing economy in the process. Despite initial support from the Chinese government (which included, but was not limited to, discounts and assorted subsidies) and lucrative distribution agreements with Acer, Philips, Lenovo, and a few other companies, the planned venture faced many problems, the largest of which was a massive cost overrun. The units were reported to sell for up to 3,000 yuan (or US$360 at the time), which was then a large sum for the general Chinese public. As 1999 progressed and 2000 began, the Chinese government's relations with Microsoft continued to sour over production costs of the Venus. This tension reached a fever pitch in January 2000, when the Chinese government ordered Windows 2000 uninstalled from all ministerial computers, opting to use the locally produced Red Flag Linux instead. January 2000 passed without a Venus release, and the product remained "vaporware". After Microsoft's aforementioned brief showdown with the Chinese government that same month, all talk of Venus appears to have ceased in the news media. Venus was never released, and with it went Microsoft's attempt at selling low-cost computing to the Chinese masses. Aftermath While Microsoft's first attempt at bringing computing to the Chinese masses was a failure, later events have proved more favorable to the company. Lenovo's purchase of IBM's Windows-based computing division in 2005 gave the consumer electronics corporation, which is partially owned by the Chinese government, a strong presence in the Windows computing market both at home in China and abroad that has persisted since the initial acquisition. However, Lenovo's computers are full-fledged PCs, and do not bear a strong resemblance to the all-in-one television component that Microsoft envisioned becoming a large success in China. Specifications According to Reuters' account of the initial unveiling of the Venus prototype, the box was intended to combine an Internet browser, a rudimentary word processing and graphic design suite, and a CD player into one set-top television component. Gates touted the computer as having "learning capabilities". This ability was intended to be combined with a contract with forty Chinese governmental ministries to produce an application known as "Government Online", which would have served the stated purpose of bringing the Chinese bureaucracy closer to the masses. 
Microsoft reportedly intended for the Venus boxes to run on a special version of Windows CE, fine-tuned for display on television sets. Exact processor speeds were never released by Microsoft, but were believed to be comparable to those of low-end Windows computers at that time. See also MSN TV Science and technology in China Set-top box References Set-top box Microsoft operating systems Microsoft hardware Uncompleted Microsoft initiatives Windows CE
Operating System (OS)
651
EPOC (operating system) EPOC is a mobile operating system developed by Psion, a British company founded in 1980. It began as a 16-bit operating system (OS) for Psion's own x86-compatible devices, and was later replaced by a 32-bit system for x86 and ARM. Psion licensed the 32-bit system to other hardware makers, such as Ericsson. To distinguish it from the 16-bit OS, the 32-bit version was sometimes called EPOC32. Technologically, it was a major departure from the 16-bit version (which came to be called EPOC16 or SIBO). In 1998, the 32-bit version was renamed Symbian OS. After Nokia acquired the rights to Symbian in 2010, they published Symbian's source code under the Eclipse Public License. In 2011, Nokia rescinded the open-source license for subsequent releases of the software. Name The name EPOC comes from the word epoch (the beginning of an era). The name was shortened to four letters to accord with the names of such software innovations as Unix and Mach. Initially the operating system was capitalised as Epoc rather than 'EPOC', since it is not an acronym. The change to all capital letters was made on the recommendation of Psion's marketing department. Thereafter, a rumour circulated in the technical press that EPOC was an acronym for "Electronic Piece of Cheese". When Psion started developing a 32-bit operating system in 1994, they kept it under the EPOC brand. To avoid confusion within the company, they started calling the old system EPOC16, and the new one EPOC32. Then it became conventional within the company to refer to EPOC16 as SIBO, which was the codename of Psion's 16-bit mobile computing initiative. This change freed them to use the name EPOC for EPOC32. In June 1998, Psion formed a limited company, named Symbian Ltd., with the telecommunications corporations Nokia, Ericsson, and Motorola. By buying into the new firm, the telecommunications corporations each acquired a stake in Psion's EPOC operating system and other intellectual property. Symbian Ltd. changed the name of EPOC/EPOC32 to Symbian OS, which debuted in November 2000 on the Nokia 9210 Communicator smartphone. EPOC16 (1989–1998) EPOC was developed at Psion, a software and mobile-device company founded in London in 1980. The company released its first pocket computer in 1984: an 8-bit device named the Psion Organiser. In 1986 they released a series of improved models under the Organiser II brand, but the 8-bit era was ending. Psion saw a need to develop a 16-bit operating system to drive their next generation of devices. First, however, they needed to engineer a 16-bit single-board computer, something that was extremely difficult at the time. They codenamed the project SIBO, for "single-board organiser" or "sixteen-bit organiser". To develop the SIBO hardware and software, they needed samples of the 16-bit microprocessors they would be programming; but it took more than a year to secure the chips, which caused a significant delay. By 1987, development of EPOC was underway: it was a single-user, preemptive multitasking operating system designed to run in read-only memory (ROM). The operating system and its programmes were written in Intel 8086 assembly language and C. When the operating system started, it opened the pre-installed programmes in advance so that the system could switch between them quickly. To enable users to write and run their own programmes, EPOC featured an updated version of the Open Programming Language (OPL), which was first published with the Psion Organiser. 
OPL was a simple interpreted language somewhat like BASIC. In 1989, Psion released the first 16-bit computers to be equipped with the new operating system: the MC200, MC400, and MC600 notebooks. Each of these had an Intel 80C86 processor, but differed in some other specifications, such as memory capacity. Among the later SIBO devices were the Psion Series 3 (1991), 3A (1993), 3C (1996), Workabout series, and the Siena 512K model (1996). The final EPOC device was the Psion Series 3mx (1998). The user interface differed by device. The notebook computers had a windows, icons, menus, pointer (WIMP) graphical user interface (GUI). The handheld computers, which had smaller screens and no pointing device, accepted input from a keyboard or a stylus. On-screen, programmes were represented by icons, but on smaller devices a user could also access them via specialised buttons. EPOC32 (1997–2000) In parallel with the production of their 16-bit devices, Psion had been developing a 32-bit version of EPOC since late 1994. The move to 32 bits was necessary to remain competitive, and Psion wanted to have a mobile operating system they could license to other companies. Thus, the system needed to be more portable than their prior systems. For the 32-bit operating system, the engineers wrote a new object-oriented codebase in C++. During the transition period, the old system came to be called EPOC16, and the new one EPOC32. Where EPOC16 was designed specifically for the Intel 80186 platform, EPOC32 was built for ARM, a reduced instruction set computer (RISC) platform, whose instruction set is smaller and whose instructions are of more uniform length than those of a complex instruction set computer (CISC). Like EPOC16, EPOC32 was a single-user, pre-emptive multitasking operating system. It also featured memory protection, which was an essential feature for modern operating systems. Psion licensed EPOC32 to other device manufacturers, and made it possible for manufacturers to change or replace the system's GUI. Because of the licensing arrangement, Psion considered spinning off their software division as Psion Software. Psion's own PDAs had a GUI named Eikon. Visually, Eikon was a refinement of design choices from Psion's 8- and 16-bit devices. Releases 1–4 Early iterations of EPOC32 were codenamed Protea. The first published version, called Release 1, appeared on the Psion Series 5 ROM v1.0 in June 1997. Release 2 was never published, but an updated ROM (version 1.1) for the Series 5 featured Release 3. The Series 5 used Psion's new user interface, Eikon. One of the first EPOC licensees was a short-lived company named Geofox; they halted production after selling fewer than 1,000 units. Another licensee, Oregon Scientific, released a budget device named Osaris; it was the only EPOC device to ship with Release 4. Release 5 EPOC Release 5 premiered in March 1999. It ran on ARMv4 processors, such as the StrongARM series. In addition to its email, messaging, and data synchronisation features, it introduced support for the Java Development Kit, which made it capable of running a wider variety of programmes. In 2000, EPOC's GUI variations were replaced with three reference interfaces: Crystal was for devices with a small keyboard; Quartz was for "communicator" devices (which had some telecommunication features, and tended to be equipped with a thumb keyboard); and Pearl was for mobile phones. Each classification supported VGA graphics. 
Psion deployed Release 5 on their 5mx series (1999), Revo (1999), netBook (1999), Series 7 (1999), Revo Plus (2000), and netPad (2001) devices. Ericsson rebranded the Psion Series 5mx as the MC218, and SONICblue rebranded the Revo as the Diamond Mako; like the original devices, the rebranded versions were released in 1999. The Ericsson R380 smartphone, released in November 2000, was the first device to be distributed with EPOC Release 5.1. This release was also known as ER5u; the u indicated that the system supported the Unicode system of text encoding: an important feature for the representation of diverse languages. Psion developed an ER5u-enabled device codenamed "Conan", but it did not advance beyond the prototype stage. The device was intended to be a Bluetooth-enabled successor to the Revo. Symbian (2000–2012) In June 1998, Psion Software became Symbian Ltd., a major joint venture between Psion and phone manufacturers Ericsson, Motorola, and Nokia. The next release of EPOC32, Release 6, was rebranded Symbian OS. It decoupled the user interface from the underlying operating system, which afforded device manufacturers the ability (or burden) to implement a graphical interface on their devices. The final version of Symbian OS to be released was v10.1; the final update was published in 2012. References External links Symbian OS Mobile operating systems 1989 software Computer-related introductions in 1989 Telecommunications-related introductions in 1989
Operating System (OS)
652
IBM System/360 Model 22 The IBM System/360 Model 22 was an IBM mainframe from the System/360 line. History The Model 22 was a cut-down (economy) version of the Model 30 computer, aimed at bolstering the low end of the range. The 360/22 was announced less than a year after the June 22, 1970, withdrawal of the 360/30, and it lasted six and a half years, from April 7, 1971, to October 7, 1977. Comparisons Models Only two models were offered, with 24K or 32K of memory. Notes References System 360 Model 22
Operating System (OS)
653
System Management BIOS In computing, the System Management BIOS (SMBIOS) specification defines data structures (and access methods) that can be used to read management information produced by the BIOS of a computer. This eliminates the need for the operating system to probe hardware directly to discover what devices are present in the computer. The SMBIOS specification is produced by the Distributed Management Task Force (DMTF), a non-profit standards development organization. The DMTF estimates that two billion client and server systems implement SMBIOS. The DMTF released version 3.5.0 of the specification on September 22, 2021. SMBIOS was originally known as Desktop Management BIOS (DMIBIOS), since it interacted with the Desktop Management Interface (DMI). History Version 1 of the Desktop Management BIOS (DMIBIOS) specification was produced by Phoenix Technologies in or before 1996. Version 2.0 of the Desktop Management BIOS specification was released on March 6, 1996 by American Megatrends (AMI), Award Software, Dell, Intel, Phoenix Technologies, and SystemSoft Corporation. It introduced 16-bit plug-and-play functions used to access the structures from Windows 95. The last version to be published directly by vendors was 2.3 on August 12, 1998. The authors were American Megatrends, Award Software, Compaq, Dell, Hewlett-Packard, Intel, International Business Machines (IBM), Phoenix Technologies, and SystemSoft Corporation. Circa 1999, the Distributed Management Task Force (DMTF) took ownership of the specification. The first version published by the DMTF was 2.3.1 on March 16, 1999. At approximately the same time Microsoft started to require that OEMs and BIOS vendors support the interface/data-set in order to have Microsoft certification. Version 3.0.0, introduced in February 2015, added a 64-bit entry point, which can coexist with the previously defined 32-bit entry point. Version 3.4.0 was released in August 2020. Version 3.5.0 was released in September 2021. Contents The SMBIOS table consists of an entry point (two types are defined, 32-bit and 64-bit) and a variable number of structures that describe platform components and features. These structures are occasionally referred to as "tables" or "records" in third-party documentation. Structure types As of version 3.3.0, the SMBIOS specification defines structure types describing, among other things, the BIOS, the system, the baseboard, the chassis, processors, caches, ports, slots, and memory devices. Accessing SMBIOS data The EFI configuration table (EFI_CONFIGURATION_TABLE) contains entries pointing to the SMBIOS 2 and/or SMBIOS 3 tables. There are several ways to access the data, depending on the platform and operating system. From UEFI In the UEFI Shell, the smbiosview command can retrieve and display the SMBIOS data. One can often enter the UEFI shell by entering the BIOS, and then selecting the shell as a boot option (as opposed to a DVD drive or hard drive). From Linux For Linux, the dmidecode utility can be used. From Windows Microsoft specifies WMI as the preferred mechanism for accessing SMBIOS information from Microsoft Windows. On Windows systems that support it (XP and later), some SMBIOS information can be viewed with either the WMIC utility with 'BIOS'/'MEMORYCHIP'/'BASEBOARD' and similar parameters, or by looking in the Windows Registry under HKLM\HARDWARE\DESCRIPTION\System. Various software utilities can retrieve raw SMBIOS data, including FirmwareTablesView and AIDA64. 
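As a concrete illustration of the table layout described above, the sketch below walks the raw SMBIOS structure table that the Linux kernel exposes under /sys/firmware/dmi/tables/DMI (reading it requires root privileges). It is a minimal parser only, not a substitute for dmidecode, and the function name is invented for illustration.

```python
# Minimal sketch: iterate over raw SMBIOS structures. Each structure begins
# with a 4-byte header (type, length, handle); "length" covers only the
# formatted area, and the unformatted string-set that follows it is
# terminated by two consecutive NUL bytes.
import struct

def walk_smbios(path="/sys/firmware/dmi/tables/DMI"):
    with open(path, "rb") as f:
        data = f.read()
    offset = 0
    while offset + 4 <= len(data):
        stype, length, handle = struct.unpack_from("<BBH", data, offset)
        strings, cursor = [], offset + length
        while cursor < len(data) and data[cursor] != 0:
            end = data.index(b"\x00", cursor)
            strings.append(data[cursor:end].decode("ascii", "replace"))
            cursor = end + 1
        yield stype, handle, strings
        # A structure with no strings ends with two NULs; otherwise one
        # extra NUL terminates the string-set.
        offset = cursor + (2 if cursor == offset + length else 1)
        if stype == 127:  # type 127 is the end-of-table marker
            break

for stype, handle, strings in walk_smbios():
    print(f"type={stype:3d} handle={handle:#06x} strings={strings}")
```

Generating SMBIOS data Table and structure creation is normally up to the system firmware/BIOS. 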
The UEFI Platform Initialization (PI) specification includes an SMBIOS protocol (EFI_SMBIOS_PROTOCOL) that allows components to submit SMBIOS structures for inclusion, and enables the producer to create the SMBIOS table for a platform. Platform virtualization software can also generate SMBIOS tables for use inside VMs, for instance QEMU. If the SMBIOS data is not generated and filled correctly, then the machine may behave unexpectedly. For example, a Mini PC that advertises Chassis Information | Type = Tablet may behave unexpectedly when running Linux. A desktop manager like GNOME will attempt to monitor the non-existent battery and shut down the screen and network interfaces when the missing battery's reported charge drops below a threshold. Additionally, if the Chassis Information | Manufacturer field is not filled in correctly, then work-arounds for the incorrect Type = Tablet problem cannot be applied. See also Web-Based Enterprise Management (WBEM) References External links SMBIOS Demystified, August 1, 2006, by Kiran Sanjeeva DMTF standards BIOS Computer hardware standards
Operating System (OS)
654
MacOS malware macOS malware includes viruses, trojan horses, worms, and other types of malware that affect macOS, Apple's current operating system for Macintosh computers. macOS (previously Mac OS X and OS X) is said to rarely suffer malware or virus attacks, and has been considered less vulnerable than Windows. System software updates are released frequently to resolve vulnerabilities. Utilities are also available to find and remove malware. History Early examples of macOS malware include Leap (discovered in 2006, also known as Oompa-Loompa) and RSPlug (discovered in 2007). An application called MacSweeper (2009) misled users about malware threats in order to take their credit card details. The trojan MacDefender (2011) used a similar tactic, combined with displaying popups. In 2012, a worm known as Flashback appeared. Initially, it infected computers through fake Adobe Flash Player install prompts, but it later exploited a vulnerability in Java to install itself without user intervention. The malware forced Oracle and Apple to release bug fixes for Java to remove the vulnerability. Bit9 and Carbon Black reported at the end of 2015 that Mac malware had been more prolific that year than ever before, including: Lamadai – Java vulnerability Appetite – Trojan horse targeting government organizations Coin Thief – Stole bitcoin login credentials through cracked Angry Birds applications A trojan known as Keydnap, which places a backdoor on victims' computers, first appeared in 2016. Adware is also a problem on the Mac, with software like Genieo, which was released in 2009, inserting ads into webpages and changing users' homepage and search engine. Malware has also been spread on Macs through Microsoft Word macros. Ransomware In March 2016, Apple shut down the first ransomware attack targeted at Mac users, which encrypted users' confidential information. It was known as KeRanger. After completing the encryption process, KeRanger demanded that victims pay one bitcoin for the user to recover their credentials. References Malware by platform
Operating System (OS)
655
MeeGo MeeGo is a discontinued Linux distribution hosted by the Linux Foundation, using source code from the operating systems Moblin (produced by Intel) and Maemo (produced by Nokia). Primarily targeted at mobile devices and information appliances in the consumer electronics market, MeeGo was designed to act as an operating system for hardware platforms such as netbooks, entry-level desktops, nettops, tablet computers, mobile computing and communications devices, in-vehicle infotainment devices, SmartTV / ConnectedTV, IPTV boxes, smartphones, and other embedded systems. Nokia wanted to make MeeGo its primary smartphone operating system in 2010, but after a change in direction work was stopped in February 2011, leaving Intel alone in the project. The Linux Foundation canceled MeeGo in September 2011 in favor of Tizen, which Intel then joined in collaboration with Samsung. A community-driven successor called Mer was formed that year. A Finnish start-up, Jolla, picked up Mer to develop a new operating system, Sailfish OS, and launched the Jolla Phone smartphone at the end of 2013. Another Mer derivative called Nemo Mobile was also developed. History MeeGo was first announced at Mobile World Congress in February 2010 by Intel and Nokia in a joint press conference. The stated aim was to merge the efforts of Intel's former Moblin project and Nokia's former Maemo project into one new common project that would drive a broad third-party application ecosystem. According to Intel, MeeGo was developed because Microsoft did not offer comprehensive Windows 7 support for the Atom processor. On 16 February 2010, a tech talk notice was posted about the Maemo development project founded in 2009 and code-named Harmattan, which was originally slated to become Maemo 6. The notice stated that Harmattan was now considered a MeeGo instance (though not a MeeGo product), and that Nokia was giving up the Maemo branding for Harmattan on the Nokia N9 and beyond. (Any previous Maemo versions up to Maemo 5, a.k.a. Fremantle, will still be referred to as Maemo.) In addition, it was made clear that only the naming was given up, while development of Harmattan would continue so that its schedules would be met. Aminocom and Novell also played a large part in the MeeGo effort, working with the Linux Foundation on their build infrastructure and official MeeGo products. Amino was responsible for extending MeeGo to TV devices, while Novell was increasingly introducing technology that was originally developed for openSUSE (including Open Build Service, ZYpp for package management, and other system management tools). In November 2010, AMD also joined the alliance of companies that were actively developing MeeGo. The project setup changed noticeably on 11 February 2011, when Nokia officially announced that it would switch to Windows Phone 7, thus abandoning MeeGo and the partnership. Nokia CEO Stephen Elop said in an interview with Engadget: "What we’re doing is not thinking of MeeGo as the Plan B. We’re thinking about MeeGo and related development work as what’s the next generation." Nokia did eventually release one MeeGo smartphone that year running "Harmattan", the Nokia N9. On 27 September 2011, it was announced by Intel employee Imad Sousou that, in collaboration with Samsung, MeeGo would be replaced by Tizen during 2012. Community developers from the Mer (software distribution) project, however, started to continue MeeGo without Intel and Nokia. 
Some of the former MeeGo developers from Nokia later founded the company Jolla, which in time brought out an OS platform based on MeeGo and its free successor Mer, called Sailfish OS. Overview MeeGo is intended to run on a variety of hardware platforms including hand-helds, in-car devices, netbooks, and televisions. All platforms share the MeeGo core, with different "User Experience" ("UX") layers for each type of device. MeeGo was designed by combining the best of both Intel's Fedora-based Moblin and Nokia's Debian-based Maemo. When it was first announced, the then President and CEO of Nokia, Olli-Pekka Kallasvuo, said that MeeGo would create an ecosystem that would be the best among operating systems and would represent players from different countries. System requirements MeeGo provides support for both ARM and Intel x86 processors with SSSE3 enabled, and uses btrfs as the default file system. User interfaces Within the MeeGo project there are several graphical user interfaces – internally called User Experiences ("UX"). Netbook The Netbook UX is a continuation of the Moblin interface. It is written using the Clutter-based Mx toolkit, and uses the Mutter window manager. The Samsung NP-N100 netbook uses MeeGo as its operating system. MeeGo's netbook version uses several Linux applications in the background, such as Evolution (email, calendar), Empathy (instant messaging), Gwibber (microblogging), Chromium (web browser), and Banshee (multimedia player), all integrated into the graphical user interface. Handset The Handset UX is based on Qt, with GTK+ and Clutter included to provide compatibility for Moblin applications. To support the hundreds of Hildon-based Maemo applications, users have to install the Hildon library ported by the maemo.org community. Depending on the device, applications will be provided from either the Intel AppUp or the Nokia Ovi digital software distribution systems. The MeeGo Handset UX's "Day 1" prerelease was on 30 June 2010. The preview was initially available for the Aava Mobile Intel Moorestown platform, and a 'kickstart' file was provided for developers to build an image for the Nokia N900. Smartphone MeeGo OS v1.2 "Harmattan" is used in the Nokia N9 and N950 phones. Tablet Intel demonstrated the Tablet UX on a Moorestown-based tablet PC at COMPUTEX Taipei in early June 2010. Since then, some information appeared on the MeeGo website indicating that there would be a Tablet UX as part of the MeeGo project, but it was not known whether this UX would be the one demonstrated by Intel. This Tablet UX was to be fully free like the rest of the MeeGo project and was to be coded with Qt and the MeeGo Touch Framework. Intel revealed interest in combining Qt with Wayland instead of X11 in MeeGo Touch in order to utilize the latest graphics technologies supported by the Linux kernel, which should improve user experiences and reduce system complexity. Minimum hardware requirements were unknown. The WeTab ran MeeGo with a custom user interface and was made available in September 2010. In-Vehicle infotainment The GENIVI Alliance, a consortium of several car makers and their industry partners, uses Moblin with Qt as the base for its 'GENIVI 1.0 Reference Platform' for In-Vehicle Infotainment (IVI) and automotive navigation systems, as a unified mobile computing platform. Graham Smethurst of the GENIVI Alliance and BMW Group announced in April 2010 the switch from Moblin to MeeGo. 
Smart TV Intel planned to develop a version of MeeGo for IPTV set-top boxes, but later cancelled the plan. Licensing The MeeGo framework consists of a wide variety of original and upstream components, all of which are licensed under licenses certified by the Open Source Initiative (such as the GNU General Public License). In order to allow hardware vendors to personalize their devices' user experiences, the project's license policy requires that MeeGo's reference User Experience subsystems be licensed under a permissive free software license – except for libraries that extend MeeGo APIs (which were licensed under the GNU Lesser General Public License to help discourage fragmentation), or applications (which can be licensed separately). Technical foundations The MeeGo Core integrates elements of two other Linux distributions: Maemo (a distribution which Nokia derived from Debian) and Moblin (which Intel derived from Fedora). MeeGo uses RPM software repositories. It is one of the first Linux distributions to deploy Btrfs as the default file system. Although most of the software in MeeGo uses the Qt widget toolkit, GTK+ is also supported. The final revision of MeeGo included Qt v4.7, Qt Mobility v1.0, and OpenGL ES v2.0. MeeGo also supports the Accounts & SSO, Maliit, and oFono software frameworks. MeeGo compiles software with the openSUSE Build Service. Derivatives As with Moblin before it, MeeGo also serves as a technology pool from which software vendors can derive new products. MeeGo/Harmattan Even though MeeGo was initiated as a collaboration between Nokia and Intel, the collaboration was formed when Nokia was already developing the next incarnation of its Maemo Linux distribution. As a result, the Maemo 6 base operating system was kept intact while the Handset UX was shared, with the name changed to "MeeGo/Harmattan". On 21 June 2011, Nokia announced its first MeeGo/Harmattan smartphone device, the Nokia N9. Mer The original Mer project was a free re-implementation of Maemo, ported to the Nokia Internet Tablet N800. When MeeGo first appeared, this work was discontinued and the development effort went to MeeGo. After both Nokia and then Intel abandoned MeeGo, the Mer project was revived and continued to develop the MeeGo codebase and tools. It is now being developed in the open by a meritocratic community. Mer provides a Core capable of running various UXs developed by various other projects, and will include maintained application development APIs, such as Qt, EFL, and HTML5/WAC. Some of the former MeeGo user interfaces have already been ported to run on top of Mer, such as the handset reference UX, now called Nemo Mobile. There are also a couple of new tablet UXes available, such as Cordia and Plasma Active. Mer is considered to be the legitimate successor of MeeGo, as the other follow-up project, Tizen (see below), changed the APIs fundamentally. Nemo Mobile Nemo Mobile is a community-driven operating system incorporating Mer, targeted at mobile phones and tablets. Sailfish OS Sailfish OS is an operating system developed by the Finnish startup Jolla. It also incorporates Mer. After Nokia abandoned its participation in the MeeGo project, the directors and core professionals from Nokia's N9 team left the company and together formed Jolla, to bring MeeGo back into the market mainstream. This effort eventually resulted in the creation of the Sailfish OS. 
The Sailfish OS and the Sailfish OS SDK are based on the core and the tools of the Mer core distribution, which is a revival of the core of the MeeGo project (a meritocratically governed successor of the MeeGo OS, but without its own graphical user interface and system kernel). Sailfish includes a multi-tasking user interface that Jolla intends to use to differentiate its smartphones from others and as a competitive advantage against devices that run Google's Android or Apple's iOS. Among other things, the Sailfish OS is characterised by: the ability to be used with a wide range of devices, in the same way as MeeGo; Jolla's continued use of the MeeGo APIs (via Mer), which consist of Qt 4.7 [Qt47], Qt Mobility 1.0 [QtMob], and OpenGL ES 2.0 [OGLES], with updated versions, like Qt 5.0, used or to be used in/via the Mer core; an in-house Jolla GUI (successor of the swipe UI) for smartphone devices, using QML, Qt, and HTML5; a core that, thanks to Mer, can run on various hardware like Intel, ARM, and any other platform whose kernel is able to work with the Mer core; open-source code, except for some of Jolla's UI elements. Those interested in further development can become involved through the Mer project, the Sailfish Alliance, or Jolla; Jolla, i.e. the Sailfish team, is an active contributor to the Mer project. Tizen Although Tizen was initially announced as a continuation of the MeeGo effort, there is little shared effort and architecture between these projects, since Tizen inherited much more from Samsung's LiMo than from MeeGo. As most of the Tizen work is happening behind closed doors and is done by Intel and Samsung engineers, the people involved in the former MeeGo open source project continued their work under Mer and projects associated with it. Because Tizen does not use the Qt framework, which is the core part of MeeGo's API (see above), Tizen cannot technically be considered to be a derivative of MeeGo. SUSE and Smeegol Linux On 1 June 2010, Novell announced that they would ship a SUSE Linux incarnation with MeeGo's Netbook UX (MeeGo User Experience) graphical user interface. A MeeGo-based Linux distribution with this user interface is already available from openSUSE's Goblin Team under the name Smeegol Linux; this project combines MeeGo with openSUSE to produce a new netbook-oriented Linux distribution. What makes Smeegol Linux unique when compared to the upstream MeeGo or openSUSE is that this distribution is at its core based on openSUSE but has the MeeGo User Experience as well as a few other changes, such as adding the Mono-based Banshee media player, NetworkManager-powered network configuration, a newer version of Evolution Express, and more. Any end-users can also build their own customized Smeegol Linux OS using SUSE Studio. Fedora Fedora 14 contains a selection of software from the MeeGo project. Linpus Linpus Technologies is working on bringing their services on top of MeeGo Netbook and MeeGo Tablet. Splashtop The latest version of the instant-on Splashtop platform (by Splashtop Inc., which was previously named DeviceVM Inc.) is compliant with MeeGo, and future versions of Splashtop will be based on MeeGo and will be available for commercial use in the first half of 2011. Release schedule It was announced at the Intel Developer Forum 2010 that MeeGo would follow a six-month release schedule. Version 1.0 for Atom netbooks and a code drop for the Nokia N900 became available for download. 
Project planning Launch In February 2011, Nokia announced a partnership with Microsoft for mobile handsets and the departure of Nokia's MeeGo team manager Alberto Torres, leading to speculation about Nokia's future participation in MeeGo development and its use of Windows Phone. In September 2011, Nokia began shipping the first MeeGo smartphone, the Nokia N9, ahead of the Windows Phone 7 launch expected later that year. The first MeeGo-based tablet, the WeTab, was launched in 2010 by Neofonie. In early July 2012, Nokia's MeeGo development lead Sotiris Makyrgiannis and other team members left Nokia. Companies supporting the project See also Comparison of mobile operating systems Sailfish OS – the operating system by Jolla with the Mer core, the legacy of the MeeGo OS from the Nokia and Intel partnership, developed further by Jolla Mer core – the core stack of code by merproject.org, one of the main parts of Sailfish OS; free open-source software which initially consisted of about 80% of the original MeeGo open-source code. Nokia X platform – the next Linux project by Nokia KaiOS Hongmeng OS References External links ARM operating systems Discontinued Linux distributions RPM-based Linux distributions Free mobile software Intel software Linux Foundation projects Mobile operating systems Nokia platforms Tablet operating systems Linux distributions
Operating System (OS)
656
Korora (operating system) Korora (previously Kororaa) was a remix of the Fedora Linux distribution. Originally Kororaa was a binary installation method for Gentoo Linux which aimed for easy installation of a Gentoo system by using install scripts instead of manual configuration. The name derives from the Māori word kororā – the little penguin. History Korora was started by Christopher Smart as a method to quickly reproduce a Gentoo Linux installation on multiple desktop machines. Smart also intended that Korora be used to quickly demonstrate the power of Gentoo Linux to users critical of 'compile times.' On November 7, 2007, Smart announced that he was discontinuing his work on the project, and that there would be no new versions of Korora. The introduction of the Kororaa XGL Live CD was intended to demonstrate the capabilities of Novell's Xgl and Compiz. On December 23, 2010, Smart announced the rebirth of Korora as a Fedora Remix. Korora 18 was released featuring a revised name spelled with only one A and a new logo. On May 16, 2018, Korora stopped its development. Kororaa XGL Live CD In March 2006, a Kororaa-based Live CD was released, preconfigured with Xgl capabilities. The live CD supports NVIDIA, ATI and Intel graphics cards, and the latest version (0.3) comes with both K Desktop Environment 3 and GNOME. Development hiatus and restart On November 7, 2007, Smart announced that development on Korora would be ended, and no further versions would be released; the reasons given were that: Sabayon Linux already serves a purpose as a binary Gentoo distribution; Gentoo already comes with a GUI installer; Compiz is already installed by default in the MATE edition; Korora couldn't compete with other distributions which include non-free drivers by default; and the weight of the project was too much for a single developer. On December 23, 2010, Smart announced the restart of Korora with a release of the Fedora-based version of the distro: I know that you'll be looking for something Linux related to do over your Christmas holidays and New Year, so I've just released the first installable live DVD beta for testing. The final release will be Korora 14 (derived from Fedora 14), code-named 'Nemo'. As with the original Korora, it's based on KDE. Essentially, Korora has been reborn as a Fedora remix, inspired by Rahul Sundaram's Omega GNOME remix. It aims to provide all general computing uses out of the box and it aims to include software packages that most users will want. Version history References External links RPM-based Linux distributions X86-64 Linux distributions Linux distributions
Operating System (OS)
657
Safe mode Safe mode is a diagnostic mode of a computer operating system (OS). It can also refer to a mode of operation by application software. Safe mode is intended to help fix most, if not all, problems within an operating system. It is also widely used for removing rogue security software. Background Microsoft Windows, macOS, Android, and Linux distributions such as Ubuntu and Linux Mint are examples of contemporary operating systems that implement a safe mode (called "Safe Boot" in macOS), as do other complex electronic devices. In safe mode, an operating system has reduced functionality, but the task of isolating problems is easier since many non-core components, such as sound, are disabled. An installation that will only boot into safe mode typically has a major problem, such as disk corruption or the installation of poorly configured software that prevents the operating system from successfully booting into its normal operating mode. Though it varies by operating system, safe mode typically loads only essential executable modules and disables devices except for those necessary to display information and accept input. It can also take the form of a parallel "miniature" operating system that has no configuration information shared with the normal operating system. For example, on Microsoft Windows, the user can also choose to boot to the Recovery Console, a small text-based troubleshooting mode kept separate from the main operating system (which can also be accessed by booting the install CD), or to various "safe mode" options that run the dysfunctional OS but with features, such as video drivers, audio, and networking, disabled. Safe mode typically provides access to utility and diagnostic programs so a user can troubleshoot what is preventing the operating system from working normally. Safe mode is intended for maintenance, not functionality, and it provides minimal access to features. Operating systems Windows Microsoft Windows' safe mode (for 7/Vista/XP/2000/ME/98/95) is accessed by pressing the F8 key as the operating system boots. Also, in a multi-boot environment with multiple versions of Windows installed side by side, the F8 key can be pressed at the OS selector prompt to get to safe mode. However, under Windows 8 (released in 2012), the traditional press-F8-for-safe-mode-options UI convention no longer works, and either Shift-F8 or a special GUI-based workaround is necessary. Unix An equivalently minimal setting in Unix-like operating systems is single-user mode, in which daemons and the X Window System are not started, and only the root user can log in. It can do emergency repairs or maintenance, including resetting users' passwords on the machine without the need to know the old one. macOS In macOS, holding the Shift key after powering up activates Safe Boot, which includes background maintenance features: besides the mode selection, it runs a file system repair, and in Mac OS X 10.4 it disables all fonts other than those in /System/Library/Fonts, moves to the Trash all font caches normally stored in /Library/Caches/com.apple.ATS/(uid)/ (where (uid) is a user ID number such as 501), and disables all startup items and any Login Items. Unlike Windows, where networking is disabled in safe mode by default and a separate safe-mode-with-networking option must be chosen, Safe Boot on macOS always includes networking. 
On the Classic Mac OS versions 6, 7, 8, and 9, a mode similar to Unix single-user mode is achieved by holding down the Shift key while booting, which starts the system without extensions. Android The way safe mode is activated in Android differs by vendor. Safe mode can be disabled by rebooting the device. Application software safe mode Application software sometimes offers a safe mode as well. In the PHP interpreter, prior to version 5.4, safe mode offered stricter security measures. The Glasgow Haskell Compiler, from version 7.2, offers a "Safe Haskell" mode, restricting usage of functions such as unsafePerformIO. Mozilla Firefox's safe mode allows the user to remove extensions which may be preventing the browser from loading. Internet Explorer can run in "No Add-Ons" mode and Protected Mode. Cydia's MobileSubstrate also has a safe mode that allows the user to uninstall tweaks and extensions that may crash the SpringBoard. See also BOOT.INI Bcdedit MSConfig Microsoft Windows References External links Mac OS X: Starting up in Safe Mode Booting
Operating System (OS)
658
Windows Essentials Windows Essentials (formerly Windows Live Essentials and Windows Live Installer) is a discontinued suite of Microsoft freeware applications that includes email, instant messaging, photo sharing, blogging, and parental control software. Essentials programs are designed to integrate well with each other, with Microsoft Windows, and with other Microsoft web-based services such as OneDrive and Outlook.com. Applications Windows Essentials 2012 includes the following applications: Photo Gallery Movie Maker Mail Writer OneDrive (later integrated into Windows 8.1 and Windows 10) Family Safety (Windows 7 only) Messenger Windows Essentials applications support installation on Windows 10, Windows 7, Windows 8, Windows 8.1, Windows Server 2008 R2, Windows Server 2012 and Windows Server 2012 R2. Previous versions were also available on Windows XP and Windows Vista, and included Windows Live Messenger. History Windows Live Dashboard On August 25, 2006, Microsoft began seeking testers for an invitation-only Windows Live service named Windows Live Essentials. It was very similar to Google Pack in that it allowed users to discover, install, and maintain a number of Windows Live application programs. However, the original Windows Live Essentials referred to the website that served the purpose of allowing users to discover new Windows Live services. The Windows Live Essentials website was integrated tightly with Windows Live Dashboard, an application which offered a view of the services the user already had and of what new Windows Live software and services were available. Windows Live Dashboard required users to sign in with their Windows Live ID to check whether a service had been downloaded or not. At that time, web-based services such as Windows Live Hotmail (then Windows Live Mail) were also part of the list. Shortly after its initial beta release, the original Windows Live Essentials website became unavailable and was redirected to Windows Live Betas (then Windows Live Ideas), and as a result Windows Live Dashboard also became unavailable. Windows Live Installer (Wave 2) A subsequent reappearance of a Windows Live Dashboard came with the initial "Windows Live Wave 2" unified installer for Windows Live Messenger 8.5, Mail and Writer, released on May 30, 2007. In the "Windows Live Wave 2" suite of services, Windows Live Installer was the name of the website and software serving the purposes of allowing users to discover, download and install Windows Live software and services. Users were able to select the Windows Live software they wished to install on the website, and the website would pass the information on to the unified installer software so that the installer would download and install only the selected applications. Windows Live Essentials 2009 (Wave 3) The Windows Live Installer application was significantly updated with the subsequent "Windows Live Wave 3" release of applications, with the new inclusion of Windows Live Movie Maker (beta) and Microsoft Office Outlook Connector in its suite of products. On October 29, 2008, it was announced at the Professional Developers Conference 2008 that Windows Live Installer would be renamed Windows Live Essentials, and would be integrated into Windows 7 to allow users to download the included Windows Live applications. However, the Windows Live Essentials applications would not be "bundled" with the Windows 7 operating system.
This was intended to allow more frequent updates to the Windows Live Essentials applications outside of major operating system releases. On December 15, 2008, the "beta refresh" versions of the Windows Live Essentials applications were released. This release included many changes since the previous beta release, based on user feedback. A significant visual change in this release was the introduction of new application icons, which added a common design theme to all the Essentials applications. The word "beta" was removed from most of the build numbers. On January 7, 2009, the "beta refresh" versions were released as the final versions, with the notable exception of Windows Live Movie Maker. Microsoft updated Windows Live Essentials Wave 3 on February 13, 2009 and again on August 19, 2009, when Windows Live Movie Maker was released out of beta and significantly updated with additional features since the beta version released in December 2008. The final build number was 14.0.8117.0416. After the release of Windows Live Essentials 2011, which does not support Windows XP, Windows Live Essentials 2009 was renamed Windows Live Essentials for Windows XP and was made available to Windows XP users to help maintain the product's user base. Windows Live Essentials 2011 (Wave 4) Microsoft released a public beta of the next major update to Windows Live Essentials, dubbed "Wave 4", on June 24, 2010. The updated applications included Windows Live Messenger, Mail, Photo Gallery, Movie Maker, Writer, Family Safety, Mesh, and Messenger Companion. Windows Live Mesh was rewritten to be based on the previous Live Mesh and allows PC and Mac users to keep their documents, pictures and music in sync across multiple computers. It was also announced that Windows Live Toolbar would be discontinued and replaced by the Bing Bar. In addition, the ribbon user interface was incorporated into Mail, Movie Maker, Photo Gallery, and Writer. The Wave 4 beta dropped support for Windows XP; Windows Vista or Windows 7 is required for its use. The beta refresh of Windows Live Essentials 2011 was released on August 17, 2010. Microsoft released the final version of Windows Live Essentials 2011 on September 30, 2010. The applications were updated with a hotfix/QFE (except for Mesh and Family Safety) on December 1, 2010, and that update became available through Windows Update from March 20, 2011. On May 2, 2012, Microsoft announced the re-branding of Windows Live. Although all applications in the Windows Live Essentials 2011 suite would continue to function on Windows Vista, 7, and 8, no significant updates would be made to these applications in the future. In June 2014, Microsoft announced that Windows Live Essentials 2011 would no longer be available for download on Windows Vista. Windows Essentials 2012 (Wave 5) Microsoft released Windows Essentials 2012 on August 7, 2012 for Windows 7 and Windows 8 users. Windows Essentials 2012 included SkyDrive for Windows (later renamed OneDrive), and dropped Windows Live Mesh, Messenger Companion and the Bing Bar. Microsoft Family Safety is also installed for Windows 7 users only, as Windows 8 has built-in family safety functionality. Further, Windows Essentials 2012 also dropped the "Windows Live" branding from the installer itself, as well as from programs such as Photo Gallery and Movie Maker, which were branded Windows Photo Gallery and Windows Movie Maker respectively.
These two programs also received several updates and enhancements after their 2011 release, including video stabilization, waveform visualizations, new narration tracks, audio emphasis, H.264 as the default save format, and enhanced text effects for Movie Maker, as well as AutoCollage integration and the addition of Vimeo as a publishing partner for Photo Gallery. No significant changes or re-branding were made in this release to other programs such as Windows Live Messenger, Windows Live Mail, and Windows Live Writer. The original Windows Essentials 2012 package can still be downloaded today through archived links on the Wayback Machine. Deprecation Some of the apps included in the package are unsupported on Windows 8.1 and 10 and are removed after an upgrade. The Facebook integration in Photo Gallery and Movie Maker broke due to Facebook API changes. Microsoft turned off the Windows Live Messenger service and redirected users to Skype, another of its messaging services; the installer was never updated to reflect this change. Windows Live Mail no longer works with Outlook.com using the proprietary DeltaSync protocol, due to the removal of support for the latter from Outlook.com, but such accounts can still be configured using either the IMAP or POP3 protocol. Windows Live Writer no longer works with Blogger due to Blogger's authentication API changes. The suite otherwise had not received a major upgrade since 2012, and it had not received any updates through Windows Update since 2014, although Microsoft offered a buggy downloadable update in December 2015. Microsoft announced that the suite would be officially retired on January 10, 2017 and would no longer be available for download, and that after the end-of-support date, applications already installed would continue to work, but with "an increased security risk associated with use of unsupported products past their end of support date." References Further reading External links Download for Windows Live Essentials 2012 Essentials Windows-only freeware 2006 software Discontinued Microsoft software Products and services discontinued in 2017
Operating System (OS)
659
Flex Flex or FLEX may refer to: Computing Flex (language), developed by Alan Kay FLEX (operating system), a single-tasking operating system for the Motorola 6800 FlexOS, an operating system developed by Digital Research FLEX (protocol), a communication protocol for pagers Flex, a family of automatic test equipment produced by Teradyne FLEx, a piece of software used in language documentation Apache Flex, formerly Adobe Flex, technologies for developing rich internet applications Flex, a lexical analyser generator and a free software alternative to lex Flex machine, a computer developed by RSRE in the 1980s CSS Flexible Box Layout, commonly known as Flexbox, a CSS 3 web layout model Science and technology Bending, also known as flexure, as used in mechanics Flexion, in anatomy, a position made possible by the joint angle decreasing Femtosecond Lenticule EXtraction, a form of refractive eye surgery Flex circuit, a flexible printed circuit used in electronic assemblies FLEX mission, a future satellite launch mission by the European Space Agency Flex temp, a reduced takeoff thrust setting which trades excess performance for reduced engine wear Flex-fuel, a flexible-fuel vehicle Inflection point of a curve in geometry Power cord, or flex, a flexible electrical cable Flex, a flexible electrical cable, as used on electrical appliances Music Flex (album), Lene Lovich's 1979 second album Flex (EP), a 2003 EP by Pitch Black "Flex" (Dizzee Rascal song), 2007 "Flex" (Mad Cobra song), 1992 "Flex (Ooh, Ooh, Ooh)", a 2015 song by Rich Homie Quan Flex, a break in the recitation tone before the mediation in Gregorian chant A "flex" in a "flex bar", a technique used in battle rap Organizations Flex (club), a nightclub in Vienna, Austria Flex (company) (NASDAQ symbol: FLEX), a contract electronics maker based in Singapore Flex FM, a London-based radio station Flex-Elektrowerkzeuge, a German producer of power tools Flex Linhas Aéreas, a Brazilian regional airline People Flex (singer), Félix Danilo Gómez (born 1974), Panamanian singer Flex Alexander, Mark Alexander Knox (b. 1970), an American actor and comedian Funkmaster Flex, Aston George Taylor Jr (b. 1968), an American hip hop DJ Walter Flex (1887–1917), German author Other uses Flex (comics), a fictional superhero Flex (film), a commercial/short film by English director Chris Cunningham Flex (magazine), an American bodybuilding magazine Flex (TV program), a 2021 Philippine variety show Fleet Landing Exercises, a series of landing exercises conducted by the Fleet Marine Force, a combined United States Navy/Marine landing force Flex Your Rights, a civil-liberties non-profit based in Washington DC, US Ford Flex, a crossover utility vehicle Future Leaders Exchange, an American student exchange program for high school students from the former Soviet Union See also Bend (disambiguation) Flexible (disambiguation)
Operating System (OS)
660
Compis Compis (COMPuter I Skolan, also a pun on the colloquial Swedish word kompis, meaning comrade) was a computer system intended for the general educational system in Sweden, sold to Swedish schools beginning in 1984 through the distributor Esselte Studium, which was also responsible for the software packages. The computers were also used in Danish, Finnish and Norwegian schools under the name Scandis. History In 1980, the ABC 80 used in the schools was regarded as becoming obsolete, and Styrelsen för teknisk utveckling (the board for technical development) was tasked with finding a replacement. In 1981, the procurement Tudis (Teknikupphandlingsprojekt Datorn i Skolan) was launched, and while the decision was controversial, Svenska Datorer AB was awarded the contract, with development beginning in 1982. After Svenska Datorer went bankrupt, production was transferred to TeliDatorer/Telenova under Televerket (Sweden). The computer was distributed by Esselte and exclusively marketed towards, and sold to, Swedish, Norwegian and Finnish schools, mainly high stage (years 7–9) and gymnasium level. The computer was based on the Intel 80186 CPU with CP/M-86 as the operating system in ROM (although it could also run MS-DOS from disk). The computer had a wide selection of ports, including one for a light pen. The Compis project was criticized from the start, and as the move to IBM PC compatibility came it was left behind and finally cancelled in 1988, although it remained in use well into the 1990s. Applications Notable applications run on the Compis in an educational environment were: COMAL interpreter Turbo Pascal 3.0 compiler, under the name Scandis-Pascal WordStar word processor Harmony software: word processing, spreadsheet and database. The name was a pun on Lotus Symphony, the dominant productivity software at the time. Some schools had simple local area networks of Compis/Scandis computers, in which 10–20 machines shared one hard disk with a typical capacity of 10 MB. See also Education in Sweden Unisys ICON External links Compis Info: A site dedicated to the Compis Telenova Compis: some documentation available here (page in Swedish). References Nationalencyclopedins nätupplaga, "Compis" Swedish Internet museum Personal computers Goods manufactured in Sweden
Operating System (OS)
661
Acos ACOS or Acos may refer to: Arccosine, an inverse trigonometric function Advanced Comprehensive Operating System (ACOS), a mainframe computer operating system Acos District in Peru Acos Vinchos District in Peru A Crown of Swords, a novel See also Aco (disambiguation) Cos (disambiguation)
Operating System (OS)
662
GNU GRUB GNU GRUB (short for GNU GRand Unified Bootloader, commonly referred to as GRUB) is a boot loader package from the GNU Project. GRUB is the reference implementation of the Free Software Foundation's Multiboot Specification, which provides a user the choice to boot one of multiple operating systems installed on a computer, or to select a specific kernel configuration available on a particular operating system's partitions. GNU GRUB was developed from a package called the Grand Unified Bootloader (a play on Grand Unified Theory). It is predominantly used for Unix-like systems. The GNU operating system uses GNU GRUB as its boot loader, as do most Linux distributions and the Solaris operating system on x86 systems, starting with the Solaris 10 1/06 release. Operation Booting When a computer is turned on, the BIOS finds the configured primary bootable device (usually the computer's hard disk) and loads and executes the initial bootstrap program from the master boot record (MBR). The MBR is the first sector of the hard disk, with zero as its offset (sector counting starts at zero). For a long time the size of a sector was 512 bytes, but since 2009 hard disks with a sector size of 4096 bytes, called Advanced Format disks, have been available. Such hard disks are still accessed in 512-byte sectors by utilizing 512e emulation. The legacy MBR partition table supports a maximum of four partitions and occupies 64 bytes, combined. Together with the optional disk signature (four bytes) and disk timestamp (six bytes), this leaves between 434 and 446 bytes available for the machine code of a boot loader. Although such a small space can be sufficient for very simple boot loaders, it is not big enough to contain a boot loader supporting complex and multiple file systems, menu-driven selection of boot choices, and so on. Boot loaders with bigger footprints are thus split into pieces, where the smallest piece fits into and resides within the MBR, while larger piece(s) are stored in other locations (for example, in empty sectors between the MBR and the first partition) and are invoked by the boot loader's MBR code. Operating system kernel images are in most cases files residing on appropriate file systems, but the concept of a file system is unknown to the BIOS. Thus, in BIOS-based systems, the duty of a boot loader is to access the content of those files, so they can be loaded into RAM and executed. One possible approach for boot loaders to load kernel images is by directly accessing hard disk sectors without understanding the underlying file system. Usually, an additional level of indirection is required, in the form of maps or map files: auxiliary files that contain a list of the physical sectors occupied by kernel images. Such maps need to be updated each time a kernel image changes its physical location on disk, due to the installation of new kernel images, file system defragmentation, and so on. Also, if a map changes its physical location, its location needs to be updated within the boot loader's MBR code, so the sector-indirection mechanism continues to work. This is not only cumbersome, but it also leaves the system in need of manual repairs in case something goes wrong during system updates. Another approach is to make a boot loader aware of the underlying file systems, so kernel images are configured and accessed using their actual file paths. That requires a boot loader to contain a driver for each of the supported file systems, so they can be understood and accessed by the boot loader itself.
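The 512-byte sector-zero layout described above (boot code area, 64-byte partition table, two-byte signature) can be made concrete with a short sketch in C. The struct below models the classic MBR, and the program reads sector zero of a raw disk image and checks the 0xAA55 signature, roughly what a partitioning or boot-repair tool does first. The file name disk.img is a placeholder, and the direct read of the 16-bit signature assumes a little-endian host:

```c
#include <stdio.h>
#include <stdint.h>

#pragma pack(push, 1)
struct mbr_partition_entry {
    uint8_t  status;        /* 0x80 = active/bootable */
    uint8_t  chs_first[3];  /* legacy CHS address of first sector */
    uint8_t  type;          /* partition type (e.g. 0x83 = Linux) */
    uint8_t  chs_last[3];   /* legacy CHS address of last sector */
    uint32_t lba_first;     /* LBA of first sector */
    uint32_t sectors;       /* number of sectors */
};

struct mbr {
    uint8_t  boot_code[446];            /* stage 1 machine code lives here */
    struct mbr_partition_entry part[4]; /* 4 x 16 = 64 bytes */
    uint16_t signature;                 /* bytes 0x55 0xAA on disk */
};
#pragma pack(pop)

int main(void) {
    /* "disk.img" is a placeholder for a raw disk or image file. */
    FILE *f = fopen("disk.img", "rb");
    struct mbr mbr;

    if (!f || fread(&mbr, sizeof mbr, 1, f) != 1) {
        fprintf(stderr, "could not read sector 0\n");
        return 1;
    }
    fclose(f);

    printf("boot signature: %s\n",
           mbr.signature == 0xAA55 ? "valid" : "missing");
    for (int i = 0; i < 4; i++)
        if (mbr.part[i].type != 0)
            printf("partition %d: type 0x%02x, first LBA %u, %u sectors\n",
                   i, mbr.part[i].type, (unsigned)mbr.part[i].lba_first,
                   (unsigned)mbr.part[i].sectors);
    return 0;
}
```

When present, the optional disk signature and timestamp live inside this 446-byte region, which is why the space actually available for boot loader code can drop to the 434-byte figure mentioned above.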
This second, file-system-aware approach eliminates the need for hardcoded locations of hard disk sectors and the existence of map files, and does not require MBR updates after kernel images are added or moved around. Configuration of such a boot loader is stored in a regular file, which is also accessed in a file-system-aware way to obtain boot configurations before the actual booting of any kernel images. As a result, the possibility of things going wrong during various system updates is significantly reduced. As a downside, such boot loaders have increased internal complexity and even bigger footprints. GNU GRUB uses the second approach, by understanding the underlying file systems. The boot loader itself is split into multiple stages, allowing it to fit within the MBR boot scheme. Two major versions of GRUB are in common use: GRUB version 1, called GRUB Legacy, is only prevalent in older releases of Linux distributions. GRUB 2 was written from scratch, intended to replace its predecessor, and is now used by a majority of Linux distributions. Version 0 (GRUB Legacy) GRUB 0.x follows a two-stage approach. The master boot record (MBR) usually contains GRUB stage 1, or can contain a standard MBR implementation which chainloads GRUB stage 1 from the active partition's boot sector. Given the small size of a boot sector (512 bytes), stage 1 can do little more than load the next stage of GRUB by loading a few disk sectors from a fixed location near the start of the disk (within its first 1024 cylinders). Stage 1 can load stage 2 directly, but it is normally set up to load stage 1.5, located in the first 30 KiB of the hard disk immediately following the MBR and before the first partition. In case this space is not available (unusual partition table, special disk drivers, GPT or LVM disk), the installation of stage 1.5 will fail. The stage 1.5 image contains file system drivers, enabling it to directly load stage 2 from any known location in the file system, for example from /boot/grub. Stage 2 will then load the default configuration file and any other modules needed. Version 2 (GRUB 2) Startup on systems using BIOS firmware boot.img (stage 1) is written to the first 440 bytes of the master boot record (MBR boot code in sector 0), or optionally to a partition boot sector (PBR). It addresses diskboot.img by a 64-bit LBA address; the actual sector number is written by grub-install. diskboot.img is the first sector of core.img, with the sole purpose of loading the rest of core.img, identified by LBA sector numbers also written by grub-install. On MBR-partitioned disks, core.img (stage 1.5) is stored in the empty sectors (if available) between the MBR and the first partition. Recent operating systems suggest a 1 MiB gap here for alignment (2047 512-byte sectors or 255 4-KiB sectors). This gap used to be 62 sectors (31 KiB), a holdover of the sector-number limit of Cylinder-Head-Sector (C/H/S) addressing used by BIOS before 1996; for this reason core.img is designed to be smaller than 32 KiB. On GPT-partitioned disks, primary partitions are not limited to 4, so core.img is written to its own tiny (1 MiB), filesystem-less BIOS boot partition. stage 2: core.img loads /boot/grub/i386-pc/normal.mod from the partition configured by grub-install. If the partition index has changed, GRUB will be unable to find normal.mod, and presents the user with the GRUB Rescue prompt.
Depending on how GRUB 2 was installed, the /boot/grub/ directory is either in the root partition of the Linux distribution or in the separate /boot partition. After normal.mod is loaded, it parses /boot/grub/grub.cfg, optionally loads modules (e.g. for the graphical UI and file system support) and shows the menu. Startup on systems using UEFI firmware /efi/<distro>/grubx64.efi (for x64 UEFI systems) is installed as a file in the EFI System Partition, and is booted by the firmware directly, without a boot.img in MBR sector 0. This single file takes the place of both stage 1 and stage 1.5. /boot/grub/ can be installed on the EFI System Partition or on the separate /boot partition. For x64 UEFI systems, stage 2 consists of the /boot/grub/x86_64-efi/normal.mod file and the other /boot/grub/ files. After startup GRUB presents a menu where the user can choose from the operating systems (OS) found by grub-install. GRUB can be configured to automatically load a specified OS after a user-defined timeout. If the timeout is set to zero seconds, pressing and holding the Shift key while the computer is booting makes it possible to access the boot menu. In the operating system selection menu GRUB accepts a couple of commands: By pressing e, it is possible to edit kernel parameters of the selected menu item before the operating system is started. The reason for doing this in GRUB (i.e. not editing the parameters in an already booted system) can be an emergency case: the system has failed to boot. Using the kernel parameters line it is possible, among other things, to specify a module to be disabled (blacklisted) for the kernel. This could be required if the specific kernel module is broken and thus prevents boot-up. For example, to blacklist the kernel module nvidia-current, one could append modprobe.blacklist=nvidia-current at the end of the kernel parameters (a running system's parameters can be read back afterwards, as in the sketch below). By pressing c, the user enters the GRUB command line. The GRUB command line is not a regular Linux shell, like e.g. bash, and accepts only certain GRUB-specific commands, documented by various Linux distributions. Once boot options have been selected, GRUB loads the selected kernel into memory and passes control to the kernel. Alternatively, GRUB can pass control of the boot process to another boot loader, using chain loading. This is the method used to load operating systems that do not support the Multiboot Specification or are not supported directly by GRUB. History GRUB was initially developed by Erich Boleyn as part of work on booting the operating system GNU/Hurd, developed by the Free Software Foundation. In 1999, Gordon Matzigkeit and Yoshinori K. Okuji made GRUB an official software package of the GNU Project and opened the development process to the public. Today, the majority of Linux distributions have adopted GNU GRUB 2, as have other systems such as Sony's PlayStation 4. Development GRUB version 1 (also known as "GRUB Legacy") is no longer under development and is being phased out. The GNU GRUB developers have switched their focus to GRUB 2, a complete rewrite with goals including making GNU GRUB cleaner, more robust, more portable and more powerful. GRUB 2 started under the name PUPA. PUPA was supported by the Information-technology Promotion Agency (IPA) in Japan. PUPA was integrated into GRUB 2 development around 2002, when GRUB version 0.9x was renamed GRUB Legacy.
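Returning to the kernel parameters discussed above: once Linux is running, the exact parameter string that GRUB passed to the kernel is exposed at /proc/cmdline, which makes it easy to verify that an edit made at the boot menu actually took effect. A minimal sketch, assuming a Linux system with /proc mounted:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char cmdline[4096];
    FILE *f = fopen("/proc/cmdline", "r");

    if (!f || !fgets(cmdline, sizeof cmdline, f)) {
        fprintf(stderr, "cannot read /proc/cmdline\n");
        return 1;
    }
    fclose(f);

    /* Print the full parameter string GRUB passed to the kernel. */
    printf("kernel parameters: %s", cmdline);

    /* Check whether a particular parameter made it through. */
    if (strstr(cmdline, "modprobe.blacklist="))
        printf("a module blacklist is in effect\n");
    return 0;
}
```

Running this after booting with the modprobe.blacklist=nvidia-current edit from the example above would show that parameter in the printed string.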
Some of the goals of the GRUB 2 project include support for non-x86 platforms, internationalization and localization, non-ASCII characters, dynamic modules, memory management, a scripting mini-language, migrating platform-specific (x86) code to platform-specific modules, and an object-oriented framework. GNU GRUB version 2.00 was officially released on June 26, 2012. Three of the most widely used Linux distributions use GRUB 2 as their mainstream boot loader. Ubuntu adopted it as the default boot loader in its 9.10 version of October 2009. Fedora followed suit with Fedora 16, released in November 2011. OpenSUSE adopted GRUB 2 as the default boot loader with its 12.2 release of September 2012. Solaris also adopted GRUB 2 on the x86 platform in the Solaris 11.1 release. In late 2015, an exploit allowing the login password to be bypassed by pressing backspace 28 times was found and quickly fixed. Variants GNU GRUB is free and open-source software, so several variants have been created. Some notable ones, which have not been merged into GRUB mainline: OpenSolaris includes a modified GRUB Legacy that supports Solaris VTOC slices, automatic 64-bit kernel selection, and booting from ZFS (with compression and multiple boot environments). Google Summer of Code 2008 had a project to enable GRUB Legacy to boot from ext4-formatted partitions. The Syllable project made a modified version of GRUB to load the system from its AtheOS File System. TrustedGRUB extends GRUB by implementing verification of the system integrity and boot process security, using the Trusted Platform Module (TPM). The Intel BIOS Implementation Test Suite (BITS) provides a GRUB environment for testing BIOSes and in particular their initialization of Intel processors, hardware, and technologies. BITS supports scripting via Python, and includes Python APIs to access various low-level functionality of the hardware platform, including ACPI, CPU and chipset registers, PCI, and PCI Express. GRUB4DOS is a GRUB Legacy fork that improves the installation experience on DOS and Microsoft Windows by putting everything besides the GRLDR config in one image file. It can be loaded directly from DOS, or by NTLDR or the Windows Boot Manager. GRUB4DOS is under active development and as of 2021 supports UEFI. Utilities GRUB configuration tools The setup tools in use by various distributions often include modules to set up GRUB, for example YaST2 on the SUSE Linux and openSUSE distributions and Anaconda on the Fedora/RHEL distributions. StartUp-Manager and GRUB Customizer are graphical configuration editors for Debian-based distributions. The development of StartUp-Manager stopped on 6 May 2011 after the lead developer cited personal reasons for not actively developing the program. GRUB Customizer is also available for Arch-based distributions. For GRUB 2 there are KDE Control Modules. GRLDR ICE is a tiny tool for modifying the default configuration of the grldr file for GRUB4DOS. Boot repair utilities Boot-Repair is a simple graphical tool for recovering from common boot-related problems with GRUB and the Microsoft Windows bootloader. The application is available under the GNU GPL license. Boot-Repair can repair GRUB on multiple Linux distributions including, but not limited to, Debian, Ubuntu, Mint, Fedora, openSUSE, and Arch Linux. Installer for Windows Grub2Win is an open-source software package for Windows. It allows GNU GRUB to boot from a Windows directory. The setup program installs GNU GRUB version 2.06 to an NTFS partition.
A Windows GUI application is then used to customize the GRUB boot menu, themes, UEFI boot order, scripts, and so on. All GNU GRUB scripts and commands are supported for both UEFI and legacy systems. Grub2Win can configure GRUB for multiboot of Windows, Ubuntu, openSUSE, Fedora and many other Linux distributions. It is freely available under the GNU GPL license at SourceForge. Alternative boot-managers The strength of GRUB is the wide range of supported platforms, file systems, and operating systems, making it the default choice for distributions and embedded systems. However, there are boot managers targeted at the end user that give a friendlier user experience, a graphical OS selector and simpler configuration: rEFInd – Macintosh-style graphical boot manager, only for UEFI-based computers (BIOS not supported). CloverEFI – Macintosh-style graphical boot manager for BIOS and UEFI-based computers. Emulates UEFI with a heavily modified DUET from the TianoCore project. Requires a FAT-formatted partition even on BIOS systems. As a benefit, it has a basic filesystem driver in the partition boot sector, avoiding the brittleness of GRUB's second and third stages and the infamous GRUB Rescue prompt. The user interface looks similar to rEFInd: both inherit from the abandoned boot manager rEFIt. Non-graphical alternatives: systemd-boot – a light, UEFI-only boot manager with a text-based OS selector menu. External links How-Tos and troubleshooting Distribution wikis have many solutions for common issues and custom setups that may help: Arch Linux /GRUB Ubuntu /Grub2 (also see Links at the bottom) Fedora /GRUB_2 Gentoo /GRUB2 Grub2 theme tutorial Documentation GRUB manual – most detailed documentation, including all commands GRUB wiki archived in 2010 Introductory articles Boot with GRUB, an April 2001 article in Linux Journal Technicalities Booting Linux on x86 using Grub2 – in-depth article Unified Extensible Firmware Interface (UEFI firmware, common since 2012) GUID Partition Table (GPT) – handles hard drives bigger than 2 TiB and more than 4 partitions Master boot record used with BIOS firmware (motherboards roughly before 2012) BIOS Boot Specification Version 1.01 (January 11, 1996) – hard to find See also SysLinux (IsoLinux) – commonly used bootloader on CDs, DVDs BOOTMGR – current Windows bootloader NTLDR - previous Windows bootloader, used before Windows Vista rEFInd - alternative boot loader for UEFI-based computers Comparison of boot loaders Notes References Free boot loaders Free software primarily written in assembly language Free software programmed in C GRUB Research projects 1995 software
Operating System (OS)
663
Windows Server 2003 Windows Server 2003 is the first version of the Windows Server line of operating systems produced by Microsoft. It is part of the Windows NT family of operating systems and was released on April 24, 2003. Windows Server 2003 is the successor to the Server editions of Windows 2000 and the predecessor to Windows Server 2008. An updated version, Windows Server 2003 R2, was released to manufacturing on December 6, 2005. Windows Server 2003's kernel was later adopted in the development of Windows Vista and Windows XP Professional x64 Edition. Overview Windows Server 2003 is the follow-up to Windows 2000 Server, incorporating compatibility and other features from Windows XP. Unlike Windows 2000, Windows Server 2003's default installation has none of the server components enabled, to reduce the attack surface of new machines. Windows Server 2003 includes compatibility modes to allow older applications to run with greater stability. It was made more compatible with Windows NT 4.0 domain-based networking. Windows Server 2003 brought in enhanced Active Directory compatibility and better deployment support to ease the transition from Windows NT 4.0 to Windows Server 2003 and Windows XP Professional. Windows Server 2003 is the first server edition of Windows to support the IA-64 and x64 architectures. The product went through several name changes during the course of development. When first announced in 2000, it was known by its codename "Whistler Server"; it was named "Windows 2002 Server" for a brief time in mid-2001, followed by "Windows .NET Server" and "Windows .NET Server 2003". After Microsoft chose to focus the ".NET" branding on the .NET Framework, the OS was finally released as "Windows Server 2003". Development Windows Server 2003 was the first Microsoft Windows version that was thoroughly subjected to semi-automated testing for bugs with a software system called PREfast, developed by computer scientist Amitabh Srivastava at Microsoft Research. The automated bug checking system had first been tested on Windows 2000, but not thoroughly. Srivastava's PREfast found 12% of Windows Server 2003's bugs; the remaining 88% were found by human programmers. Microsoft employs more than 4,700 programmers who work on Windows, 60% of whom are software testers whose job is to find bugs in the Windows source code. Microsoft co-founder Bill Gates stated that Windows Server 2003 was Microsoft's "most rigorously tested software to date." Microsoft later used Windows Server 2003's kernel in the development of Windows Vista after its reset. Changes The following features are new to Windows Server 2003: Internet Information Services (IIS) v6.0 Significant improvements to Message Queuing Manage Your Server – a role management administrative tool that allows an administrator to choose what functionality the server should provide Improvements to Active Directory, such as the ability to deactivate classes from the schema, or to run multiple instances of the directory server (ADAM) Improvements to Group Policy handling and administration Provides a backup system to restore lost files Improved disk management, including the ability to back up from shadows of files, allowing the backup of open files Improved scripting and command-line tools, which are part of Microsoft's initiative to bring a complete command shell to the next version of Windows Support for a hardware-based "watchdog timer", which can restart the server if the operating system does not respond within a certain amount of time.
The ability to create a rescue disk was removed in favor of Automated System Recovery (ASR). Editions Windows Server 2003 comes in a number of editions, each targeted towards a particular size and type of business. In general, all variants of Windows Server 2003 have the ability to share files and printers, act as an application server, host message queues, provide email services, authenticate users, act as an X.509 certificate server, provide LDAP directory services, serve streaming media, and perform other server-oriented functions. Web Windows Server 2003 Web is meant for building and hosting Web applications, Web pages, and XML web services. It is designed to be used primarily as an IIS web server and provides a platform for developing and deploying XML Web services and applications that use ASP.NET technology. Domain controller and Terminal Services functionality are not included in Web Edition. However, Remote Desktop for Administration is available. Only 10 concurrent file-sharing connections are allowed at any moment. It is not possible to install Microsoft SQL Server and Microsoft Exchange software in this edition without installing Service Pack 1. Despite supporting XML Web services and ASP.NET, UDDI cannot be deployed on Windows Server 2003 Web. The .NET Framework version 2.0 is not included with Windows Server 2003 Web, but can be installed as a separate update from Windows Update. Windows Server 2003 Web supports a maximum of 2 physical processors and a maximum of 2 GB of RAM. It is the only edition of Windows Server 2003 that does not require any client access license (CAL) when used as the internet-facing server front-end for Internet Information Services and Windows Server Update Services. When using it for storage or as a back-end with another remote server as the front-end, CALs may still be required. Standard Microsoft Windows Server 2003 Standard is aimed towards small to medium-sized businesses. Standard Edition supports file and printer sharing, offers secure Internet connectivity, and allows centralized desktop application deployment. A specialized variant for the x64 architecture was released in April 2005. The IA-32 variant supports up to four physical processors and up to 4 GB RAM; the x64 variant is capable of addressing up to 32 GB of RAM and also supports Non-Uniform Memory Access. Enterprise Windows Server 2003 Enterprise is aimed towards medium to large businesses. It is a full-function server operating system that supports up to 8 physical processors and provides enterprise-class features such as eight-node clustering using Microsoft Cluster Server (MSCS) software and support for up to 64 GB of RAM through PAE. Enterprise Edition also comes in specialized variants for the x64 and Itanium architectures. With Service Pack 2 installed, the x64 and Itanium variants are capable of addressing up to 1 TB and 2 TB of RAM, respectively. This edition also supports Non-Uniform Memory Access (NUMA). It also provides the ability to hot-add supported hardware. Windows Server 2003 Enterprise is also the edition required to issue custom certificate templates. Datacenter Windows Server 2003 Datacenter is designed for infrastructures demanding high security and reliability. It is available for IA-32, Itanium, and x64 processors. It supports a maximum of 32 physical processors on the IA-32 platform or 64 physical processors on x64 and IA-64 hardware. IA-32 variants of this edition support up to 64 GB of RAM.
With Service Pack 2 installed, the x64 variants support up to 1 TB while the IA-64 variants support up to 2 TB of RAM. Windows Server 2003 Datacenter also allows limiting processor and memory usage on a per-application basis. This edition has better support for storage area networks (SANs): it features a service which uses Windows sockets to emulate TCP/IP communication over native SAN service providers, thereby allowing a SAN to be accessed over any TCP/IP channel. With this, any application that can communicate over TCP/IP can use a SAN, without any modification to the application. The Datacenter edition, like the Enterprise edition, supports 8-node clustering. Clustering increases availability and fault tolerance of server installations by distributing and replicating the service among many servers. This edition supports clustering with each cluster having its own dedicated storage, or with all cluster nodes connected to a common SAN. Derivatives Windows Compute Cluster Server Windows Compute Cluster Server 2003 (CCS), released in June 2006, is designed for high-end applications that require high-performance computing clusters. It is designed to be deployed on numerous computers clustered together to achieve supercomputing speeds. Each Compute Cluster Server network comprises at least one controlling head node and subordinate processing nodes that carry out most of the work. Compute Cluster Server uses the Microsoft Messaging Passing Interface v2 (MS-MPI) to communicate between the processing nodes on the cluster network. It ties nodes together with a powerful inter-process communication mechanism which can be complex because of communications between hundreds or even thousands of processors working in parallel. The application programming interface consists of over 160 functions. A job launcher enables users to submit jobs for execution on the computing cluster. MS-MPI was designed to be compatible with the MPI-2 specification, whose reference open-source implementation is widely used in high-performance computing (HPC). With some exceptions due to security considerations, MS-MPI covers the complete set of MPI-2 functionality as implemented in MPICH2, except for the planned future features of dynamic process spawning and publishing. Windows Storage Server Windows Storage Server 2003, a part of the Windows Server 2003 series, is a specialized server operating system for network-attached storage (NAS). Launched in 2003 at Storage Decisions in Chicago, it is optimized for use in file and print sharing and also in storage area network (SAN) scenarios. It is only available through original equipment manufacturers (OEMs). Unlike other Windows Server 2003 editions that provide file and printer sharing functionality, Windows Storage Server 2003 does not require any CAL. Windows Storage Server 2003 NAS equipment can be headless, which means it operates without any monitor, keyboard or mouse, and is administered remotely. Such devices are plugged into any existing IP network and the storage capacity is available to all users. Windows Storage Server 2003 can use RAID arrays to provide data redundancy, fault tolerance and high performance. Multiple such NAS servers can be clustered to appear as a single device, which allows responsibility for serving clients to be shared in such a way that, if one server fails, other servers can take over (often termed a failover), which also improves fault tolerance.
Windows Storage Server 2003 can also be used to create a storage area network, in which data is transferred in blocks rather than files, thus providing more granularity in the data that can be transferred. This provides higher performance for database and transaction-processing applications. Windows Storage Server 2003 also allows NAS devices to be connected to a SAN. Windows Storage Server 2003 R2, as a follow-up to Windows Storage Server 2003, adds file-server performance optimization, Single Instance Storage (SIS), and index-based search. Single Instance Storage scans storage volumes for duplicate files and moves the duplicate files to the common SIS store. The file on the volume is replaced with a link to the file. This substitution reduces the amount of storage space required by as much as 70%. Windows Storage Server 2003 R2 provides an index-based, full-text search based on the indexing engine already built into Windows Server. The updated search engine speeds up indexed searches on network shares. This edition also provides filters for searching many standard file formats, such as .zip, AutoCAD, XML, MP3, and .pdf, and all Microsoft Office file formats. Windows Storage Server 2003 R2 includes built-in support for Windows SharePoint Services and Microsoft SharePoint Portal Server, and adds a Storage Management snap-in for the Microsoft Management Console. It can be used to manage storage volumes centrally, including DFS shares, on servers running Windows Storage Server R2. Windows Storage Server 2003 R2 can be used as an iSCSI target with the Standard and Enterprise editions of Windows Storage Server 2003 R2, incorporating WinTarget iSCSI technology, which Microsoft acquired in 2006 from String Bean Software. This is an add-on feature available for purchase through OEM partners as an iSCSI feature pack, or included in some versions of WSS as configured by OEMs. Windows Storage Server 2003 can be promoted to function as a domain controller; however, this edition is not licensed to run directory services. It can be joined to an existing domain as a member server. Features Distributed File System (DFS): DFS allows multiple network shares to be aggregated as a virtual file system. Support for SAN and iSCSI: computers can connect to a Storage Server over the LAN, and there is no need for a separate fibre channel network. Thus a storage area network can be created over the LAN itself. iSCSI uses the SCSI protocol to transfer data as blocks of bytes, rather than as files. This increases performance of the storage network in some scenarios, such as using a database server. Virtual Disk Service: it allows NAS devices, RAID devices and SAN shares to be exposed and managed as if they were normal hard drives. JBOD systems: JBOD (just a bunch of disks) systems, by using VDS, can manage a group of individual storage devices as a single unit. There is no need for the storage units to be of the same maker and model. Software and hardware RAID: Windows Storage Server 2003 has intrinsic support for hardware implementations of RAID. In case hardware support is not available, it can use software-enabled RAID, in which case all processing is done by the OS. Multipath I/O (MPIO): it provides an alternate connection to I/O devices in case the primary path is down.
Editions Windows Storage Server 2003 R2 was available in several editions. Windows Unified Data Storage Server is a variant of Windows Storage Server 2003 R2 with iSCSI target support as standard, available only in the Standard and Enterprise editions. Windows Small Business Server Windows Small Business Server (SBS) is a software suite which includes Windows Server and additional technologies aimed at providing a small business with a complete technology solution. The Standard edition of SBS includes Microsoft Remote Web Workplace, Windows SharePoint Services, Microsoft Exchange Server, Fax Server, Active Directory, a basic firewall, DHCP server and network address translation capabilities. The Premium edition of SBS adds Microsoft SQL Server 2000 and Microsoft ISA Server 2004. SBS has its own type of CAL, which differs from, and costs slightly more than, CALs for the other editions of Windows Server 2003. However, the SBS CAL encompasses the user CALs for Windows Server, Exchange Server, SQL Server and ISA Server, and hence is less expensive than buying all the other CALs individually. SBS has the following design limitations, mainly affecting Active Directory: Only one computer in a Windows Server domain can be running SBS SBS must be the root of the Active Directory forest SBS cannot trust any other domains SBS is limited to 75 users or devices depending on the type of CAL SBS is limited to a maximum of 4 GB of RAM SBS domains cannot have any child domains Terminal Services only operates in remote administration mode on SBS, meaning that only two simultaneous RDP sessions are allowed To remove the limitations from an instance of SBS and upgrade to regular Windows Server, Exchange Server, SQL and ISA Server, there is a Windows Small Business Server 2003 R2 Transition Pack. Windows Home Server Windows Home Server is an operating system from Microsoft based on Windows Small Business Server 2003 SP2 (this can be seen in the directory listings of the installation DVD). Windows Home Server was announced on January 7, 2007, at the Consumer Electronics Show by Bill Gates and is intended to be a solution for homes with multiple connected PCs, offering file sharing, automated backups, and remote access. Windows Home Server began shipment to OEMs on September 15, 2007. Windows Server for Embedded Systems Windows Server 2003 for Embedded Systems replaced "Windows 2000 Server for Embedded Systems". Its intended use was for building firewall, VPN and caching servers and similar appliances. Variants were available with "Server Appliance Software" and with "Microsoft Internet Security and Acceleration Server". Availability of the original version ended May 28, 2003. Availability of R2 ended March 5, 2006. End of extended support was July 14, 2015 (all variants except Storage Server), and end of licence was May 28, 2018 (R2 and original). The end-of-licence date is the last date that OEMs may distribute systems using this version. All variants continued to receive critical security updates until the end of extended support. Release 2 for Embedded Systems was available in 32-bit and 64-bit variants, Standard (1–4 CPU) and Enterprise (1–8 CPU). Windows XP Professional x64 Edition Windows XP Professional x64 Edition was released less than a month after Windows Server 2003 SP1, and used the same kernel and source code tree.
While many features of the 32-bit variant of Windows XP were brought over into Windows XP Professional x64 Edition, other limitations, such as support for only 64-bit drivers and the dropped support for 16-bit programs, led to incompatibilities with the 32-bit Windows XP editions. It later received a Service Pack update as part of the release of Windows Server 2003 SP2. Updates Service Pack 1 On March 30, 2005, Microsoft released Service Pack 1 for Windows Server 2003. Among the improvements are many of the same updates that were provided to Windows XP users with Service Pack 2. Features that are added with Service Pack 1 include: Security Configuration Wizard: A tool that allows administrators to more easily research, and make changes to, security policies. Hot Patching: This feature extends Windows Server 2003's ability to take DLL, driver, and non-kernel patches without a reboot. IIS 6.0 Metabase Auditing: Allows the tracking of metabase edits. Windows Firewall: Brings many of the improvements from Windows XP Service Pack 2 to Windows Server 2003; together with the Security Configuration Wizard, it allows administrators to more easily manage incoming open ports, as it will automatically detect and select default roles. Other networking improvements include support for Wireless Provisioning Services, better IPv6 support, and new protections against SYN flood TCP attacks. Post-Setup Security Updates: A default mode that is turned on when a Service Pack 1 server is first booted up after installation. It configures the firewall to block all incoming connections, and directs the user to install updates. Data Execution Prevention (DEP): Support for the No-Execute (NX) bit, which helps to prevent buffer overflow exploits that are often the attack vector of Windows Server exploits. Windows Media Player version 10 Internet Explorer 6 SV1 (i.e. "IE6 SP2") Support for fixed disks bearing data organized using the GUID Partition Table system A full list of updates is available in the Microsoft Knowledge Base. Service Pack 2 Service Pack 2 for Windows Server 2003 was released on March 13, 2007. The release date was originally scheduled for the first half of 2006. On June 13, 2006, Microsoft made an initial test version of Service Pack 2 available to Microsoft Connect users, with a build number of 2721. This was followed by build 2805, known as Beta 2 Refresh. The final build is 3790. Microsoft has described Service Pack 2 as a "standard" service pack release containing previously released security updates, hotfixes, and reliability and performance improvements. In addition, Service Pack 2 contains Microsoft Management Console 3.0, Windows Deployment Services (which replaces Remote Installation Services), support for WPA2, and improvements to IPsec and MSConfig. Service Pack 2 also adds the Windows Server 2003 Scalable Networking Pack (SNP), which allows hardware acceleration for processing network packets, thereby enabling faster throughput. SNP was previously available as an out-of-band update for Windows Server 2003 Service Pack 1. Windows Server 2003 R2 Windows Server 2003 R2 is the title of a complementary offering by Microsoft. It consists of a copy of Windows Server 2003 SP1 on one CD and a host of optionally installed new features (reminiscent of Microsoft Plus!) on another. It was released to manufacturing on December 6, 2005 for the IA-32 and x64 platforms, but not for IA-64. It was succeeded by Windows Server 2008.
New features of Windows Server 2003 R2 include: .NET Framework 2.0 Active Directory Federation Services Microsoft Management Console version 3.0. Additionally, several new snap-ins are included: Print Management Console, for managing print servers File Server Resource Manager, for managing disk quotas on file servers Storage Manager for SANs, for managing LUNs A new version of Distributed File System that includes remote differential compression technology Microsoft Virtual Server 2005, a hypervisor and the precursor to Hyper-V Windows Services for UNIX Support lifecycle On July 13, 2010, Windows Server 2003's mainstream support expired and the extended support phase began. During the extended support phase, Microsoft continued to provide security updates; however, free technical support, warranty claims, and design changes were no longer offered. Extended support lasted until July 14, 2015. Although Windows Server 2003 is unsupported, Microsoft released an emergency security patch in May 2017 for the OS as well as other unsupported versions of Windows (including Windows Vista and Windows 7 RTM without a service pack) to address a vulnerability that was being leveraged by the WannaCry ransomware attack. Microsoft announced in 2020 that it would disable the Windows Update service for SHA-1 endpoints; since Windows Server 2003 did not receive an update for SHA-2, Windows Update services have been unavailable on the OS since late July 2020. However, as of April 2021, the old updates for Windows Server 2003 were still available on the Microsoft Update Catalog. Source code leak On September 23, 2020, the Windows XP Service Pack 1 and Windows Server 2003 source code was leaked onto the imageboard 4chan by an unknown user. Anonymous users managed to compile the code, as did a Twitter user who posted videos of the process on YouTube, proving that the code was genuine; the videos were struck down on copyright grounds by Microsoft. The leak was incomplete, as it was missing the Winlogon source code and some other components. The original leak was spread using magnet links and torrent files whose payload originally included the Server 2003 and XP source code and which was later updated with additional files, among which were previous leaks of Microsoft products, Microsoft patents, media spreading conspiracy theories about Bill Gates from anti-vaccination movements, and an assortment of PDF files on different topics. Microsoft issued a statement stating that it was investigating the leaks. See also BlueKeep (security vulnerability) Comparison of Microsoft Windows versions Comparison of operating systems History of Microsoft Windows List of operating systems Microsoft Servers References External links Windows Server 2003 on Microsoft TechNet Windows Server 2003 Downloads on Microsoft TechNet Windows Server Performance Team Blog Kernel comparison with Linux 2.6 by David Solomon, Mark Russinovich, and Andreas Polze 2003 software Products and services discontinued in 2015 Windows Server IA-32 operating systems X86-64 operating systems
Operating System (OS)
664
XNU XNU is the computer operating system (OS) kernel developed at Apple Inc. since December 1996 for use in the Mac OS X (now macOS) operating system and released as free and open-source software as part of the Darwin OS, which is the basis for the Apple TV Software, iOS, iPadOS, watchOS, and tvOS OSes. XNU is an abbreviation of X is Not Unix. Originally developed by NeXT for the NeXTSTEP operating system, XNU was a hybrid kernel derived from version 2.5 of the Mach kernel developed at Carnegie Mellon University, which incorporated the bulk of the 4.3BSD kernel modified to run atop Mach primitives, along with an application programming interface (API) in Objective-C for writing drivers, named Driver Kit. After Apple acquired NeXT, the kernel was updated with code derived from OSFMK 7.3 from OSF and from the FreeBSD project, and the Driver Kit was replaced with a C++ API for writing drivers, named I/O Kit. Kernel design XNU is a hybrid kernel, containing features of both monolithic kernels and microkernels, attempting to make the best use of both technologies, such as the message-passing ability of microkernels, which enables greater modularity and lets larger portions of the OS benefit from memory protection, while retaining the speed of monolithic kernels for some critical tasks. XNU currently runs on ARM64 and x86-64 processors, in both single-processor and symmetric multiprocessing (SMP) configurations. PowerPC support was removed as of the version in Mac OS X 10.6. Support for IA-32 was removed as of the version in Mac OS X 10.7; support for 32-bit ARM was removed in a later version. Mach The basis of the XNU kernel is a heavily modified (hybrid) Open Software Foundation Mach kernel (OSFMK) 7.3. As such, it is able to run the core of an operating system as separate processes, which allows great flexibility (it could run several operating systems in parallel above the Mach core), but this often reduces performance because of time-consuming kernel/user mode context switches and overhead stemming from mapping or copying messages between the address spaces of the kernel and those of the service daemons. With macOS, the designers have attempted to streamline some tasks, and thus BSD functions were built into the core with Mach. The result is a heavily modified (hybrid) OSFMK 7.3 kernel; Apple licensed OSFMK 7.3, which is a microkernel, from the OSF. OSFMK 7.3 includes applicable code from the University of Utah Mach 4 kernel and from the many Mach 3.0 variants forked from the original Carnegie Mellon University Mach 3.0 microkernel. BSD The Berkeley Software Distribution (BSD) part of the kernel provides the Portable Operating System Interface (POSIX) application programming interface (API, BSD system calls), the Unix process model atop Mach tasks, basic security policies, user and group IDs, permissions, the network protocol stack, the virtual file system code (including a file system independent journaling layer), several local file systems such as Hierarchical File System (HFS, HFS Plus (HFS+)) and Apple File System (APFS), the Network File System (NFS) client and server, the cryptographic framework, UNIX System V inter-process communication (IPC), the audit subsystem, mandatory access control, and some of the locking primitives. The BSD code present in XNU has been most recently synchronised with that from the FreeBSD kernel. Although much of it has been significantly modified, code sharing still occurs between Apple and the FreeBSD Project.
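Since the BSD portion of XNU supplies the POSIX API, ordinary BSD system calls are the usual way to observe the kernel from user space. A minimal sketch using uname(3), a standard POSIX call which on macOS reports a Darwin version string that typically embeds the xnu source tag:

```c
#include <stdio.h>
#include <sys/utsname.h>

int main(void) {
    struct utsname u;

    /* uname() is part of the POSIX interface provided by XNU's BSD layer. */
    if (uname(&u) != 0) {
        perror("uname");
        return 1;
    }

    printf("sysname: %s\n", u.sysname);  /* "Darwin" on macOS */
    printf("release: %s\n", u.release);  /* Darwin kernel version */
    printf("version: %s\n", u.version);  /* build string, usually containing "xnu-..." */
    printf("machine: %s\n", u.machine);  /* e.g. "arm64" or "x86_64" */
    return 0;
}
```

On a recent Mac the version field reads something like "Darwin Kernel Version ... root:xnu-...", directly exposing the kernel source tag described in this article.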
K32/K64 XNU in Mac OS X Snow Leopard (v10.6, Darwin version 10) comes in two varieties, a 32-bit version called K32 and a 64-bit version called K64. K32 can run 64-bit applications in userland. What was new in Mac OS X 10.6 was the ability to run XNU in 64-bit kernel space. K32 was the default kernel for 10.6 Server on all machines except Mac Pro and Xserve models from 2008 onwards, which booted K64 by default. K64 has several benefits compared to K32: It can manage more than 32 GB of RAM, since with a 32-bit kernel the memory map for that much RAM would consume a disproportionately large area of the kernel space. Cache buffer sizes can be larger than what the 32-bit kernel space allows, potentially increasing I/O performance. Performance is increased when using high-performance networking devices or multiple graphics processing units (GPUs), as the kernel can map all of the devices in 64-bit space even if several have very large direct memory access (DMA) buffers. Booting while holding down the 6 and 4 keys forces the machine to boot K64 on machines supporting 64-bit kernels. K64 will run 32-bit applications, but it will not run 32-bit kernel extensions (KEXTs), so these must be ported to K64 to be able to load. XNU in OS X Mountain Lion, v10.8, and later only provides a 64-bit kernel. I/O Kit I/O Kit is the device driver framework, written in a subset of C++ based on Embedded C++. Using its object-oriented design, features common to any class of driver are provided within the framework, allowing device drivers to be written in less time and with less code. The I/O Kit is multi-threaded, symmetric multiprocessing (SMP)-safe, and allows for hot-pluggable devices and automatic, dynamic device configuration. Many drivers can be written to run from user space, which further enhances the stability of the system. If a user-space driver crashes, it will not crash the kernel. However, if a kernel-space driver crashes, it will crash the kernel. Examples of kernel-space drivers include disk adapter and network adapter drivers, graphics drivers, drivers for Universal Serial Bus (USB) and FireWire host controllers, and drivers for virtual machine software such as VirtualBox, Parallels Desktop for Mac, and VMware Fusion. See also Kernel (operating system) A/UX mkLinux OSF/1 Darwin (operating system) – open source operating system released by Apple, Inc., with XNU as kernel macOS – operating system released by Apple, Inc., with XNU as kernel References External links at Apple Open Source Browser, official mirror – an overview of the components of XNU, written by Amit Singh in December 2003 Inside the Mac OS X Kernel – "This talk intends to clear up the confusion by presenting details of the Mac OS X kernel" Mach (kernel) Monolithic kernels MacOS Software using the Apple Public Source License
Operating System (OS)
665
PlayStation 4 system software The PlayStation 4 system software is the updatable firmware and operating system of the PlayStation 4. The operating system is Orbis OS, based on FreeBSD 9. Technology System The native operating system of the PlayStation 4 is Orbis OS, a fork of FreeBSD version 9.0, which was released on January 12, 2012. The PlayStation 4 features two graphics APIs, a low-level API named Gnm and a high-level API named Gnmx. Most developers start with Gnmx, which wraps around Gnm, which in turn manages the more esoteric GPU details. This can be a familiar way to work for developers used to platforms like Direct3D 11. Another key area of the system is its programmable pixel shaders. Sony's own PlayStation Shader Language (PSSL) was introduced on the PlayStation 4. It has been suggested that the PlayStation Shader Language is very similar to the HLSL standard in DirectX 11, with just subtle differences that could be eliminated for the most part through preprocessor macros. Besides the kernel and related components, other notable components included are Cairo, jQuery, Lua, Mono, OpenSSL, WebKit, and the Pixman rendering library. Many of these are open-source software, although the PlayStation 4 is not an open console. The Software Development Kit (SDK) is based on LLVM and Clang, which Sony chose due to their conformant C and C++ front-ends, C++11 support, compiler optimization and diagnostics. Graphical shell The PlayStation 4 uses the PlayStation Dynamic Menu as its graphical shell, in contrast to the XrossMediaBar (XMB) used by the PlayStation Portable and PlayStation 3, as well as the LiveArea used by the PlayStation Vita and PlayStation TV. It is named "Dynamic Menu" because the options it offers to players are context-sensitive, changing based on what a player is actually doing with their PlayStation 4 at any given time. This makes navigation simpler than in the previous iteration. The dynamic menu can alter itself so that there is as little time as possible between the user placing a game in the disc drive and the actual gameplay beginning. The PlayStation 4's user interface treats simplicity as a priority. The main place for entertainment options, the Content area, is prominently displayed with large square icons on a horizontal line arranged by most recent use. Users can scroll through the news feed in an alternating, brick-like formation reminiscent of the social media site Pinterest. Many items display additional information when the cursor is placed on them. A game may have news updates or advertisements for its downloadable content. Recently played games receive tiles, along with a number of mandatory items such as the Live from PlayStation and Internet Browser applications. Content icon customization and options for sorting give players a way to mold the display to better suit their needs. Augmented Reality The augmented reality application The Playroom comes pre-installed with the PlayStation 4 console. It was demonstrated at E3 2013 and utilizes the Sony PlayStation Camera technology. According to Sony, it is a "fantastically fresh augmented reality entertainment experience", created by combining the light bar located on the front of the DualShock 4 controller with the PlayStation Camera. Players can summon a small floating robot called Asobi, who interacts with the players, scans their faces and shoots fireballs.
Once the PlayStation Camera identifies the player with the help of the light bar on the front of the controller, a flick on the touchpad of the DualShock 4 brings up the augmented reality Bots function of the Playroom, which creates the illusion that there are hundreds of little bots inside the controller; these can be released simply with a tap on the touchpad. PS4 owners can also use their smartphone or PlayStation Vita to draw an object and flick it into the scene for the augmented reality Bots to play with. Remote Play and second screen Through Remote Play users can operate their PS4 through the use of a PlayStation Vita handheld game console, allowing for the play of PS4 games and other media on the small device via streaming. All games that do not require the PlayStation Move or PlayStation Camera are compatible. The second screen can be used to display unique content when playing games that support this option, but it should not be confused with a split-screen. The second screen may be used to show extra content like maps, alternate camera angles, radar or even playbooks in sports games. Apart from the PlayStation Vita, other mobile devices such as iPads or Android tablets can also be used as a second screen. That comes in the form of both the official PlayStation App and game companion apps such as Knack's Quest. Social features A heavy emphasis on social features has been placed on the PlayStation 4 console, loading up the PS4 with a number of share-centric apps and features. The [What's New] feature, which allows users to check out their friends' latest activities via a landing page full of their pictures, trophies and other recent events, is an easy way to find out what friends have been up to. Meanwhile, a cross-chat feature dubbed [Party Chat] is an interesting way to keep in touch. This gives gamers the ability to chat with other users whether or not they are playing the same title. The PS4's sharing capabilities add another layer to console gaming. PS4 owners are able to capture or livestream their gameplay with a simple button touch. They can record up to 60 minutes of their latest gaming exploits with a quick press of the Share button on the controller. Footage can be shared on Facebook, Twitter and YouTube. They also have the ability to broadcast their gameplay in real time to Twitch and Ustream in addition to recording videos. There are also other social features, such as community creation, some of which were introduced via system updates. Favorite Groups is a new section within the Friends app, and acts as a way to quickly access the other people a user plays with the most. This feature is aimed at making it easier and faster to get into a game session with friends. Communities, on the other hand, are new hubs that can be formed around shared interests like games, activities, or play styles. There are also other, smaller social features on the PS4, such as the ability to message a friend with a request to watch their gameplay live. Internet features While the PlayStation 4 console can function without an Internet connection, it provides more functionality when it is connected to the Internet. For example, updates to the system software may be downloaded from the Internet, and users may play online when the Internet is properly connected. Online play is a main pillar of the PlayStation 4, but a PlayStation Plus subscription is required to play the majority of PS4 titles online, unlike on the PlayStation 3.
According to Sony, it is developing many new ways to play and connect on the PS4, which requires a large investment of resources. As a result, Sony claims it cannot keep such a service free and maintain its quality at the same time considering the cost, and it thus decided that it would be better to charge a fee in order to continue to offer a good service. The web browser included in the PlayStation 4 console is based on the open source WebKit layout engine, unlike the PlayStation 3, which uses the NetFront browser. Using the same modern WebKit core as Apple's Safari, the PS4 web browser receives a very high score in HTML5 compliance testing. However, just like other major browsers, it does not support the deprecated Adobe Flash, which means that websites which still require Flash might not display properly or function as intended. The PDF format is also not supported. One clear advantage for gamers, however, is being able to cut between gaming and browsing and back again with no loss of gameplay, thanks to the multitasking feature of the web browser. Additionally, while the PS4 web browser has limited support for USB keyboards, it does not seem to support USB mice at all. Furthermore, with an Internet connection enabled the PlayStation 4 allows users to access a variety of PlayStation Network (PSN) services, including the PlayStation Store, the PlayStation Plus subscription service, and more. Users may download or buy games and other content from these services. Gamers are also able to play a selection of PS3 titles via the Internet-based PlayStation Now gaming service. PS Now is a cloud-based gaming subscription service from Sony Interactive Entertainment. Multimedia features The PlayStation 4 supports playing standard 12-centimeter DVD-Video discs; DVD recordable and rewritable discs except those that have not been finalized; and standard Blu-ray discs with the exception of Blu-ray Recordable Erasable version 1.0 discs and Blu-ray XL format discs. Unlike all previous PlayStation consoles, the system does not support the Compact Disc format at all, including Compact Disc Digital Audio and Video CD format discs. Blu-ray 3D support was added in system software version 1.75. In 2015, Sony partnered with Spotify to bring the music streaming service to the PlayStation 4, allowing music to be streamed in the background of any game or application for both free and premium members of Spotify. The Media Player application included with the system can be used to play media files on USB storage devices or media servers. According to the PlayStation 4's user guide, supported media formats include videos contained in the MP4, MKV and AVI formats and encoded in H.264 AVC High Profile 4.2, MPEG4 ASP, MPEG2 Visual, AVCHD, and XAVC S; audio encoded in MP3, FLAC, AAC, or AC-3 (Dolby Digital); and photos encoded in JPEG, BMP, or PNG. The PlayStation 4 Pro also supports videos encoded in H.264 AVC High Profile 5.2. If users have a PlayStation VR headset connected, they can view 360-degree videos inside the headset. Backward compatibility The PlayStation 4 was not backward compatible with any games from previous PlayStation consoles at launch. Though PlayStation 4 users cannot play PlayStation 3 games directly, in 2014 the PlayStation Now cloud-based streaming service allowed for the streaming of selected PS3 games.
In December 2015, Sony added PlayStation 2 backward compatibility and republished some PS2 games, such as Dark Cloud, Grand Theft Auto III, Grand Theft Auto: Vice City, and Grand Theft Auto: San Andreas, on the PS4 via the PlayStation Store in the Americas and Europe. Supported PS2 games run via software emulation (upscaled to high definition) on PS4 systems rather than having been remastered. Each one has been updated to access various PS4 features, including Trophies, Share Play, Broadcasting, Remote Play and second-screen features. However, the original PS2 game discs and the PS2 Classics re-released for the PS3 are not compatible with the PS4 system. History of updates The initial version of the system software for the PlayStation 4 is 1.01, as pre-installed on the original consoles. Support for the Remote Play and second screen experiences was added in version 1.50, which was launched on the same day the PlayStation 4 console itself was released in North America, November 15, 2013. Both features are accessible from the PlayStation Vita console by using its PS4 Link application, and the second screen functionality is also accessible from smartphones and tablets through the PlayStation Mobile app. Version 1.50 is also able to record or share video clips as well as broadcast gameplay to Twitch or Ustream, and it supports Blu-ray and DVD-Video playback. Version 1.60 was released on February 4, 2014, improving DVD playback; it also adds support for Pulse Elite wireless headsets. Version 1.70 was released on April 30, 2014, and adds a number of new features, such as a rich video editor called ShareFactory that offers users the tools to combine, edit and personalize captured video clips. This update also adds the abilities to share video clips and screenshots while streaming, and to copy video clips and screenshots to USB storage. Version 1.75 was released on July 29, 2014, further adding support for playback of Blu-ray 3D. It also improves the sound quality during 1.5-speed playback with Blu-ray and DVD video. Version 1.76, released on September 2, 2014, came with minor changes and was the last update until version 2.0. Released on October 28, 2014, version 2.00 is a major upgrade to the PlayStation 4 system software. Among the features introduced is Share Play, which allows PlayStation Plus users to invite an online friend to join their play session via streaming, even if they do not own a copy of the game. Users can pass control of the game entirely to the remote user, or partake in cooperative multiplayer as if they were physically present. This version also adds a YouTube app and the ability to upload video clips to YouTube, and users can now play music stored on USB storage devices. Also, with the support for custom themes and the ability to change the background color, users can set themes for home screens and function screens for each user in this version. Version 2.50 was released on March 26, 2015, adding a suspend/resume feature to allow players to jump in and out of games with the PS button; games are suspended in the low-power Rest Mode instead of closing completely. This version also allows the console's hard drive to be backed up or restored to a USB flash drive. On September 30, 2015, Sony released PS4 update 3.00. It introduced "entirely new features" and user-interface enhancements.
Among the new features was the ability to share videos directly to Twitter, a dedicated PlayStation Plus section, tweaks to the interface for streaming on YouTube, improvements to social features such as messages and group creation, and the ability to save screenshots as PNGs. An increase in online storage capacity from 1 GB to 10 GB was also introduced for PlayStation Plus members. Sony stated that this update would create "new ways to connect with friends and players around the world, expanding the social capabilities of the system even further". On April 6, 2016, Sony released PS4 update 3.50, which enabled the PS4 to use Remote Play functionality on Windows and macOS (formerly named OS X). VG247 reported that the update would allow Remote Play functionality on computers running Windows 8.1, Windows 10, OS X Yosemite, and OS X El Capitan. Furthermore, the article explains that Remote Play would support resolution options of 360p, 540p, and 720p, frame rate options of 30 FPS and 60 FPS, and that one DualShock 4 controller can be connected via the computer's USB port. On September 13, 2016, Sony released PS4 update 4.00, which added High Dynamic Range (HDR) and home screen folder support, 1080p streaming, tweaks to menus and game info screens for greater overview, and streamlined interfaces. On March 9, 2017, Sony released the next major firmware update, version 4.50. The update includes support for installing applications on external hard drives, custom wallpapers, a refined Quick Menu, a simplified notifications list, custom status updates in What's New, and 3D Blu-ray support for the PlayStation VR. It also includes support for the preloading of game patches, though it is up to the developer to make use of it; the first game to take advantage of this feature was LittleBigPlanet 3. On October 3, 2017, Sony released PS4 update 5.00. Overhauling the master/sub-account system, the update allows for more customization of accounts for family members and roles, and applying parental controls to each account. The groups system is replaced with a new friends management system, along with support for 5.1 and 7.1 surround sound configurations for PlayStation VR. A new tournament bracket viewer was added, along with tweaks to broadcasting (with 1080p streaming at 60 frames per second on Twitch now possible), and other changes to PS Messages, notifications, and the quick menu. Lastly, it introduces localization for the Czech, Greek, Hungarian, Indonesian, Romanian, Thai, and Vietnamese languages. Version 5.50 was released on March 8, 2018. It includes playtime restrictions for child accounts, the ability to hide applications from the library, custom wallpapers via USB, a supersampling mode on the PS4 Pro and the ability to delete notifications. Version 5.53 was released on April 12, and version 5.55 was released on May 17, 2018; both only include updates to improve system performance. On September 13, 2018, Sony released PS4 update 6.00, which improved system performance. Version 6.50, released on March 7, 2019, allowed users to use Remote Play on iOS devices through the Remote Play app from the App Store, along with some other minor improvements. On May 10, 2019, Sony Interactive Entertainment added the ability to remove purchased games from the Download List for the PlayStation Store and to delete games from My Profile. On October 8, 2019, Sony released PS4 update 7.00. Its major new feature was the ability to use Remote Play on select Android devices running Android Lollipop and above.
In addition, other features include chat transcription and an expanded party size limit, from 8 to 16 users. On December 19, 2019, Sony released update 7.02 with the goal of improving system performance and stability. On October 14, 2020, Sony released PS4 update 8.00. It brought changes to existing Party and Message features. Other additions include new avatars, an enhanced 2-step verification system and updated parental controls. On April 14, 2021, Sony released PS4 update 8.50. On September 15, 2021, Sony released PS4 update 9.00. On December 1, 2021, Sony released PS4 update 9.03. On February 19, 2022, Sony released PS4 update 9.04. See also Other gaming platforms from Sony: PlayStation 3 system software PlayStation Vita system software PlayStation Portable system software Other gaming platforms from this generation: Wii U system software Xbox One system software Nintendo 3DS system software Nintendo Switch system software Other gaming platforms from the seventh generation: Wii system software Xbox 360 system software Nintendo DSi system software References Software Game console operating systems 2013 software Proprietary operating systems Unix variants
Operating System (OS)
666
Systeminfo.exe In computing, systeminfo is a command-line utility included in Microsoft Windows versions from Windows XP onwards and in ReactOS. Overview The command produces summary output of hardware and software operating environment parameters. The detailed configuration information it reports about the computer and its operating system includes data on the operating system configuration, security information, the product ID, and hardware properties such as RAM, disk space, and network cards. The ReactOS version was developed by Dmitry Chapyshev and Rafal Harabien. It is licensed under the GPL. Syntax The command-syntax is: systeminfo[.exe] [/s Computer [/u Domain\User [/p Password]]] [/fo {TABLE|LIST|CSV}] [/nh] For example, systeminfo /s server01 /fo CSV /nh (with server01 standing in for an actual computer name) queries the configuration of a remote computer and prints it as comma-separated values without column headers. See also System profiler System Information (Windows) References Further reading External links systeminfo | Microsoft Docs Console applications Utilities for Windows Windows administration
Operating System (OS)
667
Olivetti P6066 The Olivetti P6066 was a personal computer programmable in a version of BASIC owned by Olivetti and integrated into the operating system. Description It was identical to the Olivetti P6060 in mechanical design; however, the color (white) and the performance were different. It was an improved version of the P6060, from which it was possible to upgrade. Development was headed by Pier Giorgio Perotto, and the production site was Scarmagno. External links Retro Computer museum, Zatec, Czech Republic video Archivio Olivetti Olivetti personal computers Computer-related introductions in 1975
Operating System (OS)
668
Pop!_OS Pop!_OS is a free and open-source Linux distribution, based upon Ubuntu, and featuring a GTK-based desktop environment known as COSMIC, which is based on GNOME. The distribution is developed by American Linux computer manufacturer System76. Pop!_OS is primarily built to be bundled with the computers built by System76, but can also be downloaded and installed on most computers. Pop!_OS provides full out-of-the-box support for both AMD and Nvidia GPUs. It is regarded as an easy distribution to set up for gaming, mainly due to its built-in GPU support. Pop!_OS provides default disk encryption, streamlined window and workspace management, keyboard shortcuts for navigation as well as built-in power management profiles. The latest releases also have packages that allow for easy setup of TensorFlow and CUDA. Pop!_OS is maintained primarily by System76, with the release version source code hosted in a GitHub repository. Unlike many other Linux distributions, it is not community-driven, although outside programmers can contribute, view and modify the source code. They can also build custom ISO images and redistribute them under another name. Features Pop!_OS primarily uses free software, with some proprietary software used for hardware drivers for Wi-Fi, discrete GPUs and media codecs. It comes with a wide range of default software, including LibreOffice, Firefox and Geary. Additional software can be downloaded using the Pop!_Shop software store. Pop!_OS uses APT as its package manager and initially did not use Snaps or Flatpak, but Flatpak support was added in version 20.04 LTS. Software packages are available from the Ubuntu repositories, as well as Pop!_OS's own repositories. Pop!_OS features a customized GNOME Shell interface, with a Pop!_OS theme. There is a GUI toggle in the GNOME system menu for switching between different video modes on dual-GPU laptops. There are three display modes: hybrid, discrete and iGPU only. There is a power management package developed from the Intel Clear Linux distribution. Pop!_OS uses Xorg as its display server, with Wayland available optionally, as Ubuntu has done. Wayland lacks support for proprietary device drivers, in particular Nvidia's, while Xorg is supported. To enable the use of Nvidia's proprietary drivers for best performance and GPU switching, Pop!_OS uses only Xorg to date. TensorFlow and CUDA enabled programs can be added by installing packages from the Pop!_OS repositories without additional configuration required. It provides a Recovery Partition that can be used to 'refresh' the system while preserving user files; it can be used only if it is set up during initial installation. From the 21.04 release, Pop!_OS included a new GNOME-based desktop environment called COSMIC, an acronym for "Computer Operating System Main Interface Components", developed by System76. It features separate views for workspaces and applications, a dock included by default, and supports both mouse-driven and keyboard-driven workflows. System76 stated it will be creating a new desktop environment not based on GNOME. This desktop environment will be written in Rust and developed to be similar to the COSMIC desktop used since version 21.04. System76 cites limitations with GNOME extensions, as well as disagreements with GNOME developers on the desktop experience, as reasons to build a new desktop environment. Installation Pop!_OS provides two ISO images for download: one with AMD video drivers and another with Nvidia drivers.
The appropriate ISO file may be downloaded and written to either a USB flash drive or a DVD using tools such as Etcher or UNetbootin. Pop!_OS initially used an Ubuntu-themed installer and later switched to a custom installer built in partnership with elementary OS; the Pop!_Shop software store comes pre-installed with Pop!_OS. Release history 17.10 Prior to offering Pop!_OS, System76 had shipped all its computers with Ubuntu pre-installed. Development of Pop!_OS commenced in 2017, after Ubuntu decided to halt development of Unity and move back to GNOME as its desktop environment. The first release of Pop!_OS was 17.10, based upon Ubuntu 17.10. In a blog post explaining the decision to build the new distribution, the company stated that there was a need for a desktop-first distribution. The first release was a customized version of Ubuntu GNOME, with mostly visual differences. Some different default applications were supplied and some settings were changed. The initial Pop theme was a fork of the Adapta GTK theme, plus other upstream projects. 17.10 also introduced the Pop!_Shop software store, which is a fork of the elementary OS app store. Bertel King of Make Use Of reviewed version 17.10 in November 2017 and noted, "System76 isn’t merely taking Ubuntu and slapping a different name on it." King generally praised the release, but did fault the "visual inconsistencies" between applications that were optimized for the distribution and those that were not, and the application store, Pop!_Shop, as incomplete. For users who may want to try it on existing hardware, he concluded, "now that Ubuntu 17.10 has embraced GNOME, that’s one less reason to install Pop!_OS over Ubuntu." 18.04 LTS Version 18.04 added power profiles, easy GPU switching (especially for Nvidia Optimus-equipped laptops), HiDPI support, full disk encryption and access to the Pop!_OS repository. In 2018, reviewer Phillip Prado described Pop!_OS 18.04 as "a beautiful looking Linux distribution". He concluded, "overall, I think Pop!_OS is a fantastic distribution that most people could really enjoy if they opened up their workflow to something they may or may not be used to. It is clean, fast, and well developed. Which I think is exactly what System 76 was going for here." 18.10 Version 18.10 was released in October 2018. It included a new Linux kernel, an updated graphics stack, theme changes and updated applications, along with improvements to the Pop!_Shop software store. 19.04 Version 19.04 was mostly an incremental update, corresponding to the same Ubuntu version. It incorporated a "Slim Mode" option to maximize screen space by reducing the height of application window headers, a new dark mode for nighttime use and a new icon set. Joey Sneddon of OMG! Ubuntu! reviewed Pop!_OS 19.04 in April 2019 and wrote, "I don’t see any appreciable value in Pop OS. Certainly nothing that would make me recommend it over regular Ubuntu 19.04 ..." 19.10 In addition to incremental updates, version 19.10 introduced Tensorman, a custom TensorFlow toolchain management tool, multilingual support and a new theme based on Adwaita. In a 2019 comparison between Pop!_OS and Ubuntu, Ankush Das of It's FOSS found that while both distributions have their advantages, "the overall color scheme, icons, and the theme that goes on in Pop!_OS is arguably more pleasing as a superior user experience." 20.04 LTS Pop!_OS 20.04 LTS was released on 30 April 2020 and is based upon Ubuntu 20.04 LTS.
It introduced selectable auto-tiling, expanded keyboard shortcuts and workspaces management. It also added Pop!_Shop application store support for Flatpak and introduced a "hybrid graphics mode" for laptops, allowing operation using the power-saving Intel GPU and then providing switching to the Nvidia GPU for applications that require it. Firmware updates became automatic and operating system updates could be downloaded and later applied while off-line. In examining the Pop!_OS 20.04 beta, FOSS Linux editor Divya Kiran Kumar noted, "with its highly effective workspaces, advanced window management, ample keyboard shortcuts, out-of-the-box disk encryption, and myriad pre-installed apps. It would be an excellent pick for anyone hoping to use their time and effort effectively." Jason Evangelho, writing in FOSS Linux in January 2020, pronounced it the best Ubuntu-based distribution. A review of Pop!_OS 20.04 by Ankush Das in It's FOSS in May 2020 termed it "the best Ubuntu-based distribution" and concluded, "with the window tiling feature, flatpak support, and numerous other improvements, my experience with Pop!_OS 20.04 has been top-notch so far." OMG! Ubuntu! reviewer Joey Sneddon wrote of Pop!_OS 20.04, "it kinda revolutionises the entire user experience". He further noted, "The fact this distro doesn't shy away from indulging power users, and somehow manages to make it work for everyone, underlines why so-called 'fragmentation' isn't a bad thing: it's a chameleonic survival skill that allows Linux to adapt to whatever the task requires. It is the T-1000 of computing, if you get the reference. And I can't lie: Ubuntu could really learn a few things from this approach." In a 19 October 2020 review in FOSS Bytes, Mohammed Abubakar termed it "The Best Ubuntu-based Distro!" and said it is "an Ubuntu-based Linux distro that strikes a perfect balance between being beginner-friendly and professional or gaming use". 20.10 Pop!_OS 20.10 was released on 23 October 2020 and is based upon Ubuntu 20.10. It introduced stackable tiled windows and floating window exceptions in auto-tiling mode. Fractional scaling was also introduced, as well as external monitor support for hybrid graphics. Beta News reviewer Brian Fagioli in particular praised the availability of fractional scaling and stacking and noted, "what the company does with Pop!_OS, essentially, is improve upon Ubuntu with tweaks and changes to make it even more user friendly. Ultimately, Pop!_OS has become much better than the operating system on which it is based." 21.04 Pop!_OS 21.04 was released on 29 June 2021 and is based upon Ubuntu 21.04. It included the COSMIC (Computer Operating System Main Interface Components) desktop, based on GNOME, but with a custom dock and shortcut controls. Writing in OMG Ubuntu, Joey Sneddon noted, "COSMIC puts a dock on the desktop; separates workspace and applications into individually accessible screens; adds a new keyboard-centric app launcher (that isn’t trying to search all the things™ by default); plumbs in some much-needed touchpad gestures; and — as if all of that wasn’t enough — makes further refinements to its unique window tiling extension (which you’re free to toggle on/off at any point)." He continued, "Pop!_OS 21.04 is sort of what Ubuntu could — some might say ‘should’ — be: a distro that doesn’t patronise its potential users by fixating on an idealised use case drawn up in a meeting.
COSMIC wants to help its users work more efficiently on their terms, not impose a predetermined workflow upon them." 21.10 Pop!_OS 21.10 was released on 14 December 2021 and is based upon Ubuntu 21.10. It includes GNOME 40, a new "Vertical Overview" extension, a new Applications menu and support for the Raspberry Pi. Release table Pop!_OS is based upon Ubuntu and its release cycle is the same as Ubuntu's, with new releases every six months, in April and October. Long term support releases are made every two years, in April of even-numbered years. Each non-LTS release is supported for three months after the release of the next version, similar to Ubuntu. Support for LTS versions is provided until the next LTS release. This is considerably shorter than Ubuntu, which provides five years of support for LTS releases. See also Debian List of Ubuntu-based distributions References External links Official website Pop!_OS at DistroWatch 2017 software Computer-related introductions in 2017 Free software operating systems Linux distributions Ubuntu derivatives X86-64 Linux distributions
Operating System (OS)
669
Openmoko Linux Openmoko Linux is an operating system for smartphones developed by the Openmoko project. It is based on the Ångström distribution, comprising various pieces of free software. The main targets of Openmoko Linux were the Openmoko Neo 1973 and the Neo FreeRunner. Furthermore, there were efforts to port the system to other mobile phones. Openmoko Linux was developed from 2007 to 2009 by Openmoko Inc. The development was discontinued because of financial problems. Afterwards the development of software for the Openmoko phones was taken over by the community and continued in various projects, including SHR, QtMoko and Hackable1. Components Openmoko Linux uses the Linux kernel, GNU libc and the X.Org Server, plus its own graphical user environment built using the EFL, GTK+ and Qt toolkits and the Illume window manager (previously the Matchbox window manager). The OpenEmbedded build framework and the opkg package management system are used to create and maintain software packages. This is a very different approach from that of Android (in which everything except Linux, WebKit, and the Java language seems non-standard). Applications targeted at Android must be substantially rewritten and are largely not portable, whereas many existing Linux desktop apps can be easily ported to Openmoko. (However, the limited computational power and screen resolution require substantial reworking of existing applications in order to render them usable in a finger-oriented, small-screen environment.) See also List of free and open source Android applications References External links Smartphones Mobile operating systems Embedded Linux Openmoko Free mobile software Linux distributions
Operating System (OS)
670
Windows Server Update Services Windows Server Update Services (WSUS), previously known as Software Update Services (SUS), is a computer program and network service developed by Microsoft Corporation that enables administrators to manage the distribution of updates and hotfixes released for Microsoft products to computers in a corporate environment. WSUS downloads these updates from the Microsoft Update website and then distributes them to computers on a network. WSUS is an integral component of Windows Server. History The first version of WSUS was known as Software Update Services (SUS). At first, it only delivered hotfixes and patches for Microsoft operating systems. SUS ran on a Windows Server operating system and downloaded updates for the specified versions of Windows from the remote Windows Update site, which is operated by Microsoft. Clients could then download updates from this internal server, rather than connecting directly to Windows Update. Support for SUS by Microsoft was originally planned to end on 6 December 2006, but based on user feedback, the date was extended to 10 July 2007. WSUS builds on SUS by expanding the range of software it can update. The WSUS infrastructure allows automatic downloads of updates, hotfixes, service packs, device drivers and feature packs to clients in an organization from a central server or servers. Operation Windows Server Update Services 2.0 and above operate on a repository of update packages from Microsoft. It allows administrators to approve or decline updates before release, to force updates to install by a given date, and to produce extensive reports on which updates each machine requires. System administrators can also configure WSUS to approve certain classes of updates automatically (critical updates, security updates, service packs, drivers, etc.). One can also approve updates for detection only, allowing an administrator to see which machines will require a given update without also installing that update. WSUS may be used to update computers on a disconnected network. This requires exporting patch data from a WSUS server connected to the internet and, using removable media, importing it to a WSUS server set up on the disconnected network. Administrators can use WSUS with Group Policy for client-side configuration of the Automatic Updates client, ensuring that end-users can't disable or circumvent corporate update policies. WSUS does not require the use of Active Directory; client configuration can also be applied by Local Group Policy or by modifying the Windows registry. WSUS uses the .NET Framework, Microsoft Management Console and Internet Information Services. WSUS 3.0 uses either SQL Server Express or Windows Internal Database as its database engine, whereas WSUS 2.0 uses WMSDE. System Center Configuration Manager (SCCM) interoperates with WSUS and can import third-party security updates into the product. Licensing WSUS is a feature of the Windows Server product and therefore requires a valid Windows Server license for the machine hosting the service. The fact that user workstations authenticate themselves on the WSUS service to retrieve their updates makes it necessary to acquire a fileserver client access license (CAL) for each workstation connecting to the WSUS service. The fileserver CAL for WSUS is the same CAL required for connecting to a Microsoft Active Directory, file server or print server, and has to be acquired once per device or user.
WSUS is often considered a free product because fileserver CALs are already paid for in an enterprise network that has a Microsoft Active Directory and thus do not need to be acquired again. In a network using Samba Active Directory, it is not necessary to purchase CALs to connect to the domain controller or to a Samba file server. However, the use of a WSUS server will still require the purchase of client access licenses for all Windows workstations that will connect to the WSUS server. Version history References External links on Microsoft Docs WSUS Product Team Blog Windows Server Microsoft server technology Patch utilities
Operating System (OS)
671
Open Source Seed Initiative The Open Source Seed Initiative (OSSI) is an organization that developed and maintains a mechanism through which plant breeders can designate the new crop varieties they have bred as open source. This mechanism is advanced as an alternative to patent-protected seeds sold by large agriculture companies such as Monsanto or DuPont. OSSI is a U.S.-based not-for-profit 501(c)(3) organization focusing on establishing a protected commons of open source varieties, on educational and outreach activities associated with the development of this open source seed commons, and on seed rights and issues related to the control of seed. The OSSI was founded in 2012 by a group of plant breeders, farmers, and seed companies. Founders include Jack Kloppenburg, Irwin Goldman, Claire Luby, Thomas Michaels, Frank Morton, Jonathan Spero, Alejandro Argumedo, and Jahi Chappell. Tom Stearns was an early supporter and advisor to the OSSI founders. Carol Deppe and C.R. Lawn joined the OSSI board of directors in its early stages, providing invaluable contributions from the freelance breeding community and the seed industry. OSSI is governed by a board of directors and includes 36 plant breeders and 46 seed company partners in its work. Members of the group are unhappy with the patenting of plant varieties, as they say the patenting of seeds restricts plant breeders' freedom and increases the power of large seed companies. Taking inspiration from open source software, the OSSI seeks to create a "protected commons" of open-source seed varieties as an alternative to patented or otherwise legally restricted seeds. At first the OSSI tried to draft a legally defensible license, but it found that the principle of software licenses did not translate easily to plants, as a license on plant seeds would need to carry over to each new generation of plants, quickly creating a huge amount of legal work. The OSSI eventually decided to use an informal Pledge printed on every seed packet or transmitted along with the seed, both for simplicity and because it felt this less restrictive approach was more in line with the goals of the project. Pledge and mission The Open Source Seed Initiative Pledge asks farmers, gardeners, and plant breeders who use the seed to refrain from patenting or licensing the seed or derivatives from it, and to pass on the Pledge to any derivatives made. The Pledge states: "You have the freedom to use these OSSI-Pledged seeds in any way you choose. In return, you pledge not to restrict others' use of these seeds or their derivatives by patents or other means, and to include this Pledge with any transfer of these seeds or their derivatives." Use of the Pledge ensures the four open source seed freedoms for this and future generations, including: The freedom to save or grow seed for replanting or for any other purpose. The freedom to share, trade, or sell seed to others. The freedom to trial and study seed and to share or publish information about it. The freedom to select or adapt the seed, make crosses with it, or use it to breed new lines and varieties. OSSI's mission bears some similarities to the mission of organizations such as Seed Savers Exchange, but it is different in that OSSI provides an explicit Pledge with its seeds that is designed to keep seeds free through the establishment of a protected commons. OSSI differs from plant breeders' rights and plant variety protection in that the Pledge allows recipients to do anything they want with the seed except restrict it.
In addition, it automatically extends the Pledge to new varieties developed from OSSI-Pledged parents. OSSI does not, however, Pledge heirlooms or indigenous varieties; it only Pledges varieties contributed and Pledged by their breeders. OSSI involves plant breeders and seed company partners in its mission. First, OSSI works with plant breeders who commit to making one or more of their varieties available exclusively under the OSSI Pledge. OSSI Partner Seed Companies sell OSSI-Pledged varieties, acknowledge the OSSI breeders in their variety descriptions, label OSSI-Pledged varieties with the OSSI logo, and include the Pledge and information about OSSI in their catalogs and on their websites. The Seed List of OSSI-Pledged varieties gives complete descriptions and photos for each OSSI-Pledged variety and links to every OSSI Partner Seed Company that carries each variety. OSSI also places articles in magazines for gardeners and farmers during the seed ordering season so as to attract visitors to its website and channel those visitors to its seed company partners, where they can buy the seed. See, for example, Carol Deppe's article in the January issue of Acres/USA, "Thirty-three Great Open-Source Organic-Adapted Vegetable Varieties". OSSI thus creates a market for ethically produced, "freed seed" analogous to the markets for "fair trade" and "organic" products. Influences and early history The work of University of Wisconsin sociologist Jack Kloppenburg, particularly his book First the Seed: The Political Economy of Plant Biotechnology, 1492-2000 (2nd ed.), influenced the development of OSSI. Originally published in 1990, then updated in 2000, this book chronicles the vast changes in seed sovereignty that took place during the 20th century through the rise of modern plant breeding approaches, the expansion of intellectual property rights, and emerging crop biotechnologies. Kloppenburg's work explored the global consequences of legal control over crop seeds during an era of heavy consolidation in the seed industry. Kloppenburg himself was inspired by the work of writers and activists Pat Roy Mooney, a Canadian, and Cary Fowler, an American, who began engaging with issues of the public versus private ownership of seed and genetic resources in the 1970s. While some of these views were criticized by plant breeders in the 1980s and 1990s, Kloppenburg argued that the expansion of intellectual property rights over crop genetic resources, including cultivars, genes, and plant traits, is an issue of concern for the future of global agriculture. Kloppenburg documented the ever-increasing encroachments upon the traditional rights of farmers, gardeners, and plant breeders to save, replant, share, or breed with seed, encroachments which forced plant breeders and all others interested in crop genetic resources to confront the degree to which they had lost or were losing "freedom to operate" with their seeds. (This term "freedom to operate" has come to mean the degree to which seeds and the genes contained in those seeds can be freely used by breeders, gardeners, farmers, and seed producers without legal restriction.) One of the consequences of an increasing global awareness of the finite nature of crop genetic resources and the debate over ownership of these resources has been the establishment of gene banks and seed banks in many countries, notably Fowler's recent efforts to develop the Svalbard Global Seed Vault off the coast of Norway in the Arctic Svalbard Archipelago.
Kloppenburg's focus on the ownership and control of those resources still remains one of the most pressing questions for future generations. International efforts, coordinated through the Food and Agriculture Organization of the United Nations and the United Nations Environment Program, and manifest through agreements such as the International Treaty on Plant Genetic Resources for Food and Agriculture and the Convention on Biological Diversity, have sought global solutions to sustainable and fair use of the planet's crop genetic resources. While these efforts are ongoing, significant limitations to global germplasm exchange still challenge and limit their potential gains. Building from these ideas, plant breeder Thomas Michaels proposed a General Public License for Plant Germplasm (GPLPG) in 1999. The objective of the license was to build a pool of shared plant germplasm that could be freely used for breeding new crop varieties. The two key features of this license were 1) that GPLPG varieties were freely available for use as a parent in any breeding program and 2) that new varieties developed using one or more GPLPG parents must also be designated as GPLPG. The license was explicitly modeled on the General Public License that had been developed by Richard Stallman and others in the computer software community. The General Public License for Plant Germplasm was the first license of its kind to treat the plant genotype as if it were computer source code that can be freely used in a new program (crop variety) so long as the new program (crop variety) is also designated as General Public License. The Open Source Seed Initiative, founded some 15 years later, creates a seed commons involving a method of germplasm exchange based upon a Pledge. Any user can gain access to the germplasm and use it for any purpose, as long as they pledge not to restrict others' use of this same germplasm and to pass the Pledge along if they share or sell the germplasm. OSSI provides maximal freedom to operate for those who wish to save, replant, share, sell, trade, breed, and otherwise innovate with seeds. Release of OSSI varieties and current activity In April 2014, the OSSI released its first 36 open-source seed varieties. NPR opined in its 2014 report that large seed companies would be unlikely to use open-source seeds, as patented seeds are more profitable, and speculated that farmers might have trouble finding open-source seeds for sale. However, by July 2017, OSSI had over 375 varieties of more than 50 crops, bred by 36 breeders and being sold by 46 seed company partners. While varieties have been contributed by public sector plant breeders at universities and not-for-profit organizations, most OSSI varieties have been contributed by freelance plant breeders and seed companies. On 10 August 2015 an OSSI-Pledged red romaine lettuce variety called 'Outredgeous', bred by farmer-breeder Frank Morton, became the first plant variety to be planted, harvested and eaten entirely in space, as a part of Expedition 44 to the International Space Station. OSSI has also become a topic of academic research in both the biological and social sciences. OSSI appears in The Sociology of Food and Agriculture by Michael Carolan. Several scientific journal articles have explored ideas surrounding open source plant breeding, genetic variation, and intellectual property.
OSSI was highlighted in Rachel Cernansky's piece "How 'Open Source' Seed Producers from the US to India are Changing Global Food Production", originally published in Ensia magazine and reprinted in many different outlets, including Vox and Global Voices. OSSI has developed a relationship with Seed Savers Exchange. Seed Savers' new online and print editions of the Garden Seed Inventory will label all OSSI-Pledged varieties with the OSSI logo and the name of the breeder, and will include the Pledge in the beginning of the book. References External links Horticultural organizations based in the United States Community seed banks
Operating System (OS)
672
Kickstart (Amiga) Kickstart is the bootstrap firmware of the Amiga computers developed by Commodore International. Its purpose is to initialize the Amiga hardware and core components of AmigaOS and then attempt to boot from a bootable volume, such as a floppy disk. Most Amiga models were shipped with the Kickstart firmware stored on ROM chips. Versions Commodore's AmigaOS was formed of both the Kickstart firmware and a software component provided on disk (with the software portion often termed Workbench). For most AmigaOS updates the Kickstart version number was matched to the Workbench version number. Confusingly, Commodore also used internal revision numbers for Kickstart chips. For example, there were several Kickstart revisions designated as version 2.0. Version summary The first Amiga model, the A1000, required that Kickstart 1.x be loaded from floppy disk into a 256 KB section of RAM called the writable control store (WCS). Some A1000 software titles (notably Dragon's Lair) provided an alternative code-base in order to use the extra 256 KB for data. Later Amiga models had Kickstart embedded in a ROM chip, thus improving boot times. Many Amiga 1000 computers were modified to take these chips. Kickstart was stored in 256 KB ROM chips for releases prior to AmigaOS 2.0. Later releases used 512 KB ROM chips containing additional and improved functionality. The Amiga CD32 featured a 1 MB ROM (Kickstart 3.1) with additional firmware and an integrated file system for CD-ROM. Early A3000 models were, like the A1000, also shipped with Kickstart on floppy disk, and used a 1.4 BETA ROM as bootstrap. Either Kickstart 1.3 or 2.0 could be extracted to a partition specifically named WB_1.3 or WB_2.x, respectively, and put in DEVS:kickstart, an absolute system location from where the A3000 would find it at bootstrap and copy its image into RAM. This early A3000 supported both ROM-based Kickstarts and disk-based Kickstarts, although not simultaneously. An A3000 configured to use disk-based Kickstart images had the benefit of being able to boot various versions of AmigaOS without additional tools, simply by selecting the appropriate Kickstart image at boot time. The Commodore CDTV featured additional firmware ROMs which are not technically part of the Amiga Kickstart. The CDTV's original firmware ROMs must be upgraded in order to install a Kickstart version later than 1.3. AmigaOS 2.1 was a pure software update and did not require matching Kickstart ROM chips. Workbench 2.1 ran on all Kickstart ROMs of the 2.0x family. Later releases of AmigaOS (3.5 and 3.9) were also software-only and did not include matching ROM upgrades, instead requiring Kickstart 3.1, with ROM-file based Kickstart components replacing those in ROM. Kickstart modules of AmigaOS 4 are stored on the boot disk partition. Up to Kickstart v2.0 (V36) only 512-byte blocks were supported. The Motorola 68040 uses write caches, which requires the use of the functions CacheClearU() and CacheControl() to flush the cache when program code has been modified. These functions are only available in exec.library V37 or better. Function Upon start-up or reset the Kickstart performs a number of diagnostic and system checks and then initializes the Amiga chipset and some core OS components. It will then check for connected boot devices and attempt to boot from the one with the highest boot priority. If no boot device is present, a screen will be displayed asking the user to insert a boot disk, typically a floppy disk.
Insertion of such a bootable disk (other than a Workbench-like disk) will result in one of the following: a) a command line interface ("CLI") prompt for working with ROM-internal and disk commands (including programs and scripts), if the disk is non-Workbench or empty; b) a basic point-and-click UI named "Workbench", if the disk contains at least "loadwb" in the "startup-sequence" script residing inside the "s" folder on the disk (a minimal example of such a script appears below, under Usage); c) the disk booting into a customized Workbench or an application, keeping the OS "alive" in the background; or d) a game or other application directly starting up, taking over all the hardware resources of the computer by not establishing core Exec multitasking, driver initialization, etc. The Kickstart contains many of the core components of the Amiga's operating system, such as: Exec – the Amiga's multi-tasking kernel Intuition – functionality for GUI, screens, windowing and handling of input/output devices Autoconfig – functionality to automatically initialize or boot from compliant expansion hardware Floppy disk device driver and file system to read and boot from floppy disk DOS library for file access and handling AmigaDOS – Command Line Interface (CLI) functionality and a number of core CLI commands Graphics library for basic drawing and raster graphics functions using the native Amiga chipset Audio device driver for the native Amiga sound hardware Device drivers for the Amiga keyboard and mouse/gameports Kickstart 1.3 is the first version to support booting from a hard disk drive. From AmigaOS release 2.0 onwards Kickstart also contained device drivers to boot from devices on IDE controllers, support for PC Card ports and various other hardware built into Amiga models. Diagnostic test The screen color after power-on shows the result of the self-test; this test is loaded from ROM. If everything is working the following screen color sequence will be displayed: Dark grey – Hardware working and the registers are readable. Light grey – ROM verified. White – Initialization is alright. Ready to boot. These colors indicate a problem: Red – Bad result on Kickstart-ROM test (checksum error). Green – Bad result on chip RAM test. Blue – Custom chip problem (Denise, Paula, Agnus). Yellow – CPU exception occurred; this is an error detected by the CPU itself, such as an illegal instruction or an address bus error, and mostly means a bad CPU or a bad Zorro expansion card; the exception happened before the "Guru Meditation" trapping software was enabled. A bad Paula can also cause a failure screen on Amigas with older Kickstarts, and a CIA problem can halt the sequence: if it stops at grey, the CIA may be defective. Black/stripes/glitching – random code (ROMs swapped/ROM garbage) or CIA problem. Black – No video output; CPU not running. The keyboard LED uses blink codes that come from the keyboard controller chip, where one blink means the keyboard ROM has a checksum error, two blinks means a keyboard RAM failure, and three blinks means a watchdog timer failure. If the Caps Lock key is pressed repeatedly (approximately 10 times) and the Caps Lock LED does not turn on and off with each press, the CPU is not reading out keypresses, which mostly indicates a CPU crash. The CIA-A serial register is used with a CIA interrupt to pick up keypresses from the keyboard buffer. If the Caps Lock LED sticks on or off, the CPU is probably not servicing CIA interrupt requests. Usage In general, to run a specific Workbench version, a Kickstart with a matching or greater version number is required. It is not generally possible to boot directly into the Workbench windowing environment from Kickstart alone.
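The "startup-sequence" script mentioned earlier is the disk-based half of this hand-off from Kickstart to Workbench. As a rough illustration (real Workbench disks first set up assigns, paths and drivers, so this is a sketch rather than a copy of any shipped script), a minimal s/startup-sequence that does nothing but launch the Workbench GUI could read:

LoadWB        ; load the Workbench screen and GUI
EndCLI >NIL:  ; close the boot shell, discarding its output via NIL:

Here LoadWB starts Workbench and EndCLI closes the initial CLI, leaving only the point-and-click environment running, which matches case b) described earlier.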
Usage In general, to run a specific Workbench version, a Kickstart with a matching or greater version number is required. It is not generally possible to boot directly into the Workbench windowing environment from Kickstart alone; though much of the functionality required by Workbench is contained in Kickstart, some disk-based components are needed to launch it. From release 2.0 onwards it is possible to enter a boot menu by holding down both mouse buttons at power-on or reset. This allows the user to choose a boot device, set parameters for backwards compatibility and examine Autoconfig hardware. With third-party software, it is possible to use an alternate Kickstart in place of the version stored in the embedded ROM chip. Such software allows a Kickstart version to be loaded from file into RAM; for example, Kickstart 1.3 may be loaded in order to run old software incompatible with Kickstart 2.0 or later. Several third-party vendors produced hardware Kickstart switchers (dual-boot systems) in the form of socket doublers, allowing two ROM chips to plug into a single motherboard socket with some mechanism to switch between them. These became popular with users who had problems with later Kickstart versions causing incompatibility with earlier software titles. An MMU-enabled Amiga is able to "shadow" Kickstart from the embedded ROM chip (or from file) into RAM and pass control to it at start-up. This is often preferable, as RAM access times are significantly faster than ROM access times, particularly on expanded systems. At subsequent resets the copy of Kickstart is re-used, reducing boot time and allowing faster access and execution of Kickstart functionality. Similar shadowing functions were also developed for some devices without MMU hardware. References AmigaOS Amiga ru:AmigaOS#Kickstart
Operating System (OS)
673
Windows NT booting process The Windows NT booting process is the process by which the Windows NT 4.0, Windows 2000, Windows XP and Windows Server 2003 operating systems initialize. In Windows Vista and later, this process has changed significantly; see Windows NT 6 startup process for information about what has changed. Installer The Windows NT installer works very similarly to a regular Windows NT installation, except that it runs from a CD-ROM. For this boot method to work, the BIOS must be compatible with the El Torito specification. The ISO 9660 file system on the install CD is not fully compatible with the standard: although it is "Level 1", the file names do not have the file version appended to them. The boot image is of the "no emulation" type, one sector long (2048 bytes), and is loaded at segment 0x7c0. It can be extracted from an ISO image by using a file-extraction program such as 7-Zip or WinZip. The ISO image is also not hybridized, unlike ISO images from most Linux distributions, and therefore does not contain a master boot record (MBR), which makes it unable to boot simply by copying the image onto a block device such as a pen drive. The installer can also be run from an MS-DOS command prompt so that previous versions of Microsoft Windows that are already installed can be upgraded. To run the installer from an MS-DOS based operating system such as Windows 98 or Windows ME, the user must start the system "in DOS mode" and then execute I386/WINNT.EXE on the CD-ROM. A floppy disk containing MS-DOS can be used to start the installer. Versions of the installer on floppy disks were also available for sale. From Windows Vista onwards, the installer runs from BOOT.WIM, which contains a bootable version of Windows PE. Windows PE 2.0 is based on the Windows Vista kernel; later Windows PE versions are based on later Windows versions. CD-ROM boot image phase On a regular CD-ROM install, the BIOS executes the POST and then searches for a boot descriptor on the CD-ROM. The boot descriptor points to a boot catalog file on the ISO 9660 file system. The BIOS searches for a boot image compatible with the current architecture, loads it into memory and then runs it. The boot image is analogous to the boot sector on a hard drive. The boot image loads SETUPLDR.BIN, which is analogous to NTLDR. If this fails for any reason, a message is displayed saying that NTLDR was not found, which may itself be misleading; the NTLDR on the CD is never used during the loading phase of the installer. The process also assumes that file versions are unavailable. Before starting the boot loader, the boot image checks whether there is a Windows install (system) already present and, if so, it starts BOOTFIX.BIN. If no install is found, or if the disk does not have an MBR, then it starts the boot loader directly, thus obviating the need for BOOTFIX.BIN. If BOOTFIX.BIN is started, it displays the string "Press any key to boot from CD." and waits for user input. If none is detected for a few seconds, it boots the next device, and so on. This behavior is essential for booting the second stage of the installer, which starts from the hard disk.
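The "no emulation" layout described above can be located programmatically. The following sketch (Python, offered as an illustration under the assumption of a well-formed, single-entry El Torito image; the function name is hypothetical) reads the boot record volume descriptor at sector 17, follows the pointer to the boot catalog, and copies out the boot image:

    import struct

    SECTOR = 2048  # ISO 9660 logical sector size

    def extract_boot_image(iso_path, out_path):
        with open(iso_path, "rb") as f:
            f.seek(17 * SECTOR)                  # boot record volume descriptor
            brvd = f.read(SECTOR)
            if brvd[1:6] != b"CD001" or b"EL TORITO" not in brvd[7:39]:
                raise ValueError("no El Torito boot record found")
            (catalog_lba,) = struct.unpack_from("<I", brvd, 0x47)

            f.seek(catalog_lba * SECTOR)         # boot catalog
            entry = f.read(SECTOR)[32:64]        # initial/default entry
            if entry[0] != 0x88:
                raise ValueError("initial entry is not bootable")
            sectors, lba = struct.unpack_from("<HI", entry, 6)

            # The sector count is in 512-byte "virtual" sectors; for the
            # NT installer it is 4, i.e. one 2048-byte CD sector.
            f.seek(lba * SECTOR)
            data = f.read(sectors * 512)
        with open(out_path, "wb") as out:
            out.write(data)

For the NT install CD, the extracted data corresponds to the one-sector boot image that in turn loads SETUPLDR.BIN.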
Boot loader phase Both SETUPLDR.BIN and NTLDR are composed of two binary files concatenated; they can also be found on the installation CD as compressed CAB files. The first file is STPBOOT.BIN, which is a flat binary file that just loads the second file. The second file is a regular EXE file in the Portable Executable format. In SETUPLDR.BIN the second file is SETUPLDR.EXE, and in NTLDR the second file is OSLOADER.EXE. Both SETUPLDR.EXE and OSLOADER.EXE have embedded file system drivers for basic access to FAT, NTFS and ISO 9660. Unlike regular *.SYS drivers, the boot loader uses BIOS interrupts to access the boot disk. It also contains a built-in INI parser and CAB decompressor. After the installer starts running, it prints the string "Setup is inspecting your computer's hardware configuration...". NTDETECT.COM is called and the system information is stored in memory. It then displays a blue screen in VGA text mode with the title "Windows Setup" (or "Windows *version name here* Setup"), with a white line on the bottom that serves as a status bar. The setup loader then looks for TXTSETUP.SIF and parses it. This file works as a key-value database, just like the registry or an *.INI or *.INF file. The keys may either contain a list of files associated with their install location or a script line; the database therefore stores both data and code. During parsing, blank lines are ignored and sections with the same name are merged. The file BIOSINFO.INF is also loaded to resolve hardware quirks. The key-value syntax in the SourceDisksFiles section is as follows: filename_on_source = disk_id,subdir,upgrade_code,new_install_code,spare,spare,new_filename The installer asks if any additional drivers need to be loaded ("Press F6 if you need to install a third party SCSI or RAID driver..."; "Press F2 to run Automated System Recovery (ASR)...") and loads text-mode drivers; if any are supplied, they can only be loaded from a floppy disk. There is a hidden feature that shows a screen prompting the user to select a computer type if F5 is pressed during the first message. Text-mode drivers differ from PnP drivers in that they are loaded regardless of whether the hardware is present. The loading phase of the installer displays some messages on the screen about the current file being loaded. The message is "Setup is loading files ([the file description])...". The files loaded in this phase are those located in sections ending in .Load. In those sections, the key gives a driver name and the value gives a file. The driver name is then looked up in the same section without the .Load suffix to find the driver's user-friendly name. The kernel also needs a registry hive mounted to load the registry from, so SETUPREG.HIV is also loaded. All the file names of the files loaded by the boot loader are hard-coded, except for the drivers. As for PnP devices, after being identified by a bus driver, the address is checked in the section HardwareIdsDatabase and a corresponding driver name is given, but those are not used in this step. Kernel phase After all boot files are loaded by the boot loader, the message "Setup is starting Windows" is displayed and the kernel starts. Just like a normal install, it starts the drivers and loads the only service, which is setupdd.sys. It runs in kernel mode and starts a GUI, still in text mode. From now on, all the drivers are NT-based and BIOS interrupts are not used anymore. The user is asked to choose a file system layout. The selected partition is formatted if necessary and the files from TXTSETUP.SIF are copied to the system. Then it creates the registry hives and automatically restarts the system so the NT system can start and bootstrap itself. The section HiveInfs points to the files used to fill the hives with the default values; on a fresh install this section is named HiveInfs.Fresh. The files are not reg files, but are also ini files that can be understood by the ini interpreter bundled with the installer. The disk formatter program is statically linked with setupdd.sys.
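The merge-sections/ignore-blank-lines behavior described above is easy to picture with a short sketch (Python; an illustrative toy, not the installer's actual parser, and parse_sif is a hypothetical name):

    def parse_sif(path):
        # TXTSETUP.SIF-style parsing: blank lines are skipped, sections
        # with the same name are merged, and values are comma lists.
        db, section = {}, None
        with open(path, encoding="ascii", errors="replace") as f:
            for raw in f:
                line = raw.split(";", 1)[0].strip()   # drop comments
                if not line:
                    continue
                if line.startswith("[") and line.endswith("]"):
                    section = db.setdefault(line[1:-1], {})
                elif "=" in line and section is not None:
                    key, _, value = line.partition("=")
                    section[key.strip()] = [v.strip() for v in value.split(",")]
        return db

    # For the SourceDisksFiles syntax shown above, db["SourceDisksFiles"]
    # ["halaacpi.dll"][0] would be the disk_id field of that file's entry.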
Remastering There are many freeware tools available on the internet that customize TXTSETUP.SIF for the creation of unattended installs or to integrate drivers and hotfixes. This process is sometimes referred to as slipstreaming. The following command shows how a remastered CD can be created with a minimal set of options on Linux. It assumes that the current directory is the CD mount point. The image will be created in the home directory. mkisofs -b Bootable_NoEmulation.img -no-emul-boot -N . > ~/ntsetup.iso The file winnt.sif can be used to make the install unattended, but it is not required to be present. A model file named UNATTEND.TXT is provided on the CD. Setup tries to detect winnt.sif in the I386 directory or in the root directory of a floppy disk. Boot loader phase The Windows NT startup process starts when the computer finds a Windows boot loader, a portion of the Windows operating system responsible for finding Microsoft Windows and starting it up. Prior to Windows Vista, the boot loader was NTLDR. Microsoft has also released operating systems for Intel Itanium processors, which use the IA-64 architecture. The boot loader of these editions of Windows is IA64ldr.efi (later referred to simply as IA64ldr). It is an Extensible Firmware Interface (EFI) program. Operating system selection The boot loader, once executed, searches for Windows operating systems. Windows Boot Manager does so by reading Boot Configuration Data (BCD), a complex firmware-independent database for boot-time configuration data. Its predecessor, NTLDR, does so by reading the simpler boot.ini. If the boot.ini file is missing, the boot loader will attempt to locate information from the standard installation directory: for Windows NT and 2000 machines, it will attempt to boot from C:\WINNT; for Windows XP and 2003 machines, it will boot from C:\WINDOWS. Both databases may contain a list of installed Microsoft operating systems that may be loaded from the local hard disk drive or a remote computer on the local network.
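What NTLDR derives from boot.ini can be approximated with the Python standard library (a sketch, not NTLDR's actual parser; read_boot_ini is a hypothetical helper and the fallback values are assumptions):

    from configparser import RawConfigParser

    def read_boot_ini(path=r"C:\boot.ini"):
        cp = RawConfigParser(delimiters=("=",), strict=False)
        cp.optionxform = str                  # keep ARC-path keys as written
        cp.read(path)
        timeout = cp.getint("boot loader", "timeout", fallback=30)
        default = cp.get("boot loader", "default", fallback=None)
        # Each entry maps an ARC path (or C:\ for a DOS-type system) to a
        # quoted menu title plus optional switches such as /fastdetect.
        entries = (dict(cp.items("operating systems"))
                   if cp.has_section("operating systems") else {})
        return timeout, default, entries

A typical entry looks like multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /fastdetect.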
NTLDR supports operating systems installed on volumes using the NTFS or FAT file systems, as well as CDFS (ISO 9660) and UDFS. Windows Boot Manager also supports operating systems installed inside a VHD file stored on an NTFS disk drive. In Windows 2000, or in later versions of Windows in which hibernation is supported, the Windows boot loader starts the search for operating systems by searching for hiberfil.sys. NTLDR looks in the root folder of the default volume specified in boot.ini; Windows Boot Manager looks up the location of hiberfil.sys in the BCD. If this file is found and an active memory set is found in it, the boot loader loads the contents of the file (which is a compressed version of a physical memory dump of the machine) into memory and restores the computer to the state that it was in prior to hibernation. Next, the boot loader looks for a list of installed operating system entries. If more than one operating system is installed, the boot loader shows a boot menu and allows the user to select an operating system. If a non NT-based operating system such as Windows 98 is selected (specified by an MS-DOS style of path, e.g. C:\), then the boot loader loads the associated "boot sector" file listed in boot.ini or BCD (by default, this is bootsect.dos if no file name is specified) and passes execution control to it. Otherwise, the boot process continues. Loading the Windows NT kernel The operating system starts when certain basic drivers flagged as "Boot" are loaded into memory. The appropriate file system driver for the partition type (NTFS, FAT, or FAT32) on which the Windows installation resides is among them. At this point in the boot process, the boot loader clears the screen and displays a textual progress bar (which is often not seen due to the initialization speed); Windows 2000 also displays the text "Starting Windows..." underneath. If the user presses F8 during this phase, the advanced boot menu is displayed, containing various special boot modes including Safe mode, Last Known Good Configuration, boot with debugging enabled, and (in the case of Server editions) Directory Services Restore Mode. Once a boot mode has been selected (or if F8 was never pressed), booting continues. The following files are loaded sequentially. ntoskrnl.exe (the kernel) hal.dll (type of hardware abstraction layer) kdcom.dll (Kernel Debugger HW Extension DLL) bootvid.dll (for the Windows logo and side-scrolling bar) config\system (one of the registry hives) Next, NTDETECT.COM, the Windows NT kernel (Ntoskrnl.exe) and the Hardware Abstraction Layer (hal.dll) are loaded into memory. If multiple hardware configurations are defined in the Windows Registry, the user is prompted at this point to choose one. With the kernel in memory, boot-time device drivers are loaded (but not yet initialized). The required information (along with information on all detected hardware and Windows Services) is stored in the HKEY_LOCAL_MACHINE\System portion of the registry, in a set of registry keys collectively called a Control Set. Multiple control sets (typically two) are kept, in the event that the settings contained in the currently-used one prohibit the system from booting. HKEY_LOCAL_MACHINE\System contains control sets labeled ControlSet001, ControlSet002, etc., as well as CurrentControlSet. During regular operation, Windows uses CurrentControlSet to read and write information. CurrentControlSet is a reference to one of the control sets stored in the registry. Windows picks the "real" control set being used based on the values set in the HKLM\SYSTEM\Select registry key: Default will be the boot loader's choice if nothing else overrides this. If the value of the Failed key matches Default, then the boot loader displays an error message, indicating that the last boot failed, and gives the user the option to try booting anyway, or to use the "Last Known Good Configuration". If the user chooses (or has chosen) Last Known Good Configuration, the control set indicated by the LastKnownGood key is used instead of Default. When a control set is chosen, the Current key gets set accordingly. The Failed key is also set to the same as Current until the end of the boot process. LastKnownGood is also set to Current if the boot process completes successfully.
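Restated as a toy model (Python; the dictionary stands in for the HKLM\SYSTEM\Select values and is not real registry access):

    def pick_control_set(select, use_last_known_good=False):
        # `select` mirrors the Select key's Default/Failed/LastKnownGood values.
        chosen = select["Default"]
        if select["Failed"] == chosen:
            print("Last boot failed; user may pick Last Known Good Configuration")
        if use_last_known_good:
            chosen = select["LastKnownGood"]
        # Current and Failed both track the chosen set until boot completes;
        # on success, LastKnownGood is then updated to match Current.
        select["Current"] = select["Failed"] = chosen
        return "ControlSet%03d" % chosen

    print(pick_control_set({"Default": 1, "Failed": 0, "LastKnownGood": 2}))
    # -> ControlSet001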
"Boot" drivers are almost exclusively drivers for hard-disk controllers and file systems (ATA, SCSI, file system filter manager, etc.); in other words, they are the absolute minimum that the kernel will need to get started with loading other drivers, and the rest of the operating system. A "System" driver which is loaded and started by the kernel after the boot drivers. "System" drivers cover a wider range of core functionality, including the display driver, CD-ROM support, and the TCP/IP stack. An "Automatic" driver which is loaded much later when the GUI already has been started. With this finished, control is then passed from the boot loader to the kernel. Kernel phase The initialization of the kernel subsystem and the Windows Executive subsystems is done in two phases. During the first phase, basic internal memory structures are created, and each CPU's interrupt controller is initialized. The memory manager is initialized, creating areas for the file system cache, paged and non-paged pools of memory. The Object Manager, initial security token for assignment to the first process on the system, and the Process Manager itself. The System idle process as well as the System process are created at this point. The second phase involves initializing the device drivers which were identified by NTLDR as being system drivers. Through the process of loading device drivers, a "progress bar" is visible at the bottom of the display on Windows 2000 systems; in Windows XP and Windows Server 2003, this was replaced by an animated bar which does not represent actual progress. Prior to Windows XP, this part of the boot process took significantly longer; this is because the drivers would be initialized one at a time. On Windows XP and Server 2003, the drivers are all initialized asynchronously. Session Manager Once all the Boot and System drivers have been loaded, the kernel (system thread) starts the Session Manager Subsystem (smss.exe). Before any files are opened, Autochk is started by smss.exe. Autochk mounts all drives and checks them one at a time to see whether or not they were cleanly unmounted. If autochk determines one or more volumes are dirty, it will automatically run chkdsk and provides the user with a short window to abort the repair process by pressing a key within 10 seconds (introduced in Windows NT 4.0 Service Pack 4; earlier versions would not allow the user to abort chkdsk). Since Windows 2000, XP and 2003 show no text screen at that point (unlike NT 3.1 to 4.0, which displayed a blue text screen), the user will see a different background picture holding a mini-text-screen in the center of the screen and show the progress of chkdsk there. At boot time, the Session Manager Subsystem: Creates environment variables (HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Environment) Starts the kernel-mode side of the Win32 subsystem (win32k.sys). This allows Windows to switch into graphical mode as there is now enough infrastructure in place. Starts the user-mode side of the Win32 subsystem, the Client/Server Runtime Server Subsystem (csrss.exe). This makes Win32 available to user-mode applications. Creates virtual memory paging files (HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management) Performs any rename operations (HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations) that are queued up. This allows previously in-use files (e.g. drivers) to be replaced as part of a reboot. 
Authentication Winlogon starts the Local Security Authority Subsystem Service (LSASS) and the Service Control Manager (SCM), which in turn will start all the Windows services that are set to Auto-Start. It is also responsible for responding to the secure attention sequence (SAS), loading the user profile on logon, and optionally locking the computer when a screensaver is running. The login process is as follows: The Session Manager Subsystem starts Winlogon. Winlogon starts the Service Control Manager (services.exe), which starts the auto-start services and updates the Control Sets; the LastKnownGood control set is updated to reflect the current control set. (Windows XP) Winlogon starts UIHost (logonui.exe), a full-screen graphical UI. Winlogon loads GinaDll (msgina.dll). (Optional) A login prompt is displayed by GINA, and the user presses the Secure Attention Sequence (SAS) (Control-Alt-Delete). Winlogon checks if the system is configured to log into a specific account automatically (AutoAdminLogon). The login dialog is displayed by GINA. The user enters credentials (username, password, and domain). GINA passes the credentials back to Winlogon. Winlogon passes the credentials to LSASS. LSASS tries to use cached data in the LSA database (SYSTEM hive). If there is none, LSASS determines which account protocol is to be used by using the Security Packages listed in the key HKLM/SYSTEM/CurrentControlSet/Control/Lsa: msv1_0.dll implements the NT LAN Manager protocols. This package is used in stand-alone systems and domain-member systems for backward compatibility. Kerberos.dll provides remote login by using Active Directory. LSASS enforces the local security policy (checking user permissions, creating audit trails, doling out security tokens, etc.). Control is passed back to Winlogon to prepare for passing control to the user: it creates the window station (WinSta0), creates the desktops (Winlogon, Default and ScreenSaver), and then starts the program specified in the Userinit value, which defaults to userinit.exe. This value supports multiple executables. If the user is trying to log into the local host, then the HKLM/SAM key will be used as the database. If the user is trying to log into another host, then the NetLogon service is used to carry the data: msv1_0.dll<->netlogon<->remote netlogon<->remote msv1_0.dll<->remote SAM On Windows XP, GINA is only shown if the user presses the secure attention sequence. Winlogon has support for plugins that get loaded and notified about specific events, and LSASS also supports plugins (security packages). Some rootkits bundle Winlogon plugins because they are loaded before any user logs in. Some keys allow multiple comma-separated values to be supplied, which allows a malicious program to be executed at the same time as a legitimate system file. The hashing algorithms used to store credentials in the SAM database are weak and can be brute-forced quickly on consumer hardware.
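The weakness is easy to quantify for the older LAN Manager (LM) scheme, in which a password is upper-cased and split into two independently hashed 7-character halves (back-of-the-envelope figures; the charset size and hash rate below are assumptions for illustration):

    # Rough cost of exhausting one LM half on a single consumer GPU.
    CHARSET = 69                        # approx. upper-case letters, digits, punctuation
    candidates = CHARSET ** 7           # ~7.4e12 candidates per 7-character half
    rate = 10**9                        # assumed hashes per second
    hours = candidates / rate / 3600
    print(f"worst case per half: {hours:.1f} hours")   # about 2 hours

Because the two halves are attacked independently, a 14-character LM password offers little more resistance than a 7-character one.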
Winlogon's responsibilities and the login process have changed significantly from the above in Windows Vista. Shell Userinit is the first program that runs with the user's credentials. It is responsible for starting all the other programs that compose the user's shell environment. The shell program (typically Explorer.exe) is started from the registry entry Shell=, pointed to by the registry entry in the key HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\IniFileMapping\system.ini\Boot; its default value is SYS:Microsoft\Windows NT\CurrentVersion\Winlogon, which evaluates to HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon. Userinit loads the user profile. There are a few types of user profiles, and a profile can be local or remote; this step can be very slow if the user profile is of the "roaming" type. User and Computer Group Policy settings are applied. Userinit then: Runs user scripts. Runs machine scripts. Runs proquota.exe. Runs the startup programs before the shell gets started. Starts the shell configured in the registry, which defaults to explorer.exe. Userinit exits and the shell program continues running without a parent process. Userinit runs startup programs from the following locations: HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer\Run HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows\Load HKCU\Software\Microsoft\Windows NT\CurrentVersion\Windows\Run HKCU\Software\Microsoft\Windows\CurrentVersion\Run HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce %ALLUSERSPROFILE%\Start Menu\Programs\Startup\ (this path is localized on non-English versions of Windows before Vista) %USERPROFILE%\Start Menu\Programs\Startup\ (this path is localized on non-English versions of Windows before Vista) Remote booting and installation To successfully boot, the client must support PXE booting, and the Windows Deployment Services (WDS) component, which is not installed by default, must be installed on the server. WDS is the successor of Remote Installation Services (RIS). The PXE program is found in the BIOS or on a ROM chip on the network card. PXE booting is not a technology specific to Windows and can also be used to start a Linux system; in fact, a Linux system can act as the server, providing the DHCP or TFTP services. PXE can be used to start Windows Setup to install the system on the client computer or to run the operating system from RAM. The latter, called Remote Boot, was introduced by Windows XP Embedded SP1 and is only available for this flavor of Windows. The general process for both methods is as follows: PXE boot DHCP request broadcast (Optional) DHCP router redirects to the server The server sends the Network Bootstrap Program (NBP) (PXEboot.com) through TFTP The NBP program downloads the required files through the BINL protocol The Boot Information Negotiation Layer (BINL) is a Windows 2000 service running on the server that communicates with the client after the NBP has been loaded by PXE. See also Architecture of Windows NT Windows Startup Process Linux startup process Booting Master boot record Power-on self-test BootVis References Further reading External links Startup Applications List How to edit SETUPAPI.DLL Windows NT architecture Booting
Operating System (OS)
674
Microsoft Windows version history Microsoft Windows was announced by Bill Gates on November 10, 1983. Microsoft introduced Windows as a graphical user interface for MS-DOS, which had been introduced two years earlier. The product line evolved in the 1990s from an operating environment into a complete, modern operating system over two lines of development, each with its own separate codebase. The first versions of Windows (1.0 through 3.11) were graphical shells that ran from MS-DOS. Windows 95, though still based on MS-DOS, was its own operating system, using a 16-bit DOS-based kernel and a 32-bit user space. Windows 95 introduced many features that have been part of the product ever since, including the Start menu, the taskbar, and Windows Explorer (renamed File Explorer in Windows 8). In 1997, Microsoft released Internet Explorer 4, which included the (at the time controversial) Windows Desktop Update. It aimed to integrate Internet Explorer and the web into the user interface and also brought many new features into Windows, such as the ability to display JPEG images as the desktop wallpaper and single-window navigation in Windows Explorer. In 1998, Microsoft released Windows 98, which also included the Windows Desktop Update and Internet Explorer 4 by default. The inclusion of Internet Explorer 4 and the Desktop Update led to an anti-trust case in the United States. Windows 98 included USB support out of the box, and also plug and play, which allows devices to work when plugged in without requiring a system reboot or manual configuration. Windows Me, the last DOS-based version of Windows, was aimed at consumers and released in 2000. It introduced System Restore, the Help and Support Center, updated versions of the Disk Defragmenter and other system tools. In 1993, Microsoft released Windows NT 3.1, the first version of the newly developed Windows NT operating system. Unlike the Windows 9x series of operating systems, it is a fully 32-bit operating system. NT 3.1 introduced NTFS, a file system designed to replace the older File Allocation Table (FAT) which was used by DOS and the DOS-based Windows operating systems. In 1996, Windows NT 4.0 was released, which included a fully 32-bit version of Windows Explorer written specifically for it, making the operating system work like Windows 95. Windows NT was originally designed to be used on high-end systems and servers, but with the release of Windows 2000, many consumer-oriented features from Windows 95 and Windows 98 were included, such as the Windows Desktop Update, Internet Explorer 5, USB support and Windows Media Player. These consumer-oriented features were further extended in Windows XP, which introduced a new visual style called Luna, a more user-friendly interface, updated versions of Windows Media Player and Internet Explorer, and extended features from Windows Me, such as the Help and Support Center and System Restore. Windows Vista focused on securing the Windows operating system against computer viruses and other malicious software by introducing features such as User Account Control. New features include Windows Aero, updated versions of the standard games (e.g. Solitaire), Windows Movie Maker, and Windows Mail to replace Outlook Express. Despite this, Windows Vista was critically panned for its poor performance on older hardware and its at-the-time high system requirements.
Windows 7 followed two and a half years later, and despite technically having higher system requirements, reviewers noted that it ran better than Windows Vista. Windows 7 removed many applications, such as Windows Movie Maker, Windows Photo Gallery and Windows Mail, instead requiring users to download the separate Windows Live Essentials to gain some of those features and other online services. Windows 8 introduced many controversial changes, such as the replacement of the Start menu with the Start Screen, the removal of the Aero interface in favor of a flat, colored interface, the introduction of "Metro" apps (later renamed to Universal Windows Platform apps), and the Charms Bar user interface element, all of which received considerable criticism from reviewers. Windows 8.1, a free upgrade to Windows 8, was released in 2013. The following version of Windows, Windows 10, reintroduced the Start menu and added the ability to run Universal Windows Platform apps in a window instead of always in full screen. Windows 10 was generally well-received, with many reviewers stating that Windows 10 is what Windows 8 should have been. The latest version of Windows, Windows 11, was released on October 5, 2021. Windows 11 incorporates a redesigned user interface, including a new Start menu, a visual style featuring rounded corners, and a new layout for the Microsoft Store. Windows 1.0 The first independent version of Microsoft Windows, version 1.0, released on November 20, 1985, achieved little popularity. The project was briefly codenamed "Interface Manager" before the windowing system was implemented; contrary to popular belief, this was merely an early codename rather than the original name for Windows, and Rowland Hanson, the head of marketing at Microsoft, convinced the company that the name Windows would be more appealing to customers. Windows 1.0 was not a complete operating system, but rather an "operating environment" that extended MS-DOS, and shared the latter's inherent flaws. The first version of Microsoft Windows included a simple graphics painting program called Windows Paint; Windows Write, a simple word processor; an appointment calendar; a card-filer; a notepad; a clock; a control panel; a computer terminal; a Clipboard viewer; and a RAM driver. It also included the MS-DOS Executive and a game called Reversi. Microsoft had worked with Apple Computer to develop applications for Apple's new Macintosh computer, which featured a graphical user interface. As part of the related business negotiations, Microsoft had licensed certain aspects of the Macintosh user interface from Apple; in later litigation, a district court summarized these aspects as "screen displays". In the development of Windows 1.0, Microsoft intentionally limited its borrowing of certain GUI elements from the Macintosh user interface, to comply with its license. For example, windows were only displayed "tiled" on the screen; that is, they could not overlap or overlie one another. On December 31, 2001, Microsoft declared Windows 1.0 obsolete and stopped providing support and updates for the system. Windows 2.x Microsoft Windows version 2.0 (2.01 and 2.03 internally) came out on December 9, 1987, and proved slightly more popular than its predecessor. Much of the popularity for Windows 2.0 came by way of its inclusion as a "run-time version" with Microsoft's new graphical applications, Excel and Word for Windows. They could be run from MS-DOS, executing Windows for the duration of their activity, and closing down Windows upon exit.
Microsoft Windows received a major boost around this time when Aldus PageMaker appeared in a Windows version, having previously run only on Macintosh. Some computer historians date this, the first appearance of a significant and non-Microsoft application for Windows, as the start of the success of Windows. Like prior versions of Windows, version 2.0 could use the real-mode memory model, which confined it to a maximum of 1 megabyte of memory. In such a configuration, it could run under another multitasker like DESQview, which used the 286 protected mode. It was also the first version to support the High Memory Area when running on an Intel 80286 compatible processor. This edition was renamed Windows/286 with the release of Windows 2.1. A separate Windows/386 edition had a protected mode kernel, which required an 80386 compatible processor, with LIM-standard EMS emulation and VxD drivers in the kernel. All Windows and DOS-based applications at the time were real-mode applications, and Windows/386 could run them over the protected mode kernel by using the virtual 8086 mode, which was new with the 80386 processor. Version 2.1 came out on May 27, 1988, followed by version 2.11 on March 13, 1989; they included a few minor changes. Version 2.03, and later 3.0, faced challenges from Apple over their overlapping windows and other features that Apple charged mimicked the ostensibly copyrighted "look and feel" of its operating system and "embodie[d] and generated a copy of the Macintosh" in its OS. Judge William Schwarzer dropped all but 10 of Apple's 189 claims of copyright infringement, and ruled that most of the remaining 10 were over uncopyrightable ideas. On December 31, 2001, Microsoft declared Windows 2.x obsolete and stopped providing support and updates for the system. Windows 3.0 Windows 3.0, released in May 1990, improved capabilities given to native applications. It also allowed users to better multitask older MS-DOS based software compared to Windows/386, thanks to the introduction of virtual memory. Windows 3.0's user interface finally resembled a serious competitor to the user interface of the Macintosh computer. PCs had improved graphics by this time, due to VGA video cards, and the protected/enhanced mode allowed Windows applications to use more memory in a more painless manner than their DOS counterparts could. Windows 3.0 could run in real, standard, or 386 enhanced modes, and was compatible with any Intel processor from the 8086/8088 up to the 80286 and 80386. This was the first version to run Windows programs in protected mode, although the 386 enhanced mode kernel was an enhanced version of the protected mode kernel in Windows/386. Windows 3.0 received two updates. A few months after introduction, Windows 3.0a was released as a maintenance release, resolving bugs and improving stability. A "multimedia" version, Windows 3.0 with Multimedia Extensions 1.0, was released in October 1991. This was bundled with "multimedia upgrade kits", comprising a CD-ROM drive and a sound card, such as the Creative Labs Sound Blaster Pro. This version was the precursor to the multimedia features available in Windows 3.1 (first released in April 1992) and later, and was part of Microsoft's specification for the Multimedia PC. The features listed above and growing market support from application software developers made Windows 3.0 wildly successful, selling around 10 million copies in the two years before the release of version 3.1.
Windows 3.0 became a major source of income for Microsoft, and led the company to revise some of its earlier plans. Support was discontinued on December 31, 2001. OS/2 During the mid to late 1980s, Microsoft and IBM had cooperatively been developing OS/2 as a successor to DOS. OS/2 would take full advantage of the aforementioned protected mode of the Intel 80286 processor and up to 16 MB of memory. OS/2 1.0, released in 1987, supported swapping and multitasking and allowed running of DOS executables. IBM licensed Windows's GUI for OS/2 as Presentation Manager, and the two companies stated that it and Windows 2.0 would be almost identical. Presentation Manager was not available with OS/2 until version 1.1, released in 1988. Its API was incompatible with Windows. Version 1.2, released in 1989, introduced a new file system, HPFS, to replace the FAT file system. By the early 1990s, conflicts developed in the Microsoft/IBM relationship. They cooperated with each other in developing their PC operating systems, and had access to each other's code. Microsoft wanted to further develop Windows, while IBM desired for future work to be based on OS/2. In an attempt to resolve this tension, IBM and Microsoft agreed that IBM would develop OS/2 2.0, to replace OS/2 1.3 and Windows 3.0, while Microsoft would develop a new operating system, OS/2 3.0, to later succeed OS/2 2.0. This agreement soon fell apart however, and the Microsoft/IBM relationship was terminated. IBM continued to develop OS/2, while Microsoft changed the name of its (as yet unreleased) OS/2 3.0 to Windows NT. Both retained the rights to use OS/2 and Windows technology developed up to the termination of the agreement; Windows NT, however, was to be written anew, mostly independently (see below). After an interim 1.3 version to fix up many remaining problems with the 1.x series, IBM released OS/2 version 2.0 in 1992. This was a major improvement: it featured a new, object-oriented GUI, the Workplace Shell (WPS), that included a desktop and was considered by many to be OS/2's best feature. Microsoft would later imitate much of it in Windows 95. Version 2.0 also provided a full 32-bit API, offered smooth multitasking and could take advantage of the 4 gigabytes of address space provided by the Intel 80386. Still, much of the system had 16-bit code internally which required, among other things, device drivers to be 16-bit code as well. This was one of the reasons for the chronic shortage of OS/2 drivers for the latest devices. Version 2.0 could also run DOS and Windows 3.0 programs, since IBM had retained the right to use the DOS and Windows code as a result of the breakup. Windows 3.1x In response to the impending release of OS/2 2.0, Microsoft developed Windows 3.1 (first released in April 1992), which included several improvements to Windows 3.0, such as display of TrueType scalable fonts (developed jointly with Apple), improved disk performance in 386 Enhanced Mode, multimedia support, and bugfixes. It also removed Real Mode, and only ran on an 80286 or better processor. Later Microsoft also released Windows 3.11, a touch-up to Windows 3.1 which included all of the patches and updates that followed the release of Windows 3.1 in 1992. In 1992 and 1993, Microsoft released Windows for Workgroups (WfW), which was available both as an add-on for existing Windows 3.1 installations and in a version that included the base Windows environment and the networking extensions all in one package. 
Windows for Workgroups included improved network drivers and protocol stacks, and support for peer-to-peer networking. There were two versions of Windows for Workgroups, WfW 3.1 and WfW 3.11. Unlike prior versions, Windows for Workgroups 3.11 ran in 386 Enhanced Mode only, and needed at least an 80386SX processor. One optional download for WfW was the "Wolverine" TCP/IP protocol stack, which allowed for easy access to the Internet through corporate networks. All these versions continued version 3.0's impressive sales pace. Even though the 3.1x series still lacked most of the important features of OS/2, such as long file names, a desktop, or protection of the system against misbehaving applications, Microsoft quickly took over the OS and GUI markets for the IBM PC. The Windows API became the de facto standard for consumer software. On December 31, 2001, Microsoft declared Windows 3.1 obsolete and stopped providing support and updates for the system. However, OEM licensing for Windows for Workgroups 3.11 on embedded systems continued to be available until November 1, 2008. Windows NT 3.x Meanwhile, Microsoft continued to develop Windows NT. The main architect of the system was Dave Cutler, one of the chief architects of VAX/VMS at Digital Equipment Corporation. Microsoft hired him in October 1988 to create a successor to OS/2, but Cutler created a completely new system instead. Cutler had been developing a follow-on to VMS at DEC called MICA, and when DEC dropped the project he brought the expertise and around 20 engineers with him to Microsoft. DEC also believed he brought MICA's code to Microsoft and sued. Microsoft eventually paid US$150 million and agreed to support DEC's Alpha CPU chip in NT. Windows NT Workstation (Microsoft marketing wanted Windows NT to appear to be a continuation of Windows 3.1) arrived in beta form to developers at the July 1992 Professional Developers Conference in San Francisco. Microsoft announced at the conference its intentions to develop a successor to both Windows NT and Windows 3.1's replacement (Windows 95, codenamed Chicago), which would unify the two into one operating system. This successor was codenamed Cairo. In hindsight, Cairo was a much more difficult project than Microsoft had anticipated and, as a result, NT and Chicago would not be unified until Windows XP; although Windows 2000, oriented to business, had already unified most of the system's bolts and gears, it was XP that was sold to home consumers like Windows 95 and came to be viewed as the final unified OS. Parts of Cairo have still not made it into Windows as of 2020: most notably, the WinFS file system, which was the much-touted Object File System of Cairo. Microsoft announced that it discontinued the separate release of WinFS for Windows XP and Windows Vista and would gradually incorporate the technologies developed for WinFS in other products and technologies, notably Microsoft SQL Server. Driver support was lacking due to the increased programming difficulty in dealing with NT's superior hardware abstraction model. This problem plagued the NT line all the way through Windows 2000. Programmers complained that it was too hard to write drivers for NT, and hardware developers were not going to go through the trouble of developing drivers for a small segment of the market. Additionally, although allowing for good performance and fuller exploitation of system resources, it was also resource-intensive on limited hardware, and thus was only suitable for larger, more expensive machines.
However, these same features made Windows NT perfect for the LAN server market (which in 1993 was experiencing a rapid boom, as office networking was becoming common). NT also had advanced network connectivity options and NTFS, an efficient file system. Windows NT version 3.51 was Microsoft's entry into this field, and took away market share from Novell (the dominant player) in the following years. One of Microsoft's biggest advances initially developed for Windows NT was a new 32-bit API, to replace the legacy 16-bit Windows API. This API was called Win32, and from then on Microsoft referred to the older 16-bit API as Win16. The Win32 API had three levels of implementation: the complete one for Windows NT, a subset for Chicago (originally called Win32c) missing features primarily of interest to enterprise customers (at the time), such as security and Unicode support, and a more limited subset called Win32s which could be used on Windows 3.1 systems. Thus Microsoft sought to ensure some degree of compatibility between the Chicago design and Windows NT, even though the two systems had radically different internal architectures. Windows NT was the first Windows operating system based on a hybrid kernel. The hybrid kernel was designed as a modified microkernel, influenced by the Mach microkernel developed by Richard Rashid at Carnegie Mellon University, but without meeting all of the criteria of a pure microkernel. As released, Windows NT 3.x went through three versions (3.1, 3.5, and 3.51); changes were primarily internal and reflected back-end changes. The 3.5 release added support for new types of hardware and improved performance and data reliability; the 3.51 release was primarily to update the Win32 APIs to be compatible with software being written for the Win32c APIs in what became Windows 95. Support for Windows NT 3.51 ended in 2001 and 2002 for the Workstation and Server editions, respectively. Windows 95 After Windows 3.11, Microsoft began to develop a new consumer-oriented version of the operating system codenamed Chicago. Chicago was designed to have support for 32-bit preemptive multitasking like OS/2 and Windows NT, although a 16-bit kernel would remain for the sake of backward compatibility. The Win32 API first introduced with Windows NT was adopted as the standard 32-bit programming interface, with Win16 compatibility being preserved through a technique known as "thunking". A new object-oriented GUI was not originally planned as part of the release, although elements of the Cairo user interface were borrowed and added as other aspects of the release (notably Plug and Play) slipped. Microsoft did not change all of the Windows code to 32-bit; parts of it remained 16-bit (albeit not directly using real mode) for reasons of compatibility, performance, and development time. Additionally it was necessary to carry over design decisions from earlier versions of Windows for reasons of backwards compatibility, even if these design decisions no longer matched a more modern computing environment. These factors eventually began to impact the operating system's efficiency and stability. Microsoft marketing adopted Windows 95 as the product name for Chicago when it was released on August 24, 1995.
Microsoft had a double gain from its release: first, it made it impossible for consumers to run Windows 95 on a cheaper, non-Microsoft DOS; second, although traces of DOS were never completely removed from the system and MS-DOS 7 would be loaded briefly as a part of the booting process, Windows 95 applications ran solely in 386 enhanced mode, with a flat 32-bit address space and virtual memory. These features made it possible for Win32 applications to address up to 2 gigabytes of virtual RAM (with another 2 GB reserved for the operating system), and in theory prevented them from inadvertently corrupting the memory space of other Win32 applications. In this respect the functionality of Windows 95 moved closer to Windows NT, although Windows 95/98/Me did not support more than 512 megabytes of physical RAM without obscure system tweaks. Three years after its introduction, Windows 95 was succeeded by Windows 98. IBM continued to market OS/2, producing later versions in OS/2 3.0 and 4.0 (also called Warp). Responding to complaints about OS/2 2.0's high demands on computer hardware, version 3.0 was significantly optimized both for speed and size. Before Windows 95 was released, OS/2 Warp 3.0 was even shipped pre-installed with several large German hardware vendor chains. However, with the release of Windows 95, OS/2 began to lose market share. It is probably impossible to choose one specific reason why OS/2 failed to gain much market share. While OS/2 continued to run Windows 3.1 applications, it lacked support for anything but the Win32s subset of the Win32 API (see above). Unlike with Windows 3.1, IBM did not have access to the source code for Windows 95 and was unwilling to commit the time and resources to emulate the moving target of the Win32 API. IBM later introduced OS/2 into the United States v. Microsoft case, blaming unfair marketing tactics on Microsoft's part. Microsoft went on to release five different versions of Windows 95: Windows 95 – original release Windows 95 A – included Windows 95 OSR1 slipstreamed into the installation Windows 95 B (OSR2) – included several major enhancements, Internet Explorer (IE) 3.0 and full FAT32 file system support Windows 95 B USB (OSR2.1) – included basic USB support Windows 95 C (OSR2.5) – included all the above features, plus IE 4.0; this was the last 95 version produced OSR2, OSR2.1, and OSR2.5 were not released to the general public; rather, they were available only to OEMs that would preload the OS onto computers. Some companies sold new hard drives with OSR2 preinstalled (officially justifying this as needed due to the hard drive's capacity). The first Microsoft Plus! add-on pack was sold for Windows 95. Microsoft ended extended support for Windows 95 on December 31, 2001. Windows NT 4.0 Microsoft released the successor to NT 3.51, Windows NT 4.0, on August 24, 1996, one year after the release of Windows 95. It was Microsoft's primary business-oriented operating system until the introduction of Windows 2000. Major new features included the new Explorer shell from Windows 95, scalability and feature improvements to the core architecture, kernel, USER32, COM and MSRPC.
Windows NT 4.0 came in five versions: Windows NT 4.0 Workstation Windows NT 4.0 Server Windows NT 4.0 Server, Enterprise Edition (includes support for 8-way SMP and clustering) Windows NT 4.0 Terminal Server Windows NT 4.0 Embedded Microsoft ended mainstream support for Windows NT 4.0 Workstation on June 30, 2002, and ended extended support on June 30, 2004, while Windows NT 4.0 Server mainstream support ended on December 31, 2002, and extended support ended on December 31, 2004. Both editions were succeeded by Windows 2000 Professional and the Windows 2000 Server Family, respectively. Microsoft ended mainstream support for Windows NT 4.0 Embedded on June 30, 2003, and ended extended support on July 11, 2006. This edition was succeeded by Windows XP Embedded. Windows 98 On June 25, 1998, Microsoft released Windows 98 (code-named Memphis), three years after the release of Windows 95, two years after the release of Windows NT 4.0, and 21 months before the release of Windows 2000. It included new hardware drivers and the FAT32 file system, which supports disk partitions that are larger than 2 GB (first introduced in Windows 95 OSR2). USB support in Windows 98 was marketed as a vast improvement over Windows 95. The release continued the controversial inclusion of the Internet Explorer browser with the operating system that started with Windows 95 OEM Service Release 1. The action eventually led to the filing of the United States v. Microsoft case, dealing with the question of whether Microsoft was introducing unfair practices into the market in an effort to eliminate competition from other companies such as Netscape. In 1999, Microsoft released Windows 98 Second Edition, an interim release. One of the more notable new features was the addition of Internet Connection Sharing, a form of network address translation, allowing several machines on a LAN (Local Area Network) to share a single Internet connection. Hardware support through device drivers was increased and this version shipped with Internet Explorer 5. Many minor problems that existed in the first edition were fixed, making it, according to many, the most stable release of the Windows 9x family. Mainstream support for Windows 98 and 98 SE ended on June 30, 2002, and extended support ended on July 11, 2006. Windows 2000 Microsoft released Windows 2000 on February 17, 2000 as the successor to Windows NT 4.0, 17 months after the release of Windows 98. It has the version number Windows NT 5.0, and was Microsoft's business-oriented operating system from its official release until 2001, when it was succeeded by Windows XP. Windows 2000 has had four official service packs. It was successfully deployed both on the server and the workstation markets. Amongst Windows 2000's most significant new features was Active Directory, a near-complete replacement of the NT 4.0 Windows Server domain model, which built on industry-standard technologies like DNS, LDAP, and Kerberos to connect machines to one another. Terminal Services, previously only available as a separate edition of NT 4, was expanded to all server versions. A number of features from Windows 98 were also incorporated, such as an improved Device Manager, Windows Media Player, and a revised DirectX that made it possible for the first time for many modern games to work on the NT kernel. Windows 2000 is also the last NT-kernel Windows operating system to lack product activation.
While Windows 2000 upgrades were available for Windows 95 and Windows 98, it was not intended for home users. Windows 2000 was available in four editions: Windows 2000 Professional Windows 2000 Server Windows 2000 Advanced Server Windows 2000 Datacenter Server Microsoft ended support for both Windows 2000 and Windows XP Service Pack 2 on July 13, 2010. Windows Me On September 14, 2000, Microsoft released a successor to Windows 98 called Windows Me, short for "Millennium Edition". It was the last DOS-based operating system from Microsoft. Windows Me introduced a new multimedia-editing application called Windows Movie Maker, came standard with Internet Explorer 5.5 and Windows Media Player 7, and debuted the first version of System Restore – a recovery utility that enables the operating system to revert system files back to a prior date and time. System Restore was a notable feature that would continue to thrive in all later versions of Windows. Windows Me was conceived as a quick one-year project that served as a stopgap release between Windows 98 and Windows XP. Many of the new features were available from the Windows Update site as updates for older Windows versions (System Restore and Windows Movie Maker were exceptions). Windows Me was criticized for stability issues, as well as for lacking real mode DOS support, to the point of being referred to as the "Mistake Edition." Windows Me was the last operating system to be based on the Windows 9x (monolithic) kernel and MS-DOS, with its successor Windows XP being based on Microsoft's Windows NT kernel instead. Windows XP, Server 2003 series and Fundamentals for Legacy PCs On October 25, 2001, Microsoft released Windows XP (codenamed "Whistler"). The merging of the Windows NT/2000 and Windows 95/98/Me lines was finally achieved with Windows XP. Windows XP uses the Windows NT 5.1 kernel, marking the entrance of the Windows NT core to the consumer market, to replace the aging Windows 9x branch. The initial release was met with considerable criticism, particularly in the area of security, leading to the release of three major Service Packs. Windows XP SP1 was released in September 2002, SP2 was released in August 2004 and SP3 was released in April 2008. Service Pack 2 provided significant improvements and encouraged widespread adoption of XP among both home and business users. Windows XP lasted longer as Microsoft's flagship operating system than any other version of Windows, from its public release on October 25, 2001, until January 30, 2007, when it was succeeded by Windows Vista. Windows XP is available in a number of versions: Windows XP Home Edition, for home users Windows XP Professional, for business and power users; contained a number of features not available in Home Edition. Windows XP N, like the above editions, but without a default installation of Windows Media Player, as mandated by a European Union ruling Windows XP Media Center Edition (MCE), released in October 2002 for desktops and notebooks with an emphasis on home entertainment. Contained all features offered in Windows XP Professional, plus Windows Media Center. Subsequent versions are the same but have an updated Windows Media Center. Windows XP Media Center Edition 2004, released on September 30, 2003 Windows XP Media Center Edition 2005, released on October 12, 2004. Included the Royale theme, support for Media Center Extenders, and themes and screensavers from Microsoft Plus! for Windows XP.
The ability to join an Active Directory domain is disabled. Windows XP Tablet PC Edition, for tablet PCs Windows XP Tablet PC Edition 2005 Windows XP Embedded, for embedded systems Windows XP Starter Edition, for new computer users in developing countries Windows XP Professional x64 Edition, released on April 25, 2005 for home and workstation systems utilizing 64-bit processors based on the x86-64 instruction set originally developed by AMD as AMD64; Intel calls their version Intel 64. Internally, XP x64 was a somewhat updated version of Windows based on the Server 2003 codebase. Windows XP 64-bit Edition is a version for Intel's Itanium line of processors; it maintains 32-bit compatibility solely through a software emulator. It is roughly analogous to Windows XP Professional in features. It was discontinued in September 2005 when the last vendor of Itanium workstations stopped shipping Itanium systems marketed as "Workstations". Windows Server 2003 On April 25, 2003, Microsoft launched Windows Server 2003, a notable update to Windows 2000 Server encompassing many new security features, a new "Manage Your Server" wizard that simplifies configuring a machine for specific roles, and improved performance. It is based on the Windows NT 5.2 kernel. A few services not essential for server environments are disabled by default for stability reasons, most noticeably the "Windows Audio" and "Themes" services; users have to enable them manually to get sound or the "Luna" look as per Windows XP. The hardware acceleration for display is also turned off by default; users have to turn the acceleration level up themselves if they trust the display card driver. In December 2005, Microsoft released Windows Server 2003 R2, which is actually Windows Server 2003 with SP1 (Service Pack 1), together with an add-on package. Among the new features are a number of management features for branch offices, file serving, printing and company-wide identity integration. Windows Server 2003 is available in six editions: Web Edition (32-bit) Standard Edition (32 and 64-bit) Enterprise Edition (32 and 64-bit) Datacenter Edition (32 and 64-bit) Small Business Server (32-bit) Storage Server (OEM channel only) Windows Server 2003 R2, an update of Windows Server 2003, was released to manufacturing on December 6, 2005. It is distributed on two CDs, with one CD being the Windows Server 2003 SP1 CD. The other CD adds many optionally installable features for Windows Server 2003. The R2 update was released for all x86 and x64 versions, but Windows Server 2003 R2 Enterprise Edition was not released for Itanium. Windows XP x64 and Server 2003 x64 Editions On April 25, 2005, Microsoft released Windows XP Professional x64 Edition and Windows Server 2003, x64 Editions in Standard, Enterprise and Datacenter SKUs. Windows XP Professional x64 Edition is an edition of Windows XP for x86-64 personal computers. It is designed to use the expanded 64-bit memory address space provided by the x86-64 architecture. Windows XP Professional x64 Edition is based on the Windows Server 2003 codebase, with the server features removed and client features added. Both Windows Server 2003 x64 and Windows XP Professional x64 Edition use identical kernels. Windows XP Professional x64 Edition is not to be confused with Windows XP 64-bit Edition, as the latter was designed for Intel Itanium processors. During the initial development phases, Windows XP Professional x64 Edition was named Windows XP 64-Bit Edition for 64-Bit Extended Systems.
Windows Fundamentals for Legacy PCs

In July 2006, Microsoft released a thin-client version of Windows XP Service Pack 2, called Windows Fundamentals for Legacy PCs (WinFLP). It is only available to Software Assurance customers. The aim of WinFLP is to give companies a viable upgrade option for older PCs running Windows 95, 98, and Me, one that will be supported with patches and updates for the next several years. Most user applications will typically be run on a remote machine using Terminal Services or Citrix. While visually the same as Windows XP, it has some differences. For example, if the screen is set to 16-bit color, the Windows 2000 recycle bin icon and some 16-bit XP icons are shown. Paint and some games such as Solitaire are also absent.

Windows Home Server

Windows Home Server (code-named Q, Quattro) is a server product based on Windows Server 2003, designed for consumer use. The system was announced on January 7, 2007 by Bill Gates. Windows Home Server can be configured and monitored using a console program that can be installed on a client PC. Media sharing, local and remote drive backup, and file duplication are all listed among its features. The release of Windows Home Server Power Pack 3 added support for Windows 7 to Windows Home Server.

Windows Vista and Server 2008

Windows Vista was released on November 30, 2006 to business customers; consumer versions followed on January 30, 2007. Windows Vista was intended to have enhanced security, introducing a new restricted user mode called User Account Control and replacing the "administrator-by-default" philosophy of Windows XP. Vista was the target of much criticism and negative press, and in general was not well regarded; this was seen as contributing to the relatively swift release of Windows 7. One visible difference from earlier versions of Windows (Windows 95 and later) was that the original Start button was replaced with the Windows icon in a circle (called the Start Orb). Vista also featured new graphics features, the Windows Aero GUI, new applications (such as Windows Calendar, Windows DVD Maker and some new games including Chess, Mahjong, and Purble Place), Internet Explorer 7, Windows Media Player 11, and a large number of underlying architectural changes. Windows Vista had the version number NT 6.0. During its lifetime, Windows Vista had two service packs.

Windows Vista shipped in six editions:
Starter (only available in developing countries)
Home Basic
Home Premium
Business
Enterprise (only available to large business and enterprise)
Ultimate (combines both Home Premium and Enterprise)

All editions (except Starter) were available in both 32-bit and 64-bit versions. The biggest advantage of the 64-bit version was breaking the 4-gigabyte memory barrier, beyond which 32-bit computers cannot fully address RAM.

Windows Server 2008

Windows Server 2008, released on February 27, 2008, was originally known as Windows Server Codename "Longhorn". Windows Server 2008 built on the technological and security advances first introduced with Windows Vista, and was significantly more modular than its predecessor, Windows Server 2003.
Windows Server 2008 shipped in ten editions:
Windows Server 2008 Foundation (for OEMs only)
Windows Server 2008 Standard (32-bit and 64-bit)
Windows Server 2008 Enterprise (32-bit and 64-bit)
Windows Server 2008 Datacenter (32-bit and 64-bit)
Windows Server 2008 for Itanium-based Systems (IA-64)
Windows HPC Server 2008
Windows Web Server 2008 (32-bit and 64-bit)
Windows Storage Server 2008 (32-bit and 64-bit)
Windows Small Business Server 2008 (64-bit only)
Windows Essential Business Server 2008 (32-bit and 64-bit)

Windows 7 and Server 2008 R2

Windows 7 was released to manufacturing on July 22, 2009, and reached general retail availability on October 22, 2009. It was previously known by the codenames Blackcomb and Vienna. Windows 7 has the version number NT 6.1. Since its release, Windows 7 has had one service pack. Some features of Windows 7 were faster booting, Device Stage, Windows PowerShell, less obtrusive User Account Control, multi-touch, and improved window management. Features included with Windows Vista but not in Windows 7 include the sidebar (although gadgets remain) and several programs that were removed in favor of downloading their Windows Live counterparts.

Windows 7 shipped in six editions:
Starter (available worldwide)
Home Basic
Home Premium
Professional
Enterprise (available to volume-license business customers only)
Ultimate

In some countries (Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, United Kingdom, Greece, Hungary, Iceland, Ireland, Italy, Latvia, Liechtenstein, Lithuania, Luxembourg, Malta, Netherlands, Norway, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden, and Switzerland), there were other editions that lacked some features such as Windows Media Player, Windows Media Center and Internet Explorer; these editions carried names such as "Windows 7 N". Microsoft focused on selling Windows 7 Home Premium and Professional. All editions, except Starter, were available in both 32-bit and 64-bit versions. Unlike the corresponding Vista editions, the Professional and Enterprise editions were supersets of the Home Premium edition.

At the Professional Developers Conference (PDC) 2008, Microsoft also announced Windows Server 2008 R2 as the server variant of Windows 7. Windows Server 2008 R2 shipped in 64-bit versions (x64 and Itanium) only.

Windows Thin PC

In 2010, Microsoft released Windows Thin PC or WinTPC, a feature- and size-reduced, locked-down version of Windows 7 expressly designed to turn older PCs into thin clients. WinTPC was available to Software Assurance customers and relied on cloud computing in a business network. Wireless operation is supported, since WinTPC has full wireless stack integration, though it may not perform as well as a wired connection.

Windows Home Server 2011

Windows Home Server 2011, code-named "Vail", was released on April 6, 2011. Windows Home Server 2011 is built on the Windows Server 2008 R2 code base and removes the Drive Extender drive pooling technology found in the original Windows Home Server release. Windows Home Server 2011 is considered a "major release"; its predecessor was built on Windows Server 2003. WHS 2011 only supports x86-64 hardware. Microsoft discontinued Windows Home Server 2011 on July 5, 2012, folding its features into Windows Server 2012 Essentials. Windows Home Server 2011 was supported until April 12, 2016.
Windows 8 and Server 2012

On October 26, 2012, Microsoft released Windows 8 to the public. One edition, Windows RT, runs on some system-on-a-chip devices with mobile 32-bit ARM (ARMv7) processors. Windows 8 features a redesigned user interface, designed to make it easier for touchscreen users to use Windows. The interface introduced an updated Start menu known as the Start screen, and a new full-screen application platform. The desktop interface is also present for running windowed applications, although Windows RT will not run any desktop applications not included in the system. On the Building Windows 8 blog, it was announced that a computer running Windows 8 can boot up much faster than Windows 7. New features also include USB 3.0 support, the Windows Store, the ability to run from USB drives with Windows To Go, and others. Windows 8 was given the kernel number NT 6.2, with its successor 8.1 receiving the kernel number 6.3. Neither has had any service packs, although many consider Windows 8.1 to be a de facto service pack for Windows 8.

Windows 8 is available in the following editions:
Windows 8
Windows 8 Pro
Windows 8 Enterprise
Windows RT

The first public preview of Windows Server 2012 was shown by Microsoft at the 2011 Microsoft Worldwide Partner Conference. Windows 8 Release Preview and Windows Server 2012 Release Candidate were both released on May 31, 2012. Product development on Windows 8 was completed on August 1, 2012, and it was released to manufacturing the same day. Windows Server 2012 went on sale to the public on September 4, 2012. Windows 8 went on sale October 26, 2012.

Windows 8.1 and Windows Server 2012 R2 were released on October 17, 2013. Windows 8.1 is available as an update in the Windows Store for Windows 8 users and is also available to download for clean installation. The update adds new options for resizing the live tiles on the Start screen.

Windows 10 and later Server versions

Windows 10 is the current release of the Microsoft Windows operating system. Unveiled on September 30, 2014, it was released on July 29, 2015. It was distributed without charge to Windows 7 and 8.1 users for one year after release. New features such as Cortana, the Microsoft Edge web browser, the ability to view Windows Store apps in a window instead of fullscreen, virtual desktops, revamped core apps, Continuum, and a unified Settings app all debuted in Windows 10.

Stable releases

Version 1507 (codenamed Threshold 1) was the original version of Windows 10, released in July 2015.

Version 1511, announced as the November Update and codenamed Threshold 2, was released in November 2015. This update added many visual tweaks, such as more consistent context menus and the ability to change the color of window titlebars. Windows 10 can now be activated with a product key for Windows 7 and later, thus simplifying the activation process and essentially making Windows 10 free for anyone who has Windows 7 or later, even after the free upgrade period ended. A "Find My Device" feature was added, allowing users to track their devices if they lose them, similar to the Find My iPhone service that Apple offers. Controversially, the Start menu now displays "featured apps". A few tweaks were added to Microsoft Edge, including tab previews and the ability to sync the browser with other devices running Windows 10. Kernel version number: 10.0.10586.

Version 1607, announced as the Anniversary Update and codenamed Redstone 1.
It was the first of several planned updates with the "Redstone" codename. Its version number, 1607, means that it was supposed to launch in July 2016; however, it was delayed until August 2016. Many new features were included in this version, including deeper integration with Cortana, a dark theme, browser extension support for Microsoft Edge, click-to-play Flash by default, tab pinning, web notifications, swipe navigation in Edge, and the ability for Windows Hello to use a fingerprint sensor to sign into apps and websites, similar to Touch ID on the iPhone. Also added was Windows Ink, which improves digital inking in many apps, and the Windows Ink Workspace, which lists pen-compatible apps as well as quick shortcuts to a sticky notes app and a sketchpad. Microsoft, through its partnership with Canonical, integrated a full Ubuntu bash shell via the Windows Subsystem for Linux. Notable tweaks in this version of Windows 10 include the removal of the controversial password-sharing feature of Microsoft's Wi-Fi Sense service, a slightly redesigned Start menu, a Tablet Mode working more like Windows 8, overhauled emoji, improvements to the lock screen, calendar integration in the taskbar, and the Blue Screen of Death now showing a QR code which users can scan to quickly find out what caused the error. This version of Windows 10's kernel version is 10.0.14393.

Version 1703, announced as the Creators Update and codenamed Redstone 2. Features for this update include a new Paint 3D application, which allows users to create and modify 3D models; integration with Microsoft's HoloLens and other "mixed-reality" headsets produced by other manufacturers; Windows My People, which allows users to manage contacts; Xbox game broadcasting; support for newly developed APIs such as WDDM 2.2; Dolby Atmos support; improvements to the Settings app; and more Edge and Cortana improvements. This version also included tweaks to system apps, such as an address bar in the Registry Editor, Windows PowerShell becoming the default command line interface instead of the Command Prompt, and the Windows Subsystem for Linux being upgraded to support Ubuntu 16.04. This version of Windows 10 was released on April 11, 2017 as a free update.

Version 1709, announced as the Fall Creators Update and codenamed Redstone 3. It introduced a new design language, the Fluent Design System, and incorporated it into UWP apps such as Calculator. It also added new features to the Photos application which were once available only in Windows Movie Maker.

Version 1803, announced as the April 2018 Update and codenamed Redstone 4, introduced Timeline, an upgrade to the Task View screen that shows past activities and lets users resume them. The respective icon on the taskbar was also changed to reflect this upgrade. Strides were taken to incorporate Fluent Design into Windows, including Acrylic transparency in the taskbar and taskbar flyouts. The Settings app was also redesigned with an Acrylic left pane. Variable fonts were introduced.

Version 1809, announced as the Windows 10 October 2018 Update and codenamed Redstone 5, introduced, among other new features, a dark mode for File Explorer, the Your Phone app to link an Android phone with Windows 10, a new screenshot tool called Snip & Sketch, a Make Text Bigger setting for easier accessibility, and clipboard history with cloud sync.

Version 1903, announced as the Windows 10 May 2019 Update, codenamed 19H1, was released on May 21, 2019.
It added many new features, including a light theme for the Windows shell and a new feature known as Windows Sandbox, which allows users to run programs in a throwaway virtual window.

Version 1909, announced as the Windows 10 November 2019 Update, codenamed 19H2, was released on November 12, 2019. It unlocked many features that were already present, but hidden or disabled, in 1903, such as an auto-expanding menu on Start while hovering the mouse over it, OneDrive integration in Windows Search, and creating events from the taskbar's clock. Some PCs with version 1903 had already enabled these features without installing 1909.

Version 2004, announced as the Windows 10 May 2020 Update, codenamed 20H1, was released on May 27, 2020. It introduced several new features, such as the ability to rename virtual desktops, GPU temperature and disk type display in Task Manager, a chat-based interface and windowed appearance for Cortana, and cloud reinstallation and quick searches (depending on region) for the search home.

Version 20H2, announced as the Windows 10 October 2020 Update, codenamed 20H2, was released on October 20, 2020. It introduced resizable Start menu panels, a graphing mode for Calculator, a process architecture view in Task Manager's Details pane, optional driver delivery from Windows Update, and an updated in-use location icon on the taskbar.

Version 21H1, announced as the Windows 10 May 2021 Update, codenamed 21H1, was released on May 18, 2021.

Version 21H2, announced as the Windows 10 November 2021 Update, codenamed 21H2, was released on November 16, 2021.

Windows Server 2016

Windows Server 2016 is a release of the Microsoft Windows Server operating system that was unveiled on September 30, 2014. Windows Server 2016 was officially released at Microsoft's Ignite Conference, September 26–30, 2016. It is based on the Windows 10 Anniversary Update codebase.

Windows Server 2019

Windows Server 2019 is a release of the Microsoft Windows Server operating system that was announced on March 20, 2018. The first Windows Insider preview version was released on the same day. It was released for general availability on October 2, 2018. Windows Server 2019 is based on the Windows 10 October 2018 Update codebase. On October 6, 2018, distribution of Windows version 1809 (build 17763) was paused while Microsoft investigated an issue with user data being deleted during an in-place upgrade. It affected systems where a user profile folder (e.g. Documents, Music or Pictures) had been moved to another location, but data was left in the original location. As Windows Server 2019 is based on the Windows version 1809 codebase, it too was removed from distribution at the time, but was re-released on November 13, 2018. The software product life cycle for Server 2019 was reset in accordance with the new release date.

Windows Server 2022

Windows Server 2022 is a release of the Microsoft Windows Server operating system which is based on the Windows 10 November 2021 Update codebase.

Windows 11

Windows 11 is the next major release of Windows NT and the successor to Windows 10. Codenamed "Sun Valley", it was unveiled on June 24, 2021, and was released on October 5, 2021. It is distributed for free to all Windows 10 users with compatible PCs via Windows Update. Microsoft's PC Health Check app lets users check their PC's compatibility. According to Microsoft, Windows 11 would be released for newer PCs first, with the initial rollout continuing into mid-2022.
Windows 11 revamps the GUI and modernizes much of the underlying code, which Microsoft says makes it faster than Windows 10. Windows 11 updates are also significantly compressed, so they download faster. In addition, Windows 11 installs updates during the "Update and Restart" phase without showing a separate "Installing Updates" screen, reportedly finishing updates within about five minutes.

Windows 365

On July 14, 2021, Microsoft announced Windows 365. Since it runs in the cloud and is streamed to the user's device, it can be used from many devices, even smartphones. It is aimed mainly at business users, and it supports both Windows 10 and Windows 11.

See also
Comparison of operating systems
History of operating systems
List of Microsoft codenames

References

Further reading

History of Microsoft Microsoft Windows Windows OS/2
Sistema 700

Sistema 700 was a professional personal microcomputer produced by the Brazilian computer company Prologica in the early 1980s.

General information

Based on the Zilog Z80A 8-bit, 4 MHz microprocessor, it had a 64 KiB RAM configuration and two 5-1/4" floppy disk drives with capacity for up to 320 KiB of storage. Its operating system was DOS-700, a version adapted by Prologica's software engineering department from CP/M-80. It achieved relative commercial success in financial, database and engineering applications.

Due to its compatibility with the popular CP/M system, various applications such as Fortran ANS, a BASIC compiler, a COBOL ANSI 74 compiler, Algol, Pascal, PL/I, MUMPS/M, RPG and Faturol C could be used. Other applications like word processors (WordStar), spreadsheets (CalcStar) and databases (DataStar and dBase II) were also compatible. User applications could be programmed in BASIC, Cobol-80 and Fortran.

Models

Sistema 700 (1981 - vapourware): Initial model announced in 1981, but never went into production.
Super Sistema 700 (1981): Final version with graphite-colored cabinet and rounded contours.

Data storage

Data storage could also be done on audio cassette. Audio cables were supplied with the computer for connection to a regular tape recorder.

Accessories

P-720 printer.

Bibliography

Micro Computador - Curso Básico. Rio de Janeiro: Rio Gráfica, 1984, v. 1, pp. 49–50.

References

Computer-related introductions in 1981
Win4Lin

Win4Lin is a discontinued proprietary software application for Linux which allowed users to run a copy of Windows 9x, Windows 2000 or Windows XP, and Windows applications, on their Linux desktop. Win4Lin was based on Merge software, a product which changed owners several times until it was bought by Win4Lin Inc. Citing changes in the desktop virtualization industry, the software's publisher, Virtual Bridges, discontinued Win4Lin Pro.

Products and technology

In 2006, Win4Lin came in three different versions, depending on the virtualization requirements of the user. Win4Lin 9x allowed the user to run a full copy of Windows 98 or Windows Me inside a virtual machine. Win4Lin Home allowed users to emulate applications only. Win4Lin Pro offered users the ability to install a fully virtualized Windows 2000 or Windows XP.

Win4Lin 9x/Pro (henceforth the only technology discussed in this section) operates by running Windows applications in a virtual machine. Unlike Wine or CrossOver, which reimplement the Windows API rather than running Windows itself, virtualization-based software such as VMware or Win4Lin requires users to have a Windows license in order to run applications, since a full copy of Windows must be installed within the virtual machine. Unlike VMware, however, Win4Lin provides the virtual guest operating system with access to the native Linux filesystem, and allows the Linux host to access the guest's files even when the virtual machine is not running. In addition to the convenience this offers, Computerworld found in their 2002 review that Win4Lin gained significant performance over VMware by using the native Linux filesystem, but also noted that this approach (unlike VMware's) limited a Win4Lin machine to a single installed version of Windows.

When the Win4Lin application starts, it displays a window on the Linux desktop which contains the Windows desktop environment. Users can then install or run applications as they normally would from within Windows. Win4Lin supports Linux printers, internet connections, and Windows networking, but does not support DirectX and, by extension, most Windows games. The company also offered Win4BSD for FreeBSD.

History

Win4Lin was initially based on Merge software originally developed at Locus Computing Corporation, which changed hands several times until it ended up in the assets of NeTraverse; those assets were purchased in 2005 by Win4Lin Inc., which introduced Win4Lin Pro Desktop. This was based on a "tuned" version of QEMU and KQEMU, and it hosted Windows NT versions of Windows. In June 2006, Win4Lin released Win4VDI for Linux, based on the same code base. Win4VDI for Linux served Microsoft Windows desktops to thin clients from a Linux server.

Virtual Bridges discontinued support for Win4Lin 9x in 2007. The Win4Lin Pro Desktop product ceased to be supported in March 2010.

Reception

Many users reported that the 9x version ran Windows software at near-native speed, even on quite low-powered machines such as Pentium IIs. Nicholas Petreley praised Win4Lin in two of his columns in 2000 for its significantly faster performance than its competitor VMware.
See also
x86 virtualization

References

External links
Win4Lin 5.0 makes big improvements, Linux.com, 2008
Win4Lin Pro Desktop 4.0 lags behind free alternatives, Linux.com, 2007
Break the Hardware Upgrade Cycle with Win4Lin Windows Virtual Desktop Server, Linux Journal, 2007
Run Windows On Linux: Win4Lin Revisited [Win4Lin Pro 3.0 review], Tom's Hardware, 2006
INQUIRER helps debug Win4Lin Pro [2.7], The Inquirer, 2006
Product Review — Running Windows on Linux, Win4Lin 2.7 vs. VMware Workstation 5.5.1., Open Source Magazine, 2006
Review: Win4Lin Pro [2.0], Linux.com, 2005
A Look at Win4Lin 5.1, OSNews, 2004
Review of Win4Lin 4.0, OSNews, 2002
VMware Express 2.0 and Win4Lin 2.0: A Comparison Review, Linux Journal, 2001
TreLOS's Win4Lin (2000)

Virtualization software
Linux emulation software
Discontinued software
Tim Paterson

Tim Paterson (born 1 June 1956) is an American computer programmer, best known for creating 86-DOS, an operating system for the Intel 8086. This system emulated the application programming interface (API) of CP/M, which was created by Gary Kildall. 86-DOS later formed the basis of MS-DOS, the most widely used personal computer operating system in the 1980s.

Biography

Paterson was educated in the Seattle Public Schools, graduating from Ingraham High School in 1974. He attended the University of Washington, working as a repair technician for The Retail Computer Store in the Green Lake area of Seattle, Washington, and graduated magna cum laude with a degree in Computer Science in June 1978. He went to work for Seattle Computer Products (SCP) as a designer and engineer. He designed the hardware of Microsoft's Z-80 SoftCard, which had a Z80 CPU and ran the CP/M operating system on an Apple II. A month later, Intel released the 8086 CPU, and Paterson went to work designing an S-100 8086 board, which went to market in November 1979. The only commercial software that existed for the board was Microsoft's Standalone Disk BASIC-86. The standard CP/M operating system of the time was not available for this CPU, and without a true operating system, sales were slow. Paterson began work on QDOS (Quick and Dirty Operating System) in April 1980 to fill that void, copying the APIs of CP/M from sources including the published CP/M manual so that it would be highly compatible. QDOS was soon renamed 86-DOS. Version 0.10 was complete by July 1980. By version 1.14, 86-DOS had grown to 4,000 lines of assembly code.

In December 1980, Microsoft secured the rights to market 86-DOS to other hardware manufacturers. While acknowledging that he made 86-DOS compatible with CP/M, Paterson has maintained that the 86-DOS program was his original work and has denied allegations that he referred to CP/M code while writing it. When a book appeared in 2004 claiming that 86-DOS was an unoriginal "rip-off" of CP/M, Paterson sued the authors and publishers for defamation. The judge found that Paterson failed to "provide any evidence regarding 'serious doubts' about the accuracy of the Gary Kildall chapter. Instead, a careful review of the Lefer notes ... provides a research picture tellingly close to the substance of the final chapter", and the case was dismissed on the basis that the book's claims were constitutionally protected opinions and not provably false.

Paterson left SCP in April 1981 and worked for Microsoft from May 1981 to April 1982. After a brief second stint with SCP, Paterson started his own company, Falcon Technology, a.k.a. Falcon Systems. In 1983, Microsoft contracted Paterson to port MS-DOS to the MSX computer standard it was developing with ASCII. Paterson accepted the contract to help fund his company and completed the work on the MSX-DOS operating system in 1984. Falcon Technology was bought by Microsoft in 1986 to reclaim one of the two issued royalty-free licenses for MS-DOS (the other belonging to SCP), and eventually became part of Phoenix Technologies. Paterson worked a second stint with Microsoft from 1986 to 1988, and a third from 1990 to 1998, during which time he worked on Visual Basic. After leaving Microsoft a third time, Paterson founded another software development company, Paterson Technology, and also made several appearances on the Comedy Central television program BattleBots.
Paterson has also raced rally cars in the SCCA Pro Rally series, and engineered his own trip computer, which he integrated into the axle of a four-wheel-drive Porsche 911.

References

External links
Paterson Technology, a company founded by Tim Paterson
Tim Paterson's blog

American computer programmers
Microsoft employees
DOS people
MSX-DOS
1956 births
Living people
University of Washington alumni
CMS file system

The CMS file system is the native file system of IBM's Conversational Monitor System (CMS), a component of VM. It was the only file system for CMS until the introduction of the CMS Shared File System with VM/SP.

Minidisks

CP-67 and VM allow an installation to divide a disk volume into virtual disks called minidisks. A minidisk may be a CMS minidisk, initialized with the CMS file system. Other minidisks might be formatted for use by, e.g., OS/360, but these are not CMS minidisks even if they are assigned to a CMS virtual machine.

A CMS virtual machine can have up to ten minidisks accessed at one time. The user references the minidisks by a letter, part of a field called the filemode. The S disk contains CMS system files and is read-only; the Y disk is usually an extension of S. The read/write A disk contains user files such as customization data, program sources, and executables. Other drive letters B through Z can contain data as defined by the user. If a file is opened without a filemode letter specified (FILENAME FILETYPE *), the disks are searched in alphabetic order. The second character of the filemode is a number indicating read, write, and sharing attributes.

The ACCESS command is used to access a minidisk. For example:
ACCESS 191 A
would access the virtual disk assigned to this user as unit "191" (virtual channel and unit address) as minidisk "A".

A CMS minidisk in early versions of CMS is formatted into 800-byte blocks. Later versions of CMS allow minidisks formatted as 1024-, 2048-, or 4096-byte blocks, which increased the limits described here to 2³¹ disk blocks and 2³¹ records. The first two blocks on a minidisk are reserved for IPL. The third block contains the label identifying the minidisk. The fourth block, called the Master File Directory or MFD, is the directory header for the minidisk. The MFD also contains a bitmap called QMSK indicating the status of each 800-byte block on disk, used for allocation. Following the MFD, all record types may be scattered and intermixed on a disk.

File system structure

CMS uses a flat file system. The MFD contains an array of disk addresses of blocks containing File Status Table (FST) (directory) entries. Each FST block contains twenty 40-byte FST entries, each describing a file. Each FST entry points to the first chain link block for the file. The first chain link block contains the disk addresses of up to 40 additional chain link blocks, followed by the disk addresses of up to 60 data blocks. The remaining chain link blocks each contain the disk addresses of up to 400 data blocks. This results in a maximum size of 16,060 800-byte blocks, or 12,848,000 bytes, for any CMS file. The maximum number of records in one file is 65,533. Records are usually called items in CMS terminology.

CMS files can have either fixed or variable record format; record types may not be mixed in a file. For fixed-length records the length is defined by FSTLRECL, and the location of any fixed-length record can be computed as (item_number − 1) × record_length / 800: the quotient is the block number and the remainder is the offset of the item within the block. Variable-length records have a maximum length of FSTLRECL bytes, and are preceded by a two-byte record length field indicating the actual length.
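Since the fixed-record arithmetic is plain integer division, it can be shown directly. The C sketch below is illustrative only; the function name and its 1-based item numbering are assumptions made for the example, and it follows the quotient/remainder rule above for 800-byte blocks:

#include <stdio.h>

#define CMS_BLOCK_SIZE 800   /* early CMS minidisk block size */

/* Locate a fixed-length record: item numbers are 1-based, as in CMS.
   The quotient is the data block number and the remainder is the
   offset of the item within that block. */
static void locate_item(unsigned long item_number, unsigned long record_length,
                        unsigned long *block, unsigned long *offset)
{
    unsigned long byte_pos = (item_number - 1) * record_length;
    *block  = byte_pos / CMS_BLOCK_SIZE;
    *offset = byte_pos % CMS_BLOCK_SIZE;
}

int main(void)
{
    unsigned long block, offset;
    locate_item(13, 133, &block, &offset);   /* hypothetical 133-byte records */
    printf("item 13 lives in data block %lu at offset %lu\n", block, offset);
    return 0;
}

For item 13 of a file of 133-byte records, this yields data block 1 (the second data block) at offset 796, so no separate index of record positions is ever needed for fixed-format files.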
In 1979, Virtual Machine/System Extensions (VM/SE or SEPP) Release 2 and Virtual Machine/Basic System Extensions (VM/BSE or BSEPP) Release 2 provided an enhancement to the original CMS file system, called Enhanced Disk Format (EDF), that allows larger files by expanding the FST and introducing multiple levels of chain link blocks.

See also
IBM System/360 architecture
ESA/390
z/Architecture

Notes

References

Disk file systems
IBM file systems
IBM mainframe operating systems
VM (operating system)
Thoth (operating system)

Thoth is a real-time, message-passing operating system (OS) developed at the University of Waterloo in Waterloo, Ontario, Canada.

History

Thoth was developed at the University of Waterloo. The curriculum at Waterloo includes a real-time operating systems course and an associated "train lab", where students must develop a real-time operating system (RTOS) to control a model track with multiple trains. In 1972, the B programming language, a derivative of BCPL, was brought to Waterloo by Stephen C. Johnson while on sabbatical from Bell Labs. A new language derived from B, named Eh, was developed at Waterloo. Thoth was originally written in Eh with some assembly language. Initial development of Thoth occurred on a Honeywell 6050 computer. It was first run on a Data General Nova 2 in May 1976, and was next ported to a Texas Instruments TI990/10 in August 1976. In October 1976, the University of Waterloo published Laurence S. Melen's Master's thesis, titled "A Portable Real-Time Executive, Thoth". Eh was later upgraded, in part with the addition of data types, and renamed Zed. Thoth was then rewritten in Zed.

One of the early principal developers of Thoth was David Cheriton. Cheriton would go on to develop the Verex kernel and the V-System OS, both influenced by Thoth. Another early developer was Michael Malcolm, who would later found Waterloo Microsystems, Network Appliance, Inc., Blue Coat Systems, and Kaleidescape, several of whose operating systems are believed to have been derived from or influenced by Thoth.

Certain papers describe DEMOS as the inspiration for Thoth. As prior art, Cheriton cited Per Brinch Hansen's RC 4000, then listed Thoth, DEMOS, and Accent together as later developments. Other influences on the development of Thoth included Multics, Data General's RTOS, Honeywell GCLS, and Unix. Later references cite Thoth as the original implementation of its particular use of synchronous message passing and multiprocess program structure, which were subsequently applied by other projects. Work on Thoth ended around 1982.

Features

Thoth was developed to meet four goals:
Easily portable to other hardware
Programs run as a set of inexpensive, cooperating concurrent processes with efficient inter-process communication (IPC)
Suitable for real-time use, with respect to system response to external events
Adaptable and scalable to a wide range of real-time uses

Thoth exposes the same abstract machine to application software, regardless of the underlying physical machine. This abstract machine was defined with certain minimal requirements, such that meeting these requirements allowed a given computer to be included in the Thoth Domain of potential Thoth port targets.

Processes running under Thoth can be grouped into "teams". All processes within a team share a common address space and can share data. This is similar to other systems' concepts of "lightweight processes" or threads. Processes that are not members of the same team communicate using Thoth's IPC.

Inter-process communication in Thoth is primarily accomplished by means of synchronous message passing. Because the sender remains blocked until the receiver replies, messages never pile up; this approach greatly simplified message queueing. Although the term was not current when the original papers were written, Thoth has been called a microkernel. Thoth's synchronous message-passing IPC lent itself to the application of an anthropomorphic programming model, building on the work of Carl Hewitt's actor model, and of Smalltalk.
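To make the send/receive/reply pattern concrete, the toy C model below uses two POSIX threads to stand in for two Thoth processes. The function names and the single-mailbox design are assumptions made for illustration in the style Thoth popularized (and that descendants such as QNX adopted); this is not Thoth's actual interface:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static char mailbox[64];                    /* one in-flight message */
static enum { IDLE, SENT, REPLIED } state = IDLE;

/* Client side: deposit a message, then block until the server replies. */
static void Send(const char *msg, char *reply, size_t len)
{
    pthread_mutex_lock(&lock);
    strncpy(mailbox, msg, sizeof mailbox - 1);
    state = SENT;
    pthread_cond_broadcast(&cond);          /* wake the receiver */
    while (state != REPLIED)                /* sender stays blocked ... */
        pthread_cond_wait(&cond, &lock);    /* ... until the reply comes */
    strncpy(reply, mailbox, len - 1);
    state = IDLE;
    pthread_mutex_unlock(&lock);
}

/* Server side: block until a message arrives, then reply to it. */
static void *server(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    while (state != SENT)                   /* receive: wait for a sender */
        pthread_cond_wait(&cond, &lock);
    printf("server received: %s\n", mailbox);
    strcpy(mailbox, "pong");                /* reply: hand back an answer */
    state = REPLIED;
    pthread_cond_broadcast(&cond);          /* unblock the sender */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    char reply[64] = "";
    pthread_create(&t, NULL, server, NULL);
    Send("ping", reply, sizeof reply);      /* blocks until the reply */
    printf("client got reply: %s\n", reply);
    pthread_join(t, NULL);
    return 0;
}

Because the sender cannot proceed until it is replied to, a single one-slot mailbox suffices and no queue of pending messages ever forms; this blocking discipline is what kept Thoth's message handling simple.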
Legacy

The Thoth operating system provided either the basis or the inspiration for several later projects, some of which are listed below.

Academic

The microNet distributed file server system at the University of Waterloo ran on an operating system named WatSys that was similar to Thoth and Port. WatSys debuted in 1981.

The National Research Council of Canada was the development home of the Harmony operating system, a derivative of Thoth oriented towards real-time robot control.

Cheriton took a position at the University of British Columbia, where he was involved in developing Verex, and Distributed Verex, using many of the ideas he had earlier explored in Thoth. Cheriton later moved to Stanford University in the US, where he developed the V-System, which continued to build on his earlier work with Thoth.

The Sylvan multiprocessing system's architecture included a coprocessor that implemented Thoth's synchronous message-passing primitives (and Ada's extended rendezvous) in hardware.

Thoth and its message-passing IPC were used to underpin a multi-process paint program that employed the anthropomorphic programming model.

Thoth's message-passing semantics were part of an experimental parallel-processing version of the computer algebra system (CAS) Maple.

The distributed Process Execution And Communication Environment (PEACE) was developed for high-performance applications. The PEACE paper cites Thoth as a "major foundation" for the project.

The Eindhoven Multi-Processor System (EMPS) executive put an emphasis on efficiency. Thoth provided the inspiration for the design of the EMPS kernel.

An experimental human-computer interface environment named the Room system was built on Waterloo Port, which was derived from Thoth and which used its IPC techniques. The Room paper references earlier Thoth papers.

The Flash web server, a research project with an emphasis on efficiency and portability, was said to resemble Thoth in its method of multi-process structuring and its concept of process teams communicating via message passing.

Commercial

Gordon Bell and Dan Dodge, developers of the QNX message-passing real-time operating system, both worked with Thoth while they were students at Waterloo.

AT&T's System 75 Office Communication System was controlled by the Oryx kernel and the Pecos set of essential system processes, jointly referred to as Oryx/Pecos. It used ideas from Thoth, DEMOS, and an internal AT&T project.

The commercial Waterloo Port network operating system was derived from Thoth. The associated Zed language was upgraded to become the PORT language for Waterloo Port. Hayes Microcomputer Products acquired Waterloo Microsystems, and rebranded and upgraded the Waterloo Port product to create LANstep.

The Auspex storage company produced the Functional Multiprocessing Kernel (FMK), which employed concepts identified as having been first developed in Thoth. Unlike the V-System and Waterloo Port, FMK had no memory management.

Early versions of Network Appliance, Inc.'s storage appliance operating system have been described as being very similar to Thoth. NetApp's OS was written by David Hitz, who had previously been at Auspex.

In 1996, the CacheFlow web acceleration appliance company released their CacheOS, which was based on Thoth. In 2001, CacheFlow was renamed Blue Coat Systems and, with the addition of a policy engine, CacheOS became the Secure Gateway Operating System (SGOS).
References

Further reading

Real-time operating systems
Microkernel-based operating systems
University of Waterloo
Operating system families
Exit (system call)

On many computer operating systems, a computer process terminates its execution by making an exit system call. More generally, an exit in a multithreading environment means that a thread of execution has stopped running. For resource management, the operating system reclaims the resources (memory, files, etc.) that were used by the process. The process is said to be a dead process after it terminates.

How it works

Under Unix and Unix-like operating systems, a process is started when its parent process executes a fork system call. The parent process may then wait for the child process to terminate, or may continue execution (possibly forking off other child processes). When the child process terminates ("dies"), either normally by calling exit, or abnormally due to a fatal exception or signal (e.g., SIGTERM, SIGINT, SIGKILL), an exit status is returned to the operating system and a SIGCHLD signal is sent to the parent process. The exit status can then be retrieved by the parent process via the wait system call.

Most operating systems allow the terminating process to provide a specific exit status to the system, which is made available to the parent process. Typically this is an integer value, although some operating systems (e.g., Plan 9 from Bell Labs) allow a character string to be returned. Systems returning an integer value commonly use a zero value to indicate successful execution and non-zero values to indicate error conditions. Other systems (e.g., OpenVMS) use even-numbered values for success and odd values for errors. Still other systems (e.g., IBM z/OS and its predecessors) use ranges of integer values to indicate success, warning, and error completion results.

Clean up

The exit operation typically performs clean-up operations within the process space before returning control back to the operating system. Some systems and programming languages allow user subroutines to be registered so that they are invoked at program termination before the process actually terminates for good. As the final step of termination, a primitive system exit call is invoked, informing the operating system that the process has terminated and allowing it to reclaim the resources used by the process. It is sometimes possible to bypass the usual cleanup; C99 offers the _exit() function, which terminates the current process without any extra program clean-up. This may be used, for example, in a fork-exec routine when the exec call fails to replace the child process; calling atexit routines would erroneously release resources belonging to the parent.

Orphans and zombies

Some operating systems handle a child process whose parent process has terminated in a special manner. Such an orphan process becomes a child of a special root process, which then waits for the child process to terminate. Likewise, a similar strategy is used to deal with a zombie process, which is a child process that has terminated but whose exit status is ignored by its parent process. Such a process becomes the child of a special parent process, which retrieves the child's exit status and allows the operating system to complete the termination of the dead process. Dealing with these special cases keeps the system process table in a consistent state.

Examples

The following programs terminate and return a success exit status to the system.

COBOL:
IDENTIFICATION DIVISION.
PROGRAM-ID. SUCCESS-PROGRAM.
PROCEDURE DIVISION.
MAIN.
    MOVE ZERO TO RETURN-CODE.
END PROGRAM.
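C (a sketch worth adding alongside the examples in this list, since the exit interface itself is usually described in C terms): the program below terminates the child with a success status and shows the parent retrieving that status with the POSIX fork and waitpid calls described above; the atexit handler illustrates the clean-up step, and its name and message are invented for the example:

#include <stdio.h>
#include <stdlib.h>     /* exit(), atexit() */
#include <sys/types.h>
#include <sys/wait.h>   /* waitpid(), WIFEXITED, WEXITSTATUS */
#include <unistd.h>     /* fork() */

static void cleanup(void)
{
    /* Registered handlers run during normal termination,
       before control returns to the operating system. */
    fprintf(stderr, "child cleaning up\n");
}

int main(void)
{
    pid_t pid = fork();           /* create the child process */
    if (pid == 0) {               /* child */
        atexit(cleanup);          /* register a termination handler */
        exit(0);                  /* terminate with a success status */
    }
    int status;                   /* parent */
    waitpid(pid, &status, 0);     /* collect the child's exit status */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}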
Fortran:
program wiki
    call exit(0)
end program wiki

Java:
public class Success {
    public static void main(String[] args) {
        System.exit(0);
    }
}

Pascal:
program wiki;
begin
    ExitCode := 0;
    exit
end.

DR-DOS batch file:
exit 0

Perl:
#!/usr/bin/env perl
exit;

PHP:
<?php
exit(0);

Python:
#!/usr/bin/env python3
import sys
sys.exit(0)

Unix shell:
exit 0

DOS assembly:
; For MASM/TASM
.MODEL SMALL
.STACK
.CODE
main PROC NEAR
    MOV AH, 4Ch ; Service 4Ch - Terminate with Error Code
    MOV AL, 0   ; Error code
    INT 21h     ; Interrupt 21h - DOS General Interrupts
main ENDP
END main        ; Starts at main

Some programmers may prepare everything for INT 21h at once:
    MOV AX, 4C00h ; replace the 00 with your error code in HEX

Linux 32-bit x86 Assembly:
; For NASM
MOV EAX, 1  ; Function 1: exit()
MOV EBX, 0  ; Return code
INT 80h     ; Passes control to interrupt vector
            ; invokes system call—in this case system call
            ; number 1 with argument 0

# For GAS
.text
.global _start
_start:
    movl $1, %eax   # System call number 1: exit()
    movl $0, %ebx   # Exits with exit status 0
    int $0x80       # Passes control to interrupt vector
                    # invokes system call—in this case system call
                    # number 1 with argument 0

Linux 64-bit x86-64 Assembly:
; For FASM
format ELF64 executable 3
entry start

segment readable executable

start:
    ; STUFF
    ; exiting
    mov eax, 60  ; sys_exit syscall number: 60
    xor edi, edi ; set exit status to 0 (`xor edi, edi` is equal to `mov edi, 0`)
    syscall      ; call it

OS X 64-bit x86-64 Assembly:
; For NASM
global _main

section .text
_main:
    mov rax, 0x02000001 ; sys_exit syscall number: 1 (add 0x02000000 for OS X)
    xor rdi, rdi        ; set exit status to 0 (`xor rdi, rdi` is equal to `mov rdi, 0`)
    syscall             ; call exit()

Windows

On Windows, a program can terminate itself by calling the ExitProcess or RtlExitUserProcess function.

See also
exit (command)
Child process
Execution
Exit status
Fork
Kill command
Orphan process
Process
Parent process
SIGCHLD
Task
Thread
Wait

References

External links
C++ reference for std::exit

Articles with example C code
C standard library
Process (computing)
POSIX
Process termination functions
TYPE (DOS command)

In computing, type is a command in various command-line interpreters (shells) such as COMMAND.COM, cmd.exe, 4DOS/4NT and Windows PowerShell used to display the contents of specified files on the computer terminal. The analogous Unix command is cat.

Implementations

The command is available in the operating systems DEC RT-11, OS/8, RSX-11, TOPS-10, TOPS-20, VMS, Digital Research CP/M, MP/M, MetaComCo TRIPOS, Heath Company HDOS, AmigaDOS, DOS, FlexOS, TSL PC-MOS, SpartaDOS X, IBM/Toshiba 4690 OS, IBM OS/2, Microsoft Windows, ReactOS, AROS, and SymbOS. The type command is supported by Tim Paterson's SCP 86-DOS. On MS-DOS, the command is available in versions 1 and later. DR DOS 6.0 also includes an implementation of the command. It is also available in the open-source MS-DOS emulator DOSBox and the EFI shell. In Windows PowerShell, type is a predefined command alias for the Get-Content cmdlet, which basically serves the same purpose. type originated as an internal command in 86-DOS. The command syntax and feature set can differ between operating systems and command shell implementations, as can be seen in the following examples.

DEC RT-11

In Digital Equipment Corporation's RT-11, the command accepts up to six input file specifications. Multiple file specifications are separated with commas. The default filetype is .LST. Wildcards are accepted in place of filenames or filetypes.

Syntax

The command syntax on RT-11 is:
TYPE[/options] filespecs

COPIES:n – Specify the number of times the file will be typed
DELETE – Delete the file after typing it
LOG – Log the names of the files typed
NEWFILES – Only files dated with the current system date will be typed
NOLOG – Suppress the log of the files typed
QUERY – Require confirmation before typing each file
WAIT – Wait for user response before proceeding with the type

Examples
TYPE/COPIES:3 REPORT
TYPE/NEWFILES *.LST

DR CP/M, MP/M, FlexOS

In Digital Research CP/M, the command expands tabs (CTRL-I characters), assuming tab positions are set at every eighth column. The command does not support wildcard characters on FlexOS.

Syntax

The command syntax on CP/M is:
TYPE ufn
Note: ufn = unambiguous file reference

In MP/M, the command has a pause mode. It is specified by entering a 'P' followed by two decimal digits after the filename. The specified number of lines will be displayed, and then the command will pause until a carriage return is entered.

Examples
A>TYPE FILE.PLM
A>TYPE B:X.PRN
0A>TYPE CODE.ASM P23

TSL PC-MOS

The Software Link's PC-MOS includes an implementation of TYPE. Like the rest of the operating system, it is licensed under the GPL v3. It supports an option to display the file content in hexadecimal form.

Syntax

The command syntax on PC-MOS is:
.TYPE filename [/h]

filename – The name of the file to display
/h – Display content in hexadecimal form

Examples
[A:\].TYPE FILE.BIN /h

Microsoft Windows, OS/2, ReactOS

The command supports wildcard characters. In Microsoft Windows and OS/2 it includes the filename in the output when typing multiple files.

Syntax

The command syntax on Microsoft Windows and ReactOS is:
type [Drive:][Path]FileName

[Drive:][Path]FileName – This parameter specifies the location and name of the file or files to view. Multiple file names need to be separated with spaces.
/? – This parameter displays help for the command.
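Whatever the platform-specific options, the core of every implementation described in this article reduces to the same loop: open each named file and copy its bytes to the terminal. The following C sketch illustrates that idea only and is not the source of any listed system; the filename echo when several files are given mimics the Windows behavior noted above:

#include <stdio.h>

int main(int argc, char *argv[])
{
    for (int i = 1; i < argc; i++) {
        FILE *f = fopen(argv[i], "r");
        if (f == NULL) {
            perror(argv[i]);              /* report unreadable files */
            continue;
        }
        if (argc > 2)
            printf("\n%s\n\n", argv[i]);  /* echo the name when typing
                                             several files */
        int c;
        while ((c = fgetc(f)) != EOF)
            putchar(c);                   /* copy bytes to the terminal */
        fclose(f);
    }
    return 0;
}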
Examples
C:\>type "my report.txt"
C:\>type *.txt

See also
List of DOS commands
List of Unix commands

References

Further reading

External links
type | Microsoft Docs

CP/M commands
Internal DOS commands
MSX-DOS commands
OS/2 commands
ReactOS commands
Windows commands
Microcomputer software
Windows administration
BackSlash Linux

BackSlash Linux was an Ubuntu- and Debian-based operating system developed in India by Kumar Priyansh for AMD64 and Intel x64-based personal computers. It was based on free software, and every release of the operating system was named after characters from the Disney film franchise Frozen. Since the third major release, BackSlash Linux Olaf, BackSlash Linux used its own custom version of KDE, called the BackSlash Shell, as the default user desktop.

Design

BackSlash Linux's design was a hybrid. It resembled macOS at first glance, with KDE underneath, but instead of being a KDE-based distribution in the usual sense, it shipped with many GNOME-based applications. Moreover, the buttons on the title bar resembled macOS but were arranged in a Windows-like manner. The top bar resembled GNOME at first glance, but once an app was opened, it displayed the universal menu bar, similar to macOS or Unity.

Development

Development of BackSlash Linux started in mid-2016. Being Ubuntu-based, it is compatible with Ubuntu's repositories and packages and uses Discover Software Center to handle installation and removal of software. Its user interface aims at being intuitive for new users without consuming too many resources. BackSlash Linux is based on Ubuntu's Long Term Support releases, which its developer actively maintains for bugs and security for years even as development continues on the next release.

Pre-release versions

Three pre-release versions, codenamed Alpha, Beta and Gamma, were released before the first stable release. These were still available to download until May 2017 but have since been taken off the website.

Anna

The first stable version of BackSlash Linux was BackSlash Linux Anna, published on 2 November 2016, based on Ubuntu 14.04 and running Linux kernel 4.2. It ran the Cinnamon desktop environment with some extra plugins and the Plank dock to provide a new look. The support and download option for this version are still available on BackSlash Linux's website. An article introducing BackSlash Linux was also published on Linux.com. BackSlash Linux Anna was a complete system in itself, providing end users with all types of software needed for daily use. Some notable applications included Font Viewer, PDF Reader, Simple Scan, the Deluge BitTorrent client, Dropbox, Google Chrome, Google Earth, Pidgin IM, Skype, WPS Office, VLC Media Player, Clementine, Screen Reader and the GDebi package manager. Wine was pre-installed in order to support Windows-based applications and games.

Elsa

The second release of BackSlash Linux, published on 20 December 2016, was called BackSlash Linux Elsa. Elsa was built atop the Ubuntu long-term support release Ubuntu 16.04 and ran Pantheon, the flagship desktop environment of elementary OS. Many applications used in BackSlash Linux Anna were dropped in this release to cut down the ISO size, but it also added many applications besides the core applications shipped with elementary OS and replaced the Epiphany browser with Google Chrome. Many additional utilities, like Skype, Wine, the Deluge BitTorrent client and Dictionary, were also shipped out of the box. The inclusion of GNOME Boxes for desktop virtualisation was another time-saver. BackSlash Linux Elsa marked the inclusion of many utilities like Backups, DVD Burner, TeamViewer and Synaptic Package Manager. Pidgin IM was replaced by Empathy IM, and it also included a firewall and the full LibreOffice suite.
The addition of Modem Manager, helpful for managing external modem cards, was also noticed. Albert was present as a search companion in BackSlash Elsa and could be activated using the Meta+Space keyboard shortcut. Options for tweaking the desktop by changing themes, colors and cursors were also present, which Pantheon does not offer by default. It also introduced an active desktop with right-click options in the Pantheon desktop.

Olaf

BackSlash Linux Olaf is the third major release of BackSlash Linux. It was published on 9 May 2017 and introduced over 150 new features compared with earlier versions of BackSlash Linux. It was based on Ubuntu 16.04 and ran the BackSlash Shell, a customised KDE desktop. Due to its heavily modified desktop, it gained more attention and praise than its earlier versions. This version of BackSlash Linux removed some applications to cut down the ISO size and also brought many refinements. Google Chrome was replaced by the open-source Chromium, and Wine was upgraded to version 2.0.1. Applications like Skype, Firewall, Modem Manager, DVD Burner, Dropbox, Empathy IM, GNOME Boxes, TeamViewer, Deluge and Dictionary, which were present in earlier versions of BackSlash Linux, were dropped in this version. The Albert search companion was replaced by KRunner search. LibreOffice lost its position, and WPS Office returned to the BackSlash Linux series with the release of Olaf. Thunderbird was also included in this version for a better email experience; the simple idea was to include the best apps available instead of KDE-specific apps, which is why GNOME apps like Calendar, Disks, and Maps appear in this release. BackSlash Linux Olaf also introduced a full-screen "AppLauncher", resembling the Launchpad of macOS, which was a fork of the discontinued application Slingshot. Reviewing BackSlash Linux Olaf, Souris from ProCambodian said: "For me I think this is the best build distro, it feels more complete and you don't need much time to config it, cause' everything is working out of the box."

Kristoff

BackSlash Linux Kristoff was released as a public beta on 13 August 2017, adding features like fingerprint and HWE kernel support to the Linux distribution world; it was followed by Deepin Linux, which also added fingerprint support in Deepin 15.5, released on 30 November 2017. The stable release of BackSlash Linux Kristoff was made on Christmas, December 25, 2017. It builds upon the previous release of BackSlash Linux, Olaf, fixing almost all of its bugs, and also introduces BackSlash Shell v2.0. Performance improvements were a major focus, and it also introduces a new app called MultiView and a system optimizer. Installer issues were fixed and the UI was redesigned. GTK+ support was greatly improved, and BackSlash Linux Kristoff also supports fingerprint recognition for the lock screen, terminal authentication and other app authentications. Multitouch gesture support was implemented, along with Wine. Redshift (Night Light) was also introduced, with better temperature controls. Additional components include a sidebar; better notification, audio and network management; and a Hardware-Enabled (HWE) kernel. It includes support for snaps, a new login screen with video background, and a cover-flow task switcher. It also includes a Desktop Cube animation for switching between multiple virtual desktops.
Applications included are Geary, Apache OpenOffice, GParted, the Dolphin file manager, Modem Manager, Synaptic Package Manager, VLC Media Player and others.

Discontinuation

On September 15, 2021, BackSlash Linux's developer, Kumar Priyansh, posted a message on the website announcing the discontinuation of the project due to his personal opinions about the Linux kernel. He also stated that he was working on a UNIX/BSD-based operating system and would release it in place of BackSlash Linux.

Security

Security of BackSlash was based on Ubuntu, and all security and software updates are provided by Canonical Ltd. By default, programs run with standard user privileges, but administrator privileges are given whenever required. For increased security, the sudo tool is used to assign temporary privileges for performing administrative tasks, which allows the root account to remain locked and helps prevent inexperienced users from inadvertently making catastrophic system changes or opening security holes. Most network ports are closed by default to prevent hacking. A built-in firewall allows end users who install network servers to control access, and a GUI is available to configure it. BackSlash compiles its packages using GNU Compiler Collection features such as position-independent code and buffer overflow protection to harden its software. BackSlash supports full disk encryption, as well as encryption of the home and Private directories.

Installation

BackSlash Linux's installation is simple and fully graphical. BackSlash Linux can be booted and run from a USB flash drive on any PC capable of booting from a USB drive, with the option of saving settings to the flash drive. An Ubuntu Live USB creator program is available to install an Ubuntu-based distribution on a USB drive. The Windows program UNetbootin can also write BackSlash Linux to a USB drive. Installation supports a Logical Volume Manager (LVM) with automatic partitioning only, and disk encryption. UTF-8, the default character encoding, supports a variety of non-Roman scripts.

System requirements

An x64 Intel or AMD processor
2 GB of random-access memory
20 GB of hard disk drive space (or USB flash drive, memory card or external drive)
Radeon, Intel HD and Iris Graphics, or Nvidia graphics processing unit, with 256 MB shared memory for visual effects
A Video Graphics Array (VGA) compatible screen
Either an optical disc drive or a USB port for the installer media
Internet access is helpful for downloading updates while installing

Flavours and Server Edition

BackSlash Linux also released two official flavours, MATE and GNOME, as well as a Server Edition of the operating system. Soon afterwards, development and support for these versions (flavours and Server Edition) were discontinued, and development focused on the mainstream release.

References

Linux
Ubuntu derivatives
Operating System (OS)
683
Software Software is a collection of instructions that tell a computer how to work. This is in contrast to hardware, from which the system is built and actually performs the work. At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example displaying some text on a computer screen, causing state changes which should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction, or is interrupted by the operating system. Today, most personal computers, smartphone devices and servers have processors with multiple execution units or multiple processors performing computation together, and computing has become a much more concurrent activity than in the past. The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler or an interpreter or a combination of the two. Software may also be written in a low-level assembly language, which has a strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler. History An algorithm for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. She created proofs to show how the engine would calculate Bernoulli numbers. Because of the proofs and the algorithm, she is considered the first computer programmer. The first theory about software, prior to the creation of computers as we know them today, was proposed by Alan Turing in his 1935 essay, On Computable Numbers, with an Application to the Entscheidungsproblem (decision problem). This eventually led to the creation of the academic fields of computer science and software engineering; both fields study software and its creation. Computer science is the theoretical study of computers and software (Turing's essay is an example of computer science), whereas software engineering is the application of engineering principles to the development of software. Prior to 1946, software was not yet the programs stored in the memory of stored-program digital computers, as we now understand it; the first electronic computing devices were instead rewired in order to "reprogram" them. In 2000, Fred Shapiro, a librarian at the Yale Law School, published a letter revealing that John Wilder Tukey's 1958 paper "The Teaching of Concrete Mathematics" contained the earliest known usage of the term "software" found in a search of JSTOR's electronic archives, predating the OED's citation by two years. This led many to credit Tukey with coining the term, particularly in obituaries published that same year, although Tukey never claimed credit for any such coinage. 
In 1995, Paul Niquette claimed he had originally coined the term in October 1953, although he could not find any documents supporting his claim. The earliest known publication of the term "software" in an engineering context was in August 1953 by Richard R. Carhart, in a Rand Corporation Research Memorandum. Types On virtually all computer platforms, software can be grouped into a few broad categories. Purpose, or domain of use Based on the goal, computer software can be divided into: Application software uses the computer system to perform special functions beyond the basic operation of the computer itself. There are many different types of application software because the range of tasks that can be performed with a modern computer is so large—see list of software. System software manages hardware behaviour, so as to provide basic functionalities that are required by users, or for other software to run properly, if at all. System software is also designed for providing a platform for running application software, and it includes the following: Operating systems are essential collections of software that manage resources and provide common services for other software that runs "on top" of them. Supervisory programs, boot loaders, shells and window systems are core parts of operating systems. In practice, an operating system comes bundled with additional software (including application software) so that a user can potentially do some work with a computer that only has one operating system. Device drivers operate or control a particular type of device that is attached to a computer. Each device needs at least one corresponding device driver; because a computer typically has at least one input device and at least one output device, it typically needs more than one device driver. Utilities are computer programs designed to assist users in the maintenance and care of their computers. Malicious software, or malware, is software that is developed to harm or disrupt computers. Malware is closely associated with computer-related crimes, though some malicious programs may have been designed as practical jokes. Nature or domain of execution Desktop applications such as web browsers and Microsoft Office, as well as smartphone and tablet applications (called "apps"). JavaScript scripts are pieces of software traditionally embedded in web pages that are run directly inside the web browser when a web page is loaded without the need for a web browser plugin. Software written in other programming languages can also be run within the web browser if the software is either translated into JavaScript, or if a web browser plugin that supports that language is installed; the most common example of the latter is ActionScript scripts, which are supported by the Adobe Flash plugin. Server software, including: Web applications, which usually run on the web server and output dynamically generated web pages to web browsers, using e.g. PHP, Java, ASP.NET, or even JavaScript that runs on the server. In modern times these commonly include some JavaScript to be run in the web browser as well, in which case they typically run partly on the server, partly in the web browser. Plugins and extensions are software that extends or modifies the functionality of another piece of software, and require that software be used in order to function. 
Embedded software resides as firmware within embedded systems, devices dedicated to a single use or a few uses such as cars and televisions (although some embedded devices such as wireless chipsets can themselves be part of an ordinary, non-embedded computer system such as a PC or smartphone). In the embedded system context there is sometimes no clear distinction between the system software and the application software. However, some embedded systems run embedded operating systems, and these systems do retain the distinction between system software and application software (although typically there will only be one, fixed application which is always run). Microcode is a special, relatively obscure type of embedded software which tells the processor itself how to execute machine code, so it actually operates at a lower level than machine code. It is typically proprietary to the processor manufacturer, and any necessary correctional microcode software updates are supplied by them to users (which is much cheaper than shipping replacement processor hardware). Thus an ordinary programmer would not expect to ever have to deal with it. Programming tools Programming tools are also software in the form of programs or applications that developers use to create, debug, maintain, or otherwise support software. Software is written in one or more programming languages; there are many programming languages in existence, and each has at least one implementation, each of which consists of its own set of programming tools. These tools may be relatively self-contained programs such as compilers, debuggers, interpreters, linkers, and text editors, that can be combined to accomplish a task; or they may form an integrated development environment (IDE), which combines much or all of the functionality of such self-contained tools. IDEs may do this by either invoking the relevant individual tools or by re-implementing their functionality in a new way. An IDE can make it easier to do specific tasks, such as searching in files in a particular project. Many programming language implementations provide the option of using either individual tools or an IDE. Topics Architecture People who use modern general purpose computers (as opposed to embedded systems, analog computers and supercomputers) usually see three layers of software performing a variety of tasks: platform, application, and user software. Platform software The platform includes the firmware, device drivers, an operating system, and typically a graphical user interface which, in total, allow a user to interact with the computer and its peripherals (associated equipment). Platform software often comes bundled with the computer. On a PC one will usually have the ability to change the platform software. Application software Application software is what most people think of when they think of software. Typical examples include office suites and video games. Application software is often purchased separately from computer hardware. Sometimes applications are bundled with the computer, but that does not change the fact that they run as independent applications. Applications are usually independent programs from the operating system, though they are often tailored for specific platforms. Most users think of compilers, databases, and other "system software" as applications. User-written software End-user development tailors systems to meet users' specific needs. User software includes spreadsheet templates and word processor templates. 
Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. Depending on how competently the user-written software has been integrated into default application packages, many users may not be aware of the distinction between the original packages and what has been added by co-workers. Execution Computer software has to be "loaded" into the computer's storage (such as the hard drive or memory). Once the software has loaded, the computer is able to execute the software. This involves passing instructions from the application software, through the system software, to the hardware which ultimately receives the instruction as machine code. Each instruction causes the computer to carry out an operation—moving data, carrying out a computation, or altering the control flow of instructions. Data movement is typically from one place in memory to another. Sometimes it involves moving data between memory and registers which enable high-speed data access in the CPU. Moving data, especially large amounts of it, can be costly; this is sometimes avoided by using "pointers" to data instead. Computations include simple operations such as incrementing the value of a variable data element. More complex computations may involve many operations and data elements together. Quality and reliability Software quality is very important, especially for commercial and system software. If software is faulty, it can delete a person's work, crash the computer and do other unexpected things. Faults and errors are called "bugs" which are often discovered during alpha and beta testing. Software is often also a victim of what is known as software aging, the progressive performance degradation resulting from a combination of unseen bugs. Many bugs are discovered and fixed through software testing. However, software testing rarely—if ever—eliminates every bug; some programmers say that "every program has at least one more bug" (Lubarsky's Law). In the waterfall method of software development, separate testing teams are typically employed, but in newer approaches, collectively termed agile software development, developers often do all their own testing, and demonstrate the software to users/clients regularly to obtain feedback. Software can be tested through unit testing, regression testing and other methods, which are done manually, or most commonly, automatically, since the amount of code to be tested can be large. Programs containing command software enable hardware engineering and system operations to function together much more easily. License The software's license gives the user the right to use the software in the licensed environment, and in the case of free software licenses, also grants other rights such as the right to make copies. Proprietary software can be divided into two types: freeware, which includes the category of "free trial" software or "freemium" software (in the past, the term shareware was often used for free trial/freemium software); as the name suggests, freeware can be used for free, although in the case of free trials or freemium software, this is sometimes only true for a limited period of time or with limited functionality; and software available for a fee, which can only be legally used on purchase of a license. Open-source software comes with a free software license, granting the recipient the rights to modify and redistribute the software. 
Patents Software patents, like other types of patents, are theoretically supposed to give an inventor an exclusive, time-limited license for a detailed idea (e.g. an algorithm) on how to implement a piece of software, or a component of a piece of software. Ideas for useful things that software could do, and user requirements, are not supposed to be patentable, and concrete implementations (i.e. the actual software packages implementing the patent) are not supposed to be patentable either—the latter are already covered by copyright, generally automatically. So software patents are supposed to cover the middle area, between requirements and concrete implementation. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid—although since all useful software has effects on the physical world, this requirement may be open to debate. Meanwhile, American copyright law was applied to various aspects of the writing of the software code. Software patents are controversial in the software industry, with many people holding different views about them. One of the sources of controversy is that the aforementioned split between initial ideas and patent does not seem to be honored in practice by patent lawyers—for example the patent for aspect-oriented programming (AOP), which purported to claim rights over any programming tool implementing the idea of AOP, howsoever implemented. Another source of controversy is the effect on innovation, with many distinguished experts and companies arguing that software is such a fast-moving field that software patents merely create vast additional litigation costs and risks, and actually retard innovation. In the case of debates about software patents outside the United States, the argument has been made that large American corporations and patent lawyers are likely to be the primary beneficiaries of allowing or continuing to allow software patents. Design and implementation Design and implementation of software varies depending on the complexity of the software. For instance, the design and creation of Microsoft Word took much more time than designing and developing Microsoft Notepad because the latter has much more basic functionality. Software is usually developed in integrated development environments (IDE) like Eclipse, IntelliJ and Microsoft Visual Studio that can simplify the process and compile the software. As noted in a different section, software is usually created on top of existing software and the application programming interface (API) that the underlying software provides, such as GTK+, JavaBeans or Swing. Libraries (APIs) can be categorized by their purpose. For instance, the Spring Framework is used for implementing enterprise applications, the Windows Forms library is used for designing graphical user interface (GUI) applications like Microsoft Word, and Windows Communication Foundation is used for designing web services. When a program is designed, it relies upon the API. For instance, a Microsoft Windows desktop application might call API functions in the .NET Windows Forms library like Form1.Close() and Form1.Show() to close or open the application. Without these APIs, the programmer needs to write these functionalities entirely themselves. Companies like Oracle and Microsoft provide their own APIs so that many applications are written using their software libraries that usually have numerous APIs in them. 
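To make the idea of programming against a library API concrete, the following minimal C sketch delegates sorting to qsort() from the C standard library rather than implementing the algorithm itself; the data and the comparison function are illustrative assumptions, and the point is only that the program relies on the library's documented interface, just as a Windows desktop application relies on the Windows Forms API.

/* Minimal illustration of programming against a library API:
 * the program delegates sorting to qsort() from the C standard
 * library instead of implementing the algorithm itself. */
#include <stdio.h>
#include <stdlib.h>

/* Comparison callback required by the qsort() interface. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);   /* avoids overflow of x - y */
}

int main(void)
{
    int values[] = { 42, 7, 19, 3, 88 };
    size_t n = sizeof values / sizeof values[0];

    qsort(values, n, sizeof values[0], cmp_int);  /* the library does the work */

    for (size_t i = 0; i < n; i++)
        printf("%d ", values[i]);
    putchar('\n');
    return 0;
}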
Data structures such as hash tables, arrays, and binary trees, and algorithms such as quicksort, can be useful for creating software. Computer software has special economic characteristics that make its design, creation, and distribution different from most other economic goods. A person who creates software is called a programmer, software engineer or software developer, terms that all have a similar meaning. More informal terms for programmer also exist, such as "coder" and "hacker", although use of the latter word may cause confusion, because it is more often used to mean someone who illegally breaks into computer systems. See also Computer program Independent software vendor Open-source software Outline of software Software asset management Software release life cycle References Sources External links Mathematical and quantitative methods (economics)
Operating System (OS)
684
Outage management system An outage management system (OMS) is a computer system used by operators of electric distribution systems to assist in restoration of power. Major functions of an OMS Major functions usually found in an OMS include: Prediction of the location of the transformer, fuse, recloser, or breaker that opened upon failure. Prioritizing restoration efforts and managing resources based upon criteria such as locations of emergency facilities, size of outages, and duration of outages. Providing information on extent of outages and number of customers impacted to management, media and regulators. Calculation of estimated restoration times. Management of crews assisting in restoration. Calculation of crews required for restoration. OMS principles and integration requirements At the core of a modern outage management system is a detailed network model of the distribution system. The utility's geographic information system (GIS) is usually the source of this network model. By combining the locations of outage calls from customers, a rules engine is used to predict the locations of outages. For instance, since the distribution system is primarily tree-like or radial in design, all calls in a particular area downstream of a fuse could be inferred to be caused by a single fuse or circuit breaker upstream of the calls. The outage calls are usually taken by call takers in a call center utilizing a customer information system (CIS). Another common way for outage calls to enter into the CIS (and thus the OMS) is by integration with an interactive voice response (IVR) system. The CIS is also the source for all the customer records which are linked to the network model. Customers are typically linked to the transformer serving their residence or business. It is important that every customer be linked to a device in the model so that accurate statistics are derived on each outage. Customers not linked to a device in the model are referred to as "fuzzies". More advanced automatic meter reading (AMR) systems can provide outage detection and restoration capability and thus serve as virtual calls indicating customers who are without power. However, unique characteristics of AMR systems, such as the additional system loading and the potential for false positives, require that additional rules and filter logic be added to the OMS to support this integration. Outage management systems are also commonly integrated with SCADA systems which can automatically report the operation of monitored circuit breakers and other intelligent devices such as SCADA reclosers. Another system that is commonly integrated with an outage management system is a mobile data system. This integration provides the ability for outage predictions to automatically be sent to crews in the field and for the crews to be able to update the OMS with information such as estimated restoration times without requiring radio communication with the control center. Crews also transmit details about what they did during outage restoration. It is important that the outage management system electrical model be kept current so that it can accurately make outage predictions and also accurately keep track of which customers are out and which are restored. By using this model and by tracking which switches, breakers and fuses are open and which are closed, network tracing functions can be used to identify every customer who is out, when they were first out and when they were restored. 
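The per-customer interruption records produced by this kind of network tracing are also the raw material for the reliability indices discussed in the sections that follow. As a minimal sketch, assuming a simplified event record (a real OMS schema would be far richer, and the figures here are invented), the following C program computes the IEEE 1366 indices SAIFI, SAIDI, and CAIDI from a hypothetical list of outage events:

/* Minimal sketch: computing IEEE 1366 reliability indices from
 * per-event outage records.  The record layout and the numbers
 * are illustrative assumptions, not a real OMS schema. */
#include <stdio.h>

struct outage_event {
    unsigned customers;     /* customers interrupted by this event */
    double   duration_min;  /* interruption duration in minutes */
};

int main(void)
{
    struct outage_event events[] = { {1200, 90.0}, {300, 45.0}, {50, 240.0} };
    const double total_customers = 10000.0;   /* customers served by the utility */
    double interruptions = 0.0;               /* sum of customers interrupted */
    double customer_minutes = 0.0;            /* sum of customer-minutes lost */

    for (size_t i = 0; i < sizeof events / sizeof events[0]; i++) {
        interruptions    += events[i].customers;
        customer_minutes += events[i].customers * events[i].duration_min;
    }

    double saifi = interruptions / total_customers;    /* interruptions per customer served */
    double saidi = customer_minutes / total_customers; /* outage minutes per customer served */
    double caidi = saidi / saifi;                      /* average minutes per interruption */

    printf("SAIFI = %.3f  SAIDI = %.1f min  CAIDI = %.1f min\n", saifi, saidi, caidi);
    return 0;
}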
Tracking this information is the key to accurately reporting outage statistics. (P.-C. Chen, et al., 2014) OMS benefits OMS benefits include: Reduced outage durations due to faster restoration based upon outage location predictions. Reduced average outage durations due to prioritization of restoration efforts. Improved customer satisfaction due to increased awareness of outage restoration progress and providing estimated restoration times. Improved media relations by providing accurate outage and restoration information. Fewer complaints to regulators due to ability to prioritize restoration of emergency facilities and other critical customers. Reduced outage frequency due to use of outage statistics for making targeted reliability improvements. OMS based distribution reliability improvements An OMS supports distribution system planning activities related to improving reliability by providing important outage statistics. In this role, an OMS provides the data needed for the calculation of measurements of the system reliability. Reliability is commonly measured by performance indices defined by the IEEE P1366-2003 standard. The most frequently used performance indices are SAIDI, CAIDI, SAIFI and MAIFI. An OMS also supports the improvement of distribution reliability by providing historical data that can be mined to find common causes, failures and damages. By understanding the most common modes of failure, improvement programs can be prioritized toward those that provide the largest improvement in reliability for the lowest cost. While deploying an OMS improves the accuracy of the measured reliability indices, it often results in an apparent degradation of reliability, because the manual methods it replaces almost always underestimate the frequency, size, and duration of outages. Comparing reliability in the years before an OMS deployment with the years after requires adjusting the pre-deployment measurements for the comparison to be meaningful. References Sastry, M.K.S. (2007), "Integrated Outage Management System: an effective solution for power utilities to address customer grievances", International Journal of Electronic Customer Relationship Management, vol. 1, no. 1, pages 30-40 Burke, J. (2000), "Using outage data to improve reliability", Computer Applications in Power, IEEE volume 13, issue 2, April 2000, pages 57 - 60 Frost, Keith (2007), "Utilizing Real-Time Outage Data for External and Internal Reporting", Power Engineering Society General Meeting, 2007. IEEE 24–28 June 2007, pages 1 – 2 Hall, D.F. (2001), "Outage management systems as integrated elements of the distribution enterprise", Transmission and Distribution Conference and Exposition, 2001 IEEE/PES volume 2, 28 October - 2 November 2001, pages 1175 - 1177 Kearney, S. (1998), "How outage management systems can improve customer service", Transmission & Distribution Construction, Operation & Live-Line Maintenance Proceedings, 1998. ESMO '98. 1998 IEEE 8th International Conference on 26–30 April 1998, pages 172 – 178 Nielsen, T.D. (2002), "Improving outage restoration efforts using rule-based prediction and advanced analysis", IEEE Power Engineering Society Winter Meeting, 2002, volume 2, 27–31 January 2002, pages 866 - 869 Nielsen, T. D. (2007), "Outage Management Systems Real-Time Dashboard Assessment Study", Power Engineering Society General Meeting, 2007. IEEE, 24–28 June 2007, pages 1 – 3 Robinson, R.L.; Hall, D.F.; Warren, C.A.; Werner, V.G. 
(2006), "Collecting and categorizing information related to electric power distribution interruption events: customer interruption data collection within the electric power distribution industry", Power Engineering Society General Meeting, 2006. IEEE 18–22 June 2006, page 5. P.-C. Chen, T. Dokic, and M. Kezunovic, "The Use of Big Data for Outage Management in Distribution Systems," International Conference on Electricity Distribution (CIRED) Workshop, 2014. Electric power
Operating System (OS)
685
QIO QIO (Queue I/O) is a term used in several computer operating systems designed by the former Digital Equipment Corporation (DEC) of Maynard, Massachusetts. I/O operations on these systems are initiated by issuing a QIO call to the kernel. There are two types of QIO - Queue I/O, and Queue I/O and Wait. For QIO without wait, the call returns immediately. If the request is successfully enqueued, the actual operation occurs asynchronously. On completion, status is returned in the QIO status doubleword. The QIO request may also specify that completion set an event flag or issue an Asynchronous System Trap (AST). The call may also be issued as QIOW (Queue I/O and Wait for completion), allowing synchronous I/O. In this case, the wait-for-event-flag operation is combined so the call does not return until the I/O operation completes or fails. The following operating systems implemented QIO(W): RSX-15 RSX-11 (including all of the variants) RSTS/E (synchronous only, emulated by the RSX run-time system) OpenVMS QIO arguments in VMS Under VMS, the arguments to the QIO call are: The event flag to set when the operation completes. It is not possible to omit the event flag; flag 0 is valid. It is perfectly permissible to have multiple simultaneous operations that set the same event flag on completion. It is then up to the application to sort out any confusion this might cause, or just ignore that event flag. The channel, a small integer previously associated with the device. At this level, all operations on disk files and directories (filename parsing, directory lookup, file opening/closing) are done by appropriate QIO requests. The function code to be performed. Six bits are assigned to the basic code (such as read, write), with a further 10 bits for "modifiers" whose meaning depends on the basic code. The optional I/O status block (IOSB), which is cleared by the QIO call, and filled in on completion of the I/O operation. The first two bytes hold the completion status (success, end of file reached, timeout, I/O error, etc.), while the next two bytes normally return the number of bytes read or written in the operation. The meaning, if any, of the last four bytes is operation-dependent. The optional AST routine to invoke when the operation completes. An additional parameter (whose meaning is up to the caller) to be passed to the AST routine. A partially standardized list of up to six parameters known as P1 through P6. The first two parameters typically specify the I/O buffer starting address (P1), and the I/O byte count (P2). The remaining parameters vary with the operation, and the particular device. For example, for a computer terminal, P3 might be the time to allow for the read to complete whereas, for a disk drive, it might be the starting block number of the transfer. QIO completion There are three different ways to sense when the queued I/O operation has completed: When the event flag becomes set. When the first two bytes of the IOSB become nonzero. When the AST routine executes. 
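Tying the argument list and completion rules above together, here is a minimal C sketch of a synchronous QIOW on OpenVMS; it uses the terminal driver's IO$_READPROMPT function (described in more detail below) with the IO$M_TIMED modifier. The I/O status block layout is written out by hand here as a simplifying assumption, error handling is abbreviated, and the device name, buffer size, and timeout are arbitrary illustrative choices; this is a sketch, not a definitive implementation.

/* Minimal sketch of a synchronous terminal read with SYS$QIOW on
 * OpenVMS.  Compiles only with a VMS C compiler; details such as
 * the IOSB layout are simplified for illustration. */
#include <starlet.h>   /* sys$assign, sys$qiow */
#include <descrip.h>   /* $DESCRIPTOR */
#include <iodef.h>     /* IO$_READPROMPT, IO$M_TIMED */
#include <stdio.h>

struct iosb {                    /* I/O status block */
    unsigned short status;       /* completion status */
    unsigned short count;        /* bytes transferred */
    unsigned int   dev_info;     /* device-dependent */
};

int main(void)
{
    $DESCRIPTOR(term, "SYS$INPUT");   /* terminal device name */
    unsigned short chan;
    struct iosb iosb;
    char buf[80];
    static char prompt[] = "Name: ";
    int status;

    status = sys$assign(&term, &chan, 0, 0);   /* associate a channel */
    if (!(status & 1)) return status;          /* low bit clear = failure */

    /* P1 = buffer, P2 = buffer length, P3 = timeout in seconds,
       P4 = terminator mask (0 selects the default line terminators),
       P5 = prompt address, P6 = prompt length. */
    status = sys$qiow(0,                              /* event flag 0 */
                      chan,
                      IO$_READPROMPT | IO$M_TIMED,    /* function + modifier */
                      &iosb,
                      0, 0,                           /* no AST routine or parameter */
                      buf, sizeof buf,
                      10,                             /* wait up to 10 seconds */
                      0,
                      prompt, sizeof prompt - 1);

    if ((status & 1) && (iosb.status & 1))            /* check service and IOSB status */
        printf("Read %u byte(s)\n", (unsigned)iosb.count);
    return status;
}

Note how the program senses completion exactly as listed above: because QIOW combines the wait, it simply checks the service return status and the first word of the IOSB after the call returns.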
Unusual QIOs that require complex processing Simple QIOs, such as read or write requests, are serviced either by the kernel itself or by device drivers. Certain more complicated requests, specifically those involving tape drives and file-level operations, were originally executed by an Ancillary Control Processor (ACP) (a special-purpose task with its own address mapping). The Files-11 ODS-1 file system on RSX-11 was implemented by a subroutine library that communicated with a task named F11ACP using a special set of QIOs called the "ACP QIOs." The equivalent functionality for controlling magnetic tape devices was provided by a task named MTAACP. Originally, the Files-11 ODS-2 file system was provided by F11BACP on VMS, but the functionality of F11BACP was later incorporated into the VMS kernel to save the overhead of process context switches, and is now called an XQP (eXtended Qio Processor). IO$_READPROMPT Probably the most complex single QIO request possible is the VMS terminal driver's IO$_READPROMPT call with the IO$M_TIMED modifier; this QIO requires all six additional parameters: P1 is the address of the buffer into which the input characters are received P2 is the length of the buffer, limiting the maximum number of characters to read. If the buffer is filled, the read will complete successfully, even if the user does not type a line-terminator character. Zero is allowed, in which case the read will terminate successfully with zero characters read. P3 is the maximum number of seconds to wait for more input. This is only used if the IO$M_TIMED modifier is present, and a value of zero means zero seconds: the read will terminate immediately, so the only possible input will be whatever had been "typed ahead" by the user. P4 is the address of the optional "terminator mask", specifying which ASCII characters terminate the read. If omitted, this defaults to the usual VMS line delimiters including carriage-return (but not line-feed). It is possible to specify a mask with no line terminators, in which case the read will only complete when the buffer is full, or the timeout has elapsed. P5 is the address of a prompt string to be displayed to the user before accepting input. The advantage of providing the prompt this way, instead of as a prior write operation, is automatic redisplay in any situation requiring a refresh of the input line while the read is in progress (such as after an operator message has been broadcast to the terminal, or the user hits CTRL/R to redisplay the line). P6 is the length of the prompt string. By appropriate choices of the above parameters, it is possible to do both terminal input and output with one call; there is no need to use the regular IO$_WRITEVBLK call for terminal output at all. References OpenVMS
Operating System (OS)
686
Amiga support and maintenance software Amiga support and maintenance software performs service functions such as formatting media for a specific filesystem, diagnosing failures that occur on formatted media, data recovery after media failure, and installation of new software for the Amiga family of personal computers—as opposed to application software, which performs business, education, and recreation functions. The Amiga came with some embedded utility programs, but many more were added over time, often by third-party developers and companies. Original utilities Commodore included utility programs with the operating system. Many of these were original features, which were adopted into other systems: Installer is a tool for the installation of Amiga software. It features a LISP-like language to handle installations. The Amiga Installer does not support dependencies or track where the installed files are delivered; it simply copies them. AmigaGuide is a hypertext markup scheme and a browser for writing and reading web page-like documents. AmigaGuide files are text files in a simple markup language, which facilitates editing and localization in any ASCII text editor. Commodore developed the AmigaGuide format before the World Wide Web was widely known. Consumers who bought Amiga computers in a store did not receive documentation on how to write AmigaGuide documents. Utilities borrowed from other systems Update tools: Updater is a utility to keep system and third-party files up to date. AmiUpdate was developed by Simon Archer to keep installed third-party programs up to date. Grunch is a software center for AmigaOS and MorphOS. MorphUP allows MorphOS users to install and update new third-party software. None of these update systems was widely used by the Amiga community. Commodities and utilities Amiga places system utilities in two standard directories: The Utilities directory contains programs like IconEdit. The Commodities directory (volume SYS:Tools/Commodities/ or SYS:Utilities/Commodities under AmigaOS4) contains executable applet-like utilities which enhance system usability, such as ScreenBlanker, the default screen saver shipped with AmigaOS. Commodities are usually loaded at system startup. Many require no interaction and do not feature any GUI interface. A system utility called Exchange allows the user to disable, enable, hide, show, and quit Commodities. Hard disk partitioning AmigaOS features a standard centralized utility to partition and format hard disks, called HDToolBox. MorphOS uses an updated version of the SCSIConfig utility (since MorphOS version 2, HDConfig) implemented by third-party vendor Phase5. In spite of the name, "SCSIConfig" possessed a unique feature at the time, which was providing a consistent mechanism to manage all types of disk interfaces, including IDE, irrespective of which interface the disk(s) in question used. Diagnostic tools AmigaOS diagnostic tools are usually programs which display the current state of Exec and AmigaDOS activities. Active process explorers: Scout, Ranger. System calls and messages: SnoopDOS, Snoopium. Memory management: CyberGuard, Enforcer, MemMungWall, TLSFMem by Chris Hodges. Virtual memory: GigaMem, VMM. Benchmark utilities: AmiBench, AIBB. Degrading tools: Degrader (which "degrades" modern Amiga systems to the performance and hardware equivalents of legacy Amiga models). Promoting tools Promoter and ForceMonitor are utilities that allow the user to control the resolution of Intuition screens for Amiga programs. 
Game loaders WHDLoad is a utility to install legacy Amiga games on a hard disk and load them from the Workbench desktop instead of the floppies on which they were often delivered. jst is an older utility which the developer abandoned in order to concentrate efforts on WHDLoad. Old jst loaders can be read with WHDLoad, and jst itself has some early level of WHDLoad compatibility. Command line interfaces and text-based shells The original Amiga CLI (Command Line Interface) had some basic editing capabilities, command templates, and other features such as ANSI compatibility and color selection. In AmigaOS 1.3, the program evolved into a complete text-based shell called AmigaShell, with command history and enhanced editing capabilities. Third-party developers created improved shells because the console-handler standard command line device driver (or "handler" in Amiga technical language) is independent of the command-line interpreter. The console-handler controls the text-based interface on the Amiga. Console-handler replacements include KingCON, ViNCEd, and Conman. Some well-known shells from other platforms were ported to Amiga. These included bash (Bourne Again SHell), CSH (C Shell), and ZSH (Z shell). The shells taken from Unix and Linux were adapted to the Amiga and extended with its particular capabilities and functions. The MorphOS Shell is an example of Z shell mixed with the KingCON console handler. It originated as a Unix-like shell and is provided with all the features expected from such a component: AmigaDOS commands (more than 100 commands, most of which are Unix-like), local and global variables, command substitution, command redirection, named and unnamed pipes, history, programmable menus, multiple shells in a window, ANSI compatibility, color selection, and so on. It also includes all the necessary commands for scripting. Amiga WIMP GUI interfaces Starting from the original Amiga WIMP standard desktop, Workbench, Amiga interfaces were enhanced by third-party developers. Amiga users are free to replace the original Workbench interface with Scalos or Directory Opus. The standard GUI toolkit, called Intuition, was enhanced in OS2.x with the introduction of GadTools, and third parties created their own toolkits such as Magic User Interface (MUI) (the standard on MorphOS systems) and ClassAct, which evolved into ReAction GUI (the standard GUI on AmigaOS 4.0). Amiga Advanced Graphics Systems Many users have added advanced graphics drivers to their Amiga. This lets the AmigaOS handle high-resolution graphics, enhanced with millions of colors. Standard GUI interfaces with this capability are CyberGraphX, EGS, and Picasso96. Graphical engines Graphical libraries available on the Amiga include: Warp3D, a 3D graphic engine for Amiga TinyGL (MorphOS) and MiniGL (AmigaOS), implementations of subsets of the OpenGL graphics engine X11, also available through the Amiga versions of Cygnix Cairo Vector Library, available on AmigaOS 4 and MorphOS SSA (Super Smooth Animation), a proprietary system for playback at 50 Hz or 60 Hz. proDAD Adorage was the first product to use this. GTK. On Amiga it is being developed as a GTK_MUI wrapper, to map any existing graphical features of GTK to the standard Magic User Interface (MUI) graphic user interface system. 
All Amiga systems can also support SDL (Simple DirectMedia Layer), a cross-platform, free multimedia library written in C that creates an abstraction over various platforms' graphics, sound, and input APIs, allowing a developer to write a computer game or other multimedia application once and run it on many operating systems. PostScript Amiga supports PostScript through Ghostscript and SaxonScript (included with Saxon Publisher). Ghostview is the most widely used graphical front-end for Ghostscript on the Amiga. Since AmigaOS 2.1, in the Prefs (Preferences) system directory, there is a printer preferences program called PrinterPS, which allows the use of PostScript printers on the Amiga. TrueType fonts, color and anim fonts Original Amiga outline fonts (also called vector fonts) were Agfa Compugraphic fonts available since AmigaOS 2.0 with the standard utility Fountain (later called IntelliFont) from Commodore. Third-party developers added support for TrueType fonts using various libraries, such as TrueType Library I and II, and the LibFreeType library. The standard diskfont.library also supported bitmap multicolour fonts (ColorFonts), such as the commercial Kara Fonts, or even animated fonts, also originally created by Kara Computer Graphics. Font designer software Commodore provided a bitmap font editor called FED. Personal Fonts Maker was the most widely used Amiga software to create bitmap fonts, while TypeSmith v.2.5b was the de facto standard utility to create outline fonts. File management Backup and recovery In the first Amiga OS releases, Commodore included a standard floppy disk recovery utility called DiskDoctor. Its purpose was to recover files from mangled floppy disks. Unfortunately, this utility worked only with AmigaDOS standard disks. A major fault was that it did not save the recovered data to a different disk; rather, it performed its operations directly on the original. It wrote on original disks and destroyed non-AmigaDOS disks (mainly autobooting games) by overwriting their bootblock. DiskDoctor renamed recovered disks to "Lazarus" (after the resurrected man in the New Testament). These features were undocumented and led to an Amiga urban legend that there was a computer virus nicknamed the Lazarus Virus, whose purpose was to make disks unreadable and rename them "Lazarus". Third-party developers released data recovery programs such as DiskSalv, which was more often used to validate Amiga filesystems on hard disk partitions. Other Amiga disk repair and backup tools included: Floppy only: Disk Mechanic, Disk Repair, Dr. Ami. Floppy and hard drive: Ami-Back Tools, Ami-Filesafe Pro, Quarterback Tools, Amiga Tools DeLuxe, Diavolo Backup. Smart File System (SFS): SFS Recover Tool, SFSDoctor, SFSCheck 2, SFSResize 1.0. Disk copiers During the 8-bit and 16/32-bit era, copying software was not considered illegal in many countries, and piracy was not perceived as being a crime by the users of home computers (usually young people). Commodore 64 and Sinclair ZX Spectrum software was copied using cassette decks, while IBM PC, Atari, and Amiga software was copied using special programs called disk copiers, which were engineered to copy any floppy disk surface byte by byte, often using special, efficient, and advanced programming techniques and "disk track driving" to maintain floppy disk read/write head alignment. 
In the early days of the Amiga platform, about 16 disk copiers were created in a short amount of time (1985–1989) that enabled copying Amiga floppy disks, including Nibbler, QuickNibble, ZCopier, XCopy/Cachet, FastCopier, Disk Avenger, Tetra Copy (which enabled the user to play Tetris while copying disks), Cyclone, Maverick, D-Copy, Safe II, PowerCopier, Quick Copier, Marauder II (styled as "Marauder //"), Rattle Copy, and BurstNibble. Many remained legal in many countries until years later. These programs (for example, Marauder, X-Copy, and Nibbler) were then sold in packages complete with instructions, warranty, and EULA like other productivity software. Some floppy drives included LED track indicators to show whether disks had been hacked by the original programmers to use up to track 82 of the disk. There were also copying solutions that included both hardware and software, like Super Card Ami II and Syncro Express I/II/III. DFC5 could only copy standard AmigaOS formatted disks for backup purposes; however, it multitasked inside the Amiga Workbench GUI. X-COPY III, and later the final version, X-COPY Pro, were the most popular Amiga copy programs. They were capable of bit-by-bit copying, also called "nibbling". Although incapable of true multitasking, the programs were capable of taking advantage of Amiga configurations with multiple floppy drives; for instance, on Amiga systems with four floppy drives, X-COPY was capable of simultaneously copying from a source drive to three others. Coupled with excellent bit-by-bit replication capabilities, these features made X-COPY the de facto standard for copying floppy disks on the Amiga. Another popular copying program was D-COPY, by a Swedish group "D-Mob", which, in spite of some innovative features and better/faster copying routines, failed to gain dominance. Archives and compression utilities The most popular archivers were LhA and LZX. Programs to archive ZIP, Gzip, Bzip2, and RAR files were available but seldom used, and many have an Amiga counterpart, such as 7-Zip. Utilities were available for reading and writing archive formats such as ARC, ARJ (unarchive only), the CAB files common in Windows installations, StuffIt SIT archives from Macintosh, Uuencode (used for encoding binary attachments of e-mail messages), TAR (common on UNIX and Linux), RPM (from Red Hat), and more. Amiga supported "packed" or "crunched" (meaning compressed) executables, which were common in the age of floppy disks, when disk space and memory conservation was critical. These executable binary files had a decompress routine attached to them that would automatically unpack or decrunch (decompress) the executable upon loading into memory. The Amiga also included "level depacking", implemented by "Titanics Cruncher", which enabled a binary executable to be decrunched as it was being loaded, requiring a very small amount of memory to do so. In general, packing and crunching was taken from the Commodore 64 cracking scene. Some crunchers, such as Time Cruncher, were "ported" from the Commodore 64, displaying the same visual effects during decrunching. The CPU in the Amiga was completely different from the one in the Commodore 64, requiring a complete rewrite. Noteworthy were TurboImploder and PowerPacker, as they were easy to use, with graphical interfaces. Other popular crunchers were DefjamPacker, TetraPack, DoubleAction, Relokit, StoneCracker, Titanics and CrunchMania. 
The ability to compress and decompress single files and directories on the fly has been present on the AmigaOS since at least 1994. A similar feature was implemented relatively recently as a property in the ZFS filesystem. The AmigaOS packers and cruncher libraries are centralized by using the XPK system. The XPK system consists of a master library and several (de)packer sublibraries. Programs use only the master library directly, while sublibraries (akin to plug-ins) implement the actual (de)compression. When unpacking/decrunching, the applications do not need to know which library was used to pack or crunch the data. XPK is a wrapper for crunchers; to decrunch non-XPK packed formats requires XFD. Another important invention on the Amiga platform was the ADF format for creating images of Amiga floppy disks, either standard AmigaDOS floppies or non-DOS ("NDOS") ones, for use in Amiga emulators, such as WinUAE. Amiga emulators and AmigaOS (with third-party software) can use these files as if they were virtual floppy disks. Unlimited virtual floppies could be created on modern Amigas, although WinUAE on a real PC can handle only four at a time, the maximum number of floppy drives that the Amiga hardware could have connected at any one time. All the popular Amiga compression implementations and archive files are now centralized and implemented by a single system library called XAD, which has a front-end GUI named Voodoo-X. It is included in AmigaOS 3.9 and up with UnArc. This library is modular and can handle more than 80 compression formats. Filesystems Amiga can use various filesystems. The historical standard is the original Amiga filesystem, called the Old File System; it was good for floppy disks but wasted space on hard disks and is considered obsolete. The Fast File System (FFS) can handle file names up to 30 characters, has international settings (it can optionally recognise upper- and lower-case accented letters as equivalent) and could also be cached, if the users chose to format the partition with the cache option. The FFS filesystem evolved into FFS2. Modern journaling file systems for Amiga are the Smart File System (SFS) and Professional File System (PFS). The MultiUser File System (MuFS) supports multiple users. Using MuFS, the owner of the system could grant various privileges on files by creating privileges for groups and users. It was first available with the Ariadne Ethernet card, and later standalone. The Professional File System suite has a utility to let PFS be patched to support MuFS features. The latest version is 1.8 and was released in 2001. CrossDOS is a utility to read MS-DOS formatted floppy disks in the FAT12 and FAT16 filesystems, either 720 KiB double-density or 1440 KiB high-density format (on connected floppy drives that can read 1440 KiB MS-DOS disks). It is a commercial product, and a slightly cut-down version was included with AmigaOS beginning with version 2.1. The FAT95 library recognizes partitions of various filesystems common in other systems such as FAT16 and FAT32. It also reads DOS floppies and USB pen drives formatted with FAT16 or FAT32. Filesystems like ext2 for Linux, NTFS from Microsoft, and more are supported by third-party developers. MorphOS natively supports SFS, FFS/FFS2, PFS, MacOS HFS, HFS+, Linux Ext2, FAT16, FAT32, and NTFS filesystems. Data/file types The Datatype system of AmigaOS is a centralized, expandable, modular system describing any kind of file (text, music, images, videos). Each has a standard load/save module. 
Any experienced programmer, using the Amiga Datatype programming guidelines, could create new standard datatype modules. The module could be left visible to the whole Amiga system (thus to all Amiga programs) by copying the datatype into the system directory SYS:Classes/DataTypes/, and the descriptor (used to identify files) into DEVS:DataTypes/. This allows programs to load and save any files for which the corresponding datatypes exist. File descriptors did not need to be embedded in the executable code. An independent system of loaders was not needed for new productivity software. Amiga productivity software tools therefore have a smaller size and a cleaner design than similar programs running on other operating systems. Supported Amiga datatypes include: MultiView MultiView is the Amiga universal viewer. It can load and display any file for which a corresponding datatype exists. MIME types Modern Amiga-like operating systems such as AmigaOS 4.0 and MorphOS can also handle MIME types. Any kind of file can be associated with a program that handles it, based on its filename extension or on data embedded in the file itself (for example, in the file header); this feature improves and completes the Amiga's ability to recognize and deal with any kind of file. Device support USB The only known historical USB stack for the Amiga was created for the MacroSystem DraCo Amiga clone. It supported only USB 1.0 and ceased with the demise of that platform. Modern USB support drivers for Amiga are: Poseidon USB stack, available for AmigaOS 3, AROS, and MorphOS, by Chris Hodges (open-source software). Poseidon has a modular approach to USB, and various hardware devices are supported through a number of device classes. Sirion, the USB stack of AmigaOS 4.0. ANAIIS (Another Native Amiga IO Interface Stack) by Gilles Pelletier. FireWire (IEEE 1394) The only known historical Amiga support for FireWire was built for the DraCo Amiga clone by Macrosystem. Only one FireWire interface exists for Amiga. It is named Fireworks, and it was created for the MorphOS system by programmer Pavel Fedin. It is still in an early stage of development and is freely downloadable. Printer drivers The print manager program TurboPrint, by German firm IrseeSoft, is the de facto standard for advanced printing on the Amiga. It is a modular program with many drivers which support many modern printers. PrintStudio Professional I and II are another well-known printer driver system for the Amiga. PrintManager v39 by Stephan Rupprecht, available at the Aminet repository, is a print spooler for AmigaOS 3.x and 4.0. Video digitizers Video digitizing includes DigiView; the FrameMachine Zorro II expansion card for A2000, 3000, 4000; the Impact Vision IV24 from GVP; the VidiAmiga real-time digitizer; and the Paloma module for the Picasso IV graphics card. Graphic Tablets In the 1980s, Summagraphics tablets were common. Summagraphics directly supported the Amiga with its drivers. In 1994, GTDriver (Graphic Tablet Driver) was the most common driver for serial port tablets, like Summagraphics MM, Summagraphics Bitpadone, CalComp 2000, Cherry, TekTronix 4967, and WACOM. It could also be used as a mouse driver. Graphics tablets are now mainly USB devices and are automatically recognized by Amiga USB stacks. The most widely used driver for graphic tablets is FormAldiHyd. FormAldiHyd can be used with Aiptek, Aldi, Tevion, and WACOM IV (Graphire, ArtPad, A3, A4, A5, and PenPartner) graphic tablets. 
The Poseidon USB driver, written by the same author as FormAldiHyd, Chris Hodges, directly supports USB graphics tablets, including models more modern than those FormAldiHyd supports. Scanner drivers Amiga programs often have scanner drivers embedded in their interface and are limited to some ancient scanner models. One example is Art Department Professional (ADPro). In recent times, scanner support is handled by the Amiga Poseidon USB stack. Poseidon detects scanners from their signature, and loads the corresponding HIDD scanner module. The graphical interface is managed by programs like ScanTrax and ScanQuix. Genlocks, chromakey, signal video inverters The Amiga has special circuitry to support a genlock signal and chromakey. Genlock software vendors included GVP (Great Valley Products) (an American hardware manufacturer) and Hama, Electronic Design, and Sirius genlocks from Germany. Infrared/remote controls The IRCom class is a driver that supports the IRCom standard and is available for the USB Poseidon Stack. Pegasos computers have an internal IrDA port connector for connecting infrared devices, but MorphOS offers no support for it. The internal IrDA port can be used by installing Linux. WiFi and Bluetooth The Amiga can use external WiFi routers, connected physically through an Ethernet cable, to talk with remote WiFi devices. Drivers are available for Prism2 internal PCI and PCMCIA WiFi expansion cards, but there are no drivers for Bluetooth standard devices like mobile phones, Bluetooth handsets, keyboards, or mice. A USB class, called Wireless PC Lock, exists for the Poseidon stack to use the "Wireless PC Lock" USB device by Sitecom Europe BV and engage its security functions. Others In the past, drivers and hardware cards were available to drive the Polaroid Freeze Frame Digital Camera System and the Polaroid Digital Palette CI-3000 and CI-5000, with Polaroid software. Drivers for single-frame video recorders allow users to save on tape the 3D animations created on the Amiga using Ampex and Betacam devices. Also available are time-base correctors (TBCs), a family of devices correcting timing errors; one was the Personal TBC series. The Amiga helped to create and launch digital recorders coupled with an internal hard disk and a DVD drive for file transfer. One was Broadcaster Elite, one of the first digital video recorders, based on a SCSI system and a Zorro II Amiga expansion card. Expansion cards could transform an Amiga into a waveform monitor or vectorscope. The Phonepak card from GVP transformed the Amiga into a telephone switchboard, fax system, and SOHO (small office/home office) answering machine. The Amiga was used as a video titler system in the experimental era of high-definition television. A battery of three Amigas was used as a video titler in analog HDTV experiments on the NTSC 1125-line standard, by channels like ESPN, ABC, and NBC. See also Amiga productivity software Amiga music software Amiga programming languages Amiga Internet and communications software References Amiga Support and maintenance Lists of software
Operating System (OS)
687
Computer program product The terms "computer program product" and "program product" may refer to: Software as a product T 1173/97, a decision by the Board of Appeal in the European Patent Office, which is also known as Computer program product/IBM or simply Computer program product computer programming when considered as a product of labor a Licensed Program Product, a concept that is part of the IBM AIX operating system
Operating System (OS)
688
WOA WOA may refer to: Computing Web-oriented architecture, a computer systems architectural style Windows RT, a computer operating system formerly known as Windows on ARM WebObjects application, the file system suffix of an application written using the WebObjects framework from NeXT, later Apple Wars War of Attrition, a conflict between Israel and Egypt War of aggression Music Wacken Open Air, the largest exclusively metal music festival in the world War of Ages, metalcore band from Pennsylvania W.O.A Records of India and Dubai Sports World Olympians Association Welsh Orienteering Association Other uses World Ocean Atlas World of Art, a series of books on art the ICAO airline designator for World Airways
Operating System (OS)
689
IBM System/4 Pi The IBM System/4 Pi is a family of avionics computers used, in various versions, on the F-15 Eagle fighter, E-3 Sentry AWACS, Harpoon Missile, NASA's Skylab, MOL, and the Space Shuttle, as well as other aircraft. Development began in 1965, deliveries in 1967. It descends from the approach used in the System/360 mainframe family of computers, in which the members of the family were intended for use in many varied user applications. (This is expressed in the name: there are 4π steradians in a sphere, just as there are 360 degrees in a circle.) Previously, custom computers had been designed for each aerospace application, which was extremely costly. Models System/4 Pi consisted of basic models: Model TC (Tactical Computer) - A briefcase-size computer for applications such as missile guidance, helicopters, satellites and submarines. Weight: about Model CP (Customized Processor/Cost Performance) - An intermediate-range processor for applications such as aircraft navigation, weapons delivery, radar correlation and mobile battlefield systems. Weight: total Model CP-2 (Cost Performance - Model 2), weight Model EP (Extended Performance) - A large-scale data processor for applications requiring real-time processing of large volumes of data, such as crewed spacecraft, airborne warning and control systems and command and control systems. Weight: System/360 connections Connections with System/360: Main storage arrays of System/4 Pi were assembled from core planes that were militarized versions of those used in IBM System/360 computers Software was for both 360 and 4 Pi Model EP used an instruction subset of IBM System/360 (Model 44) - user programs could be checked on System/360 Uses The Skylab space station employed the model TC-1, which had a 16-bit word length and 16,384 words of memory with a custom input/output assembly. AP-101 The AP-101, being the top of the line of the System/4 Pi range, shares its general architecture with the System/360 mainframes. It has 16 32-bit registers, and uses a microprogram to define an instruction set of 154 instructions. Originally only 16 bits were available for addressing memory; later this was extended with four bits from the program status word register, allowing a directly addressable memory range of 1M locations. This avionics computer has been used in the U.S. Space Shuttle, the B-52 and B-1B bombers, and other aircraft. It is a repackaged version of the AP-1 used in the F-15 fighter. When it was designed, it was a high-performance pipelined processor with core memory. While its specifications are exceeded by most modern microprocessors, it was considered high-performance for its era, as it could process 480,000 instructions per second (0.48 MIPS), compared to the 7,000 instructions per second (0.007 MIPS) of the computer used on Gemini spacecraft; top-of-the-line microprocessors as of 2020 are capable of performing more than 2,000,000 MIPS. It remained in service on the Space Shuttle because it worked, was flight-certified, and developing a new system would have been too expensive. The Space Shuttle AP-101s were augmented by glass cockpit technology. The B-1B bomber employs a network of eight model AP-101F computers. The AP-101B originally used in the Shuttle had core memory. The AP-101S upgrade in the early 1990s used semiconductor memory. Each AP-101 on the Shuttle was coupled with an Input-Output Processor (IOP), consisting of one Master Sequence Controller (MSC) and 24 Bus Control Elements (BCEs). 
The MSC and BCEs executed programs from the same memory system as the main CPU, offloading control of the Shuttle's serial data bus system from the CPU. The Space Shuttle used five AP-101 computers as general-purpose computers (GPCs). Four operated in sync, for redundancy, while the fifth was a backup running software written independently. The Shuttle's guidance, navigation and control software was written in HAL/S, a special-purpose high-level programming language, while much of the operating system and low-level utility software was written in assembly language. AP-101s used by the US Air Force are mostly programmed in JOVIAL, such as the system found on the B-1B Lancer bomber. References Bibliography External links IBM Archive: IBM and the Space Shuttle IBM Archive: IBM and Skylab NASA description of Shuttle GPCs NASA history of AP-101 development Space Shuttle Computers and Avionics Guidance computers System/4 Pi Military computers
Operating System (OS)
690
Booting In computing, booting is the process of starting a computer as initiated via hardware such as a button or by a software command. After it is switched on, a computer's central processing unit (CPU) has no software in its main memory, so some process must load software into memory before it can be executed. This may be done by hardware or firmware in the CPU, or by a separate processor in the computer system. Restarting a computer also is called rebooting, which can be "hard", e.g. after electrical power to the CPU is switched from off to on, or "soft", where the power is not cut. On some systems, a soft boot may optionally clear RAM to zero. Both hard and soft booting can be initiated by hardware such as a button press or by a software command. Booting is complete when the operative runtime system, typically the operating system and some applications, is attained. The process of returning a computer from a state of sleep (suspension) does not involve booting; however, restoring it from a state of hibernation does. Minimally, some embedded systems do not require a noticeable boot sequence to begin functioning and when turned on may simply run operational programs that are stored in ROM. All computing systems are state machines, and a reboot may be the only method to return to a designated zero-state from an unintended, locked state. In addition to loading an operating system or stand-alone utility, the boot process can also load a storage dump program for diagnosing problems in an operating system. Boot is short for bootstrap or bootstrap load and derives from the phrase to pull oneself up by one's bootstraps. The usage calls attention to the requirement that, if most software is loaded onto a computer by other software already running on the computer, some mechanism must exist to load the initial software onto the computer. Early computers used a variety of ad-hoc methods to get a small program into memory to solve this problem. The invention of read-only memory (ROM) of various types solved this paradox by allowing computers to be shipped with a start-up program that could not be erased. Growth in the capacity of ROM has allowed ever more elaborate start-up procedures to be implemented. History There are many different methods available to load a short initial program into a computer. These methods range from simple physical input to removable media that can hold more complex programs. Pre integrated-circuit-ROM examples Early computers Early computers in the 1940s and 1950s were one-of-a-kind engineering efforts that could take weeks to program, and program loading was one of many problems that had to be solved. An early computer, ENIAC, had no program stored in memory, but was set up for each problem by a configuration of interconnecting cables. Bootstrapping did not apply to ENIAC, whose hardware configuration was ready for solving problems as soon as power was applied. The EDSAC system, the second stored-program computer to be built, used stepping switches to transfer a fixed program into memory when its start button was pressed. The program stored on this device, which David Wheeler completed in late 1948, loaded further instructions from punched tape and then executed them. First commercial computers The first programmable computers for commercial sale, such as the UNIVAC I and the IBM 701, included features to make their operation simpler. They typically included instructions that performed a complete input or output operation. 
The same hardware logic could be used, by pressing a single button, to load the contents of a punched card (the most typical medium) or of other input media, such as a magnetic drum or magnetic tape, that contained a bootstrap program. This booting concept was called a variety of names for IBM computers of the 1950s and early 1960s, but IBM used the term "Initial Program Load" with the IBM 7030 Stretch and later used it for their mainframe lines, starting with the System/360 in 1964. The IBM 701 computer (1952–1956) had a "Load" button that initiated reading of the first 36-bit word into main memory from a punched card in a card reader, a magnetic tape in a tape drive, or a magnetic drum unit, depending on the position of the Load Selector switch. The left 18-bit half-word was then executed as an instruction, which usually read additional words into memory. The loaded boot program was then executed, which, in turn, loaded a larger program from that medium into memory without further help from the human operator. The term "boot" has been used in this sense since at least 1958. Other IBM computers of that era had similar features. For example, the IBM 1401 system (c. 1958) used a card reader to load a program from a punched card. The 80 characters stored in the punched card were read into memory locations 001 to 080, then the computer would branch to memory location 001 to read its first stored instruction. This instruction was always the same: move the information in these first 80 memory locations to an assembly area where the information in punched cards 2, 3, 4, and so on, could be combined to form the stored program. Once this information was moved to the assembly area, the machine would branch to an instruction in location 080 (read a card) and the next card would be read and its information processed. Another example was the IBM 650 (1953), a decimal machine, which had a group of ten 10-position switches on its operator panel which were addressable as a memory word (address 8000) and could be executed as an instruction. Thus setting the switches to 7004000400 and pressing the appropriate button would read the first card in the card reader into memory (op code 70), starting at address 400, and then jump to 400 to begin executing the program on that card. IBM's competitors also offered single-button program load. The CDC 6600 (c. 1964) had a dead start panel with 144 toggle switches; the dead start switch entered 12 words from the toggle switches to the memory of peripheral processor (PP) 0 and initiated the load sequence. PP 0 loaded the necessary code into its own memory and then initialized the other PPs. The GE 645 (c. 1965) had a "SYSTEM BOOTLOAD" button that, when pressed, caused one of the I/O controllers to load a 64-word program into memory from a diode read-only memory and deliver an interrupt to cause that program to start running. The first model of the PDP-10 had a "READ IN" button that, when pressed, reset the processor and started an I/O operation on a device specified by switches on the control panel, reading in a 36-bit word giving a target address and count for subsequent word reads; when the read completed, the processor started executing the code read in by jumping to the last word read in. A noteworthy variation of this is found on the Burroughs B1700, where there is neither a bootstrap ROM nor a hardwired IPL operation. 
Instead, after the system is reset, it reads and executes opcodes sequentially from a tape drive mounted on the front panel; this sets up a boot loader in RAM which is then executed. However, since this makes few assumptions about the system, it can equally well be used to load diagnostic (Maintenance Test Routine) tapes which display an intelligible code on the front panel even in cases of gross CPU failure. IBM System/360 and successors In the IBM System/360 and its successors, including the current z/Architecture machines, the boot process is known as Initial Program Load (IPL). IBM coined this term for the 7030 (Stretch), revived it for the design of the System/360, and continues to use it in those environments today. In the System/360 processors, an IPL is initiated by the computer operator by selecting the three hexadecimal digit device address (CUU; C=I/O Channel address, UU=Control unit and Device address) followed by pressing the LOAD button. On the high-end System/360 models, most System/370 and some later systems, the functions of the switches and the LOAD button are simulated using selectable areas on the screen of a graphics console, often an IBM 2250-like device or an IBM 3270-like device. For example, on the System/370 Model 158, the keyboard sequence 0-7-X (zero, seven and X, in that order) results in an IPL from the device address which was keyed into the input area. The Amdahl 470V/6 and related CPUs supported four hexadecimal digits on those CPUs which had the optional second channel unit installed, for a total of 32 channels. Later, IBM would also support more than 16 channels. The IPL function in the System/360 and its successors prior to IBM Z, and its compatibles such as Amdahl's, reads 24 bytes from an operator-specified device into main storage starting at real address zero. The second and third groups of eight bytes are treated as Channel Command Words (CCWs) to continue loading the startup program (the first CCW is always simulated by the CPU and consists of a Read IPL command, X'02', with command chaining and suppress incorrect length indication being enforced). When the I/O channel commands are complete, the first group of eight bytes is then loaded into the processor's Program Status Word (PSW) and the startup program begins execution at the location designated by that PSW. The IPL device is usually a disk drive, hence the special significance of the read-type command, but exactly the same procedure is also used to IPL from other input-type devices, such as tape drives, or even card readers, in a device-independent manner, allowing, for example, the installation of an operating system on a brand-new computer from an OS initial distribution magnetic tape. For disk controllers, the command also causes the selected device to seek to cylinder 0, head 0, simulating a Seek cylinder and head command, X'07', and to search for record 1, simulating a Search ID Equal command, X'31'; seeks and searches are not simulated by tape and card controllers, as for these device classes a Read IPL command is simply a sequential read command. The disk, tape or card deck must contain a special program to load the actual operating system or standalone utility into main storage, and for this specific purpose "IPL Text" is placed on the disk by the stand-alone DASDI (Direct Access Storage Device Initialization) program or an equivalent program running under an operating system, e.g., ICKDSF, but IPL-able tapes and card decks are usually distributed with this "IPL Text" already present. 
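The 24-byte IPL record layout just described can be made concrete with a short sketch. The following Python fragment is an illustration only (the function name and field decoding are ours, not IBM's): it splits the record into the initial PSW and the two CCWs, using the classic System/360 CCW layout of command code, 24-bit data address, flags, and count.

```python
def parse_ipl_record(record: bytes):
    """Split a 24-byte System/360 IPL record into PSW + two CCWs.

    Illustrative only: the first 8 bytes become the Program Status
    Word loaded after I/O completes; bytes 8-23 are the two Channel
    Command Words used to continue reading the startup program.
    """
    assert len(record) == 24
    psw = record[0:8]
    ccws = []
    for offset in (8, 16):
        ccw = record[offset:offset + 8]
        command = ccw[0]                             # channel command code
        data_addr = int.from_bytes(ccw[1:4], "big")  # 24-bit data address
        flags = ccw[4]                               # e.g. command chaining
        count = int.from_bytes(ccw[6:8], "big")      # byte count
        ccws.append((command, data_addr, flags, count))
    return psw, ccws

psw, ccws = parse_ipl_record(bytes(24))   # all-zero record, just to exercise it
```

Feeding it the 24 bytes read from the IPL device would yield the PSW under which the loaded program starts and the CCWs that chain-load the rest of the IPL text.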
IBM introduced some evolutionary changes in the IPL process, changing some details for System/370 Extended Architecture (S/370-XA) and later, and adding a new type of IPL for z/Architecture. Minicomputers Minicomputers, starting with the Digital Equipment Corporation (DEC) PDP-5 and PDP-8 (1965), simplified design by using the CPU to assist input and output operations. This saved cost but made booting more complicated than pressing a single button. Minicomputers typically had some way to toggle in short programs by manipulating an array of switches on the front panel. Since the early minicomputers used magnetic core memory, which did not lose its information when power was off, these bootstrap loaders would remain in place unless they were erased. Erasure sometimes happened accidentally when a program bug caused a loop that overwrote all of memory. Other minicomputers with such a simple form of booting include Hewlett-Packard's HP 2100 series (mid-1960s), the original Data General Nova (1969), and DEC's PDP-11 (1970). DEC later added an optional diode matrix read-only memory for the PDP-11 that stored a bootstrap program of up to 32 words (64 bytes). It consisted of a printed circuit card, the M792, that plugged into the Unibus and held a 32 by 16 array of semiconductor diodes. With all 512 diodes in place, the memory contained all "one" bits; the card was programmed by cutting off each diode whose bit was to be "zero". DEC also sold versions of the card, the BM792-Yx series, pre-programmed for many standard input devices by simply omitting the unneeded diodes. Following the older approach, the earlier PDP-1 has a hardware loader, such that an operator need only push the "load" switch to instruct the paper tape reader to load a program directly into core memory. The Data General Supernova used front panel switches to cause the computer to automatically load instructions into memory from a device specified by the front panel's data switches, and then jump to loaded code; the Nova 800 and 1200 had a switch that loaded a program into main memory from a special read-only memory and jumped to it. Early minicomputer boot loader examples In a minicomputer with a paper tape reader, the first program to run in the boot process, the boot loader, would read into core memory either the second-stage boot loader (often called a Binary Loader) that could read paper tape with checksum or the operating system from an outside storage medium. Pseudocode for the boot loader might be as simple as the following eight instructions:

1. Set the P register to 9
2. Check paper tape reader ready
3. If not ready, jump to 2
4. Read a byte from paper tape reader to accumulator
5. Store accumulator to address in P register
6. If end of tape, jump to 9
7. Increment the P register
8. Jump to 2

A related example is based on a loader for a Nicolet Instrument Corporation minicomputer of the 1970s, using the paper tape reader-punch unit on a Teletype Model 33 ASR teleprinter. The bytes of its second-stage loader are read from paper tape in reverse order:

1. Set the P register to 106
2. Check paper tape reader ready
3. If not ready, jump to 2
4. Read a byte from paper tape reader to accumulator
5. Store accumulator to address in P register
6. Decrement the P register
7. Jump to 2

The length of the second-stage loader is such that the final byte overwrites location 7. After the instruction in location 6 executes, location 7 starts the second-stage loader executing. 
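For illustration, the eight-step loader above can be simulated in a few lines of Python. This is a hypothetical model, not period code: memory, the P register, and the tape are ordinary Python values, and the reader-ready polling of steps 2-3 is collapsed into plain iteration.

```python
def bootstrap_load(tape: list[int], memory: list[int]) -> None:
    """Simulate the 8-step paper-tape bootstrap loader above.

    Bytes from the tape are stored at successive addresses starting
    at 9 (just past the loader itself) until the tape runs out.
    """
    p = 9                      # step 1: P register points past the loader
    for byte in tape:          # steps 2-4, 8: wait for reader, read a byte
        memory[p] = byte       # step 5: store at the address in P
        p += 1                 # step 7: increment P
    # step 6: on end of tape, control passes to the loaded program at 9

memory = [0] * 256
bootstrap_load(tape=[0x10, 0x20, 0x30], memory=memory)
print(memory[9:12])            # [16, 32, 48]
```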
The second-stage loader then waits for the much longer tape containing the operating system to be placed in the tape reader. The difference between the boot loader and second-stage loader is the addition of checking code to trap paper tape read errors, a frequent occurrence with relatively low-cost, "part-time-duty" hardware, such as the Teletype Model 33 ASR. (Friden Flexowriters were far more reliable, but also comparatively costly.) Booting the first microcomputers The earliest microcomputers, such as the Altair 8800 (released first in 1975) and an even earlier, similar machine (based on the Intel 8008 CPU), had no bootstrapping hardware as such. When started, the CPU would see memory that would contain executable code containing only binary zeros—memory was cleared by resetting when powering up. The front panels of these machines carried toggle switches for entering addresses and data, one switch per bit of the computer memory word and address bus. Simple additions to the hardware permitted one memory location at a time to be loaded from those switches to store bootstrap code. Meanwhile, the CPU was kept from attempting to execute memory content. Once correctly loaded, the CPU was enabled to execute the bootstrapping code. This process was tedious and had to be error-free. Integrated circuit read-only memory era The boot process for minicomputers and microcomputers was revolutionized by the introduction of integrated circuit read-only memory (ROM), with its many variants, including mask-programmed ROMs, programmable ROMs (PROM), erasable programmable ROMs (EPROM), and flash memory. These allowed firmware boot programs to be included as part of the computer. The introduction of an (external) ROM was in an Italian telephone switching computer, called "Gruppi Speciali", patented in 1975 by Alberto Ciaramella, a researcher at CSELT. Gruppi Speciali was, starting from 1975, a fully single-button machine booting into the operating system from a ROM composed of semiconductors, not of ferrite cores. Although the ROM device was not natively embedded in the computer of Gruppi Speciali, due to the design of the machine, it also allowed single-button ROM booting in machines not designed for that (therefore, this "bootstrap device" was architecture-independent), e.g. the PDP-11. Storing the state of the machine after switch-off was also in place, which was another critical feature in the telephone switching context. Typically, every microprocessor will, after a reset or power-on condition, perform a start-up process that usually takes the form of "begin execution of the code that is found starting at a specific address" or "look for a multibyte code at a specific address and jump to the indicated location to begin execution". A system built using that microprocessor will have the permanent ROM occupying these special locations so that the system always begins operating without operator assistance. For example, Intel x86 processors always start by running the instructions beginning at F000:FFF0, while for the MOS 6502 processor, initialization begins by reading a two-byte vector address at $FFFD (MS byte) and $FFFC (LS byte) and jumping to that location to run the bootstrap code. Apple Inc.'s first computer, the Apple 1 introduced in 1976, featured PROM chips that eliminated the need for a front panel for the boot process (as was the case with the Altair 8800) in a commercial computer. According to Apple's ad announcing it, "No More Switches, No More Lights ... 
the firmware in PROMS enables you to enter, display and debug programs (all in hex) from the keyboard." Due to the expense of read-only memory at the time, the Apple II series booted its disk operating systems using a series of very small incremental steps, each passing control onward to the next phase of the gradually more complex boot process. (See Apple DOS: Boot loader.) Because so little of the disk operating system relied on ROM, the hardware was also extremely flexible and supported a wide range of customized disk copy protection mechanisms. (See Software Cracking: History.) Some operating systems, most notably pre-1995 Macintosh systems from Apple, are so closely interwoven with their hardware that it is impossible to natively boot an operating system other than the standard one. This is the opposite extreme of the scenario using switches mentioned above; it is highly inflexible but relatively error-proof and foolproof as long as all hardware is working normally. A common solution in such situations is to design a boot loader that works as a program belonging to the standard OS that hijacks the system and loads the alternative OS. This technique was used by Apple for its A/UX Unix implementation and copied by various freeware operating systems and BeOS Personal Edition 5. Some machines, like the Atari ST microcomputer, were "instant-on", with the operating system executing from a ROM. Retrieval of the OS from secondary or tertiary store was thus eliminated as one of the characteristic operations for bootstrapping. To allow system customizations, accessories, and other support software to be loaded automatically, the Atari's floppy drive was read for additional components during the boot process. There was a timeout delay that provided time to manually insert a floppy as the system searched for the extra components. This could be avoided by inserting a blank disk. The Atari ST hardware was also designed so the cartridge slot could provide native program execution for gaming purposes as a holdover from Atari's legacy of making electronic games; by inserting the Spectre GCR cartridge with the Macintosh system ROM in the game slot and turning the Atari on, it could "natively boot" the Macintosh operating system rather than Atari's own TOS. The IBM Personal Computer included ROM-based firmware called the BIOS; one of the functions of that firmware was to perform a power-on self-test when the machine was powered up, and then to read software from a boot device and execute it. Firmware compatible with the BIOS on the IBM Personal Computer is used in IBM PC compatible computers. UEFI was developed by Intel, originally for Itanium-based machines, and later also used as an alternative to the BIOS in x86-based machines, including Apple Macs using Intel processors. Unix workstations originally had vendor-specific ROM-based firmware. Sun Microsystems later developed OpenBoot, later known as Open Firmware, which incorporated a Forth interpreter, with much of the firmware being written in Forth. It was standardized by the IEEE as IEEE standard 1275; firmware that implements that standard was used in PowerPC-based Macs and some other PowerPC-based machines, as well as Sun's own SPARC-based computers. The Advanced RISC Computing specification defined another firmware standard, which was implemented on some MIPS-based and Alpha-based machines and the SGI Visual Workstation x86-based workstations. 
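The 6502 reset-vector convention described earlier in this section is easy to demonstrate. The sketch below is illustrative, not vendor code: it extracts the little-endian vector from the top of a 64 KB memory image, which is where a 6502 begins execution after reset.

```python
def reset_vector_6502(memory: bytes) -> int:
    """Return the address a 6502 jumps to after reset.

    The CPU reads the low byte from $FFFC and the high byte from
    $FFFD, then begins executing at that little-endian address.
    """
    assert len(memory) == 0x10000
    return memory[0xFFFC] | (memory[0xFFFD] << 8)

rom = bytearray(0x10000)
rom[0xFFFC], rom[0xFFFD] = 0x00, 0x80       # vector -> $8000
print(hex(reset_vector_6502(bytes(rom))))   # 0x8000
```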
Modern boot loaders When a computer is turned off, its software, including operating systems, application code, and data, remains stored on non-volatile memory. When the computer is powered on, it typically does not have an operating system or its loader in random-access memory (RAM). The computer first executes a relatively small program stored in read-only memory (ROM, and later EEPROM, NOR flash), along with some needed data, to initialize RAM (especially on x86 systems), and to access the nonvolatile device (usually a block device, e.g. NAND flash) or devices from which the operating system programs and data can be loaded into RAM. The small program that starts this sequence is known as a bootstrap loader, bootstrap or boot loader. Often, multiple-stage boot loaders are used, during which several programs of increasing complexity load one after the other in a process of chain loading. Some earlier computer systems, upon receiving a boot signal from a human operator or a peripheral device, may load a very small number of fixed instructions into memory at a specific location, initialize at least one CPU, and then point the CPU to the instructions and start their execution. These instructions typically start an input operation from some peripheral device (which may be switch-selectable by the operator). Other systems may send hardware commands directly to peripheral devices or I/O controllers that cause an extremely simple input operation (such as "read sector zero of the system device into memory starting at location 1000") to be carried out, effectively loading a small number of boot loader instructions into memory; a completion signal from the I/O device may then be used to start execution of the instructions by the CPU. Smaller computers often use less flexible but more automatic boot loader mechanisms to ensure that the computer starts quickly and with a predetermined software configuration. In many desktop computers, for example, the bootstrapping process begins with the CPU executing software contained in ROM (for example, the BIOS of an IBM PC) at a predefined address (some CPUs, including the Intel x86 series, are designed to execute this software after reset without outside help). This software contains rudimentary functionality to search for devices eligible to participate in booting, and to load a small program from a special section (most commonly the boot sector) of the most promising device, typically starting at a fixed entry point such as the start of the sector. Boot loaders may face peculiar constraints, especially in size; for instance, on the IBM PC and compatibles, the boot code must fit in the Master Boot Record (MBR) and the Partition Boot Record (PBR), which in turn are limited to a single sector; on the IBM System/360, the size is limited by the IPL medium, e.g., card size, track size. On systems with those constraints, the first program loaded into RAM may not be sufficiently large to load the operating system and, instead, must load another, larger program. The first program loaded into RAM is called a first-stage boot loader, and the program it loads is called a second-stage boot loader. First-stage boot loader Examples of first-stage (ROM stage or hardware initialization stage) bootloaders include BIOS, UEFI, coreboot, Libreboot and Das U-Boot. 
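Chain loading, as just described, can be sketched in miniature. The following toy Python model (every name in it is invented for illustration) shows the essential shape: each stage knows only enough to fetch the next, larger stage and transfer control to it, never returning.

```python
# Toy model of multi-stage chain loading; every name here is
# illustrative, not a real firmware API.
DISK = {
    "sector0": "stage2",   # first-stage loader knows one fixed sector
    "stage2": "kernel",
    "kernel": None,        # final stage: nothing left to load
}

def run_stage(name: str) -> None:
    """'Execute' a stage: load whatever it points at, then jump to it."""
    print(f"{name}: running")
    next_stage = DISK[name]
    if next_stage is not None:
        print(f"{name}: loading {next_stage} and transferring control")
        run_stage(next_stage)   # control never returns to earlier stages

run_stage("sector0")            # firmware loads the fixed first sector
```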
On the IBM PC, the boot loader in the Master Boot Record (MBR) and the Partition Boot Record (PBR) was coded to require at least 32 KB (later tightened to 64 KB) of system memory and only use instructions supported by the original 8088/8086 processors. Second-stage boot loader Second-stage boot loaders, such as GNU GRUB, rEFInd, BOOTMGR, Syslinux, NTLDR or iBoot, are not themselves operating systems, but are able to load an operating system properly and transfer execution to it; the operating system subsequently initializes itself and may load extra device drivers. The second-stage boot loader does not need drivers for its own operation, but may instead use generic storage access methods provided by system firmware such as the BIOS, UEFI or Open Firmware, though typically with restricted hardware functionality and lower performance. Many boot loaders (like GNU GRUB, rEFInd, Windows's BOOTMGR, Syslinux, and Windows NT/2000/XP's NTLDR) can be configured to give the user multiple booting choices. These choices can include different operating systems (for dual or multi-booting from different partitions or drives), different versions of the same operating system (in case a new version has unexpected problems), different operating system loading options (e.g., booting into a rescue or safe mode), and some standalone programs that can function without an operating system, such as memory testers (e.g., memtest86+), a basic shell (as in GNU GRUB), or even games (see List of PC Booter games). Some boot loaders can also load other boot loaders; for example, GRUB loads BOOTMGR instead of loading Windows directly. Usually a default choice is preselected with a time delay during which a user can press a key to change the choice; after this delay, the default choice is automatically run so normal booting can occur without interaction. The boot process can be considered complete when the computer is ready to interact with the user, or the operating system is capable of running system programs or application programs. Many embedded systems must boot immediately. For example, waiting a minute for a digital television or a GPS navigation device to start is generally unacceptable. Therefore, such devices have software systems in ROM or flash memory so the device can begin functioning immediately; little or no loading is necessary, because the loading can be precomputed and stored on the ROM when the device is made. Large and complex systems may have boot procedures that proceed in multiple phases until finally the operating system and other programs are loaded and ready to execute. Because operating systems are designed as if they never start or stop, a boot loader might load the operating system, configure itself as a mere process within that system, and then irrevocably transfer control to the operating system. The boot loader then terminates normally as any other process would. Network booting Most computers are also capable of booting over a computer network. In this scenario, the operating system is stored on the disk of a server, and certain parts of it are transferred to the client using a simple protocol such as the Trivial File Transfer Protocol (TFTP). After these parts have been transferred, the operating system takes over the control of the booting process. As with the second-stage boot loader, network booting begins by using generic network access methods provided by the network interface's boot ROM, which typically contains a Preboot Execution Environment (PXE) image. 
No drivers are required, but the system functionality is limited until the operating system kernel and drivers are transferred and started. As a result, once the ROM-based booting has completed, it is entirely possible to network boot into an operating system that itself does not have the ability to use the network interface. Personal computers (PC) Boot devices The boot device is the device from which the operating system is loaded. A modern PC's UEFI or BIOS firmware supports booting from various devices, typically a local solid-state drive or hard disk drive via the GPT or Master Boot Record (MBR) on such a drive or disk, an optical disc drive (using El Torito), a USB mass storage device (FTL-based flash drive, SD card or multi-media card slot, USB hard disk drive, USB optical disc drive, etc.), or a network interface card (using PXE). Older, less common BIOS-bootable devices include floppy disk drives, Zip drives, and LS-120 drives. Typically, the firmware (UEFI or BIOS) will allow the user to configure a boot order. If the boot order is set to "first, the DVD drive; second, the hard disk drive", then the firmware will try to boot from the DVD drive, and if this fails (e.g. because there is no DVD in the drive), it will try to boot from the local hard disk drive. For example, on a PC with Windows installed on the hard drive, the user could set the boot order to the one given above, and then insert a Linux Live CD in order to try out Linux without having to install an operating system onto the hard drive. This is an example of dual booting, in which the user chooses which operating system to start after the computer has performed its power-on self-test (POST). In this example of dual booting, the user chooses by inserting or removing the DVD from the computer, but it is more common to choose which operating system to boot by selecting from a boot manager menu on the selected device, by using the computer keyboard to select from a BIOS or UEFI Boot Menu, or both; the Boot Menu is typically entered by pressing a key such as F8 or F12 during the POST, and the BIOS Setup by pressing a key such as F2 or Del. Several devices are available that enable the user to quick-boot into what is usually a variant of Linux for various simple tasks such as Internet access; examples are Splashtop and Latitude ON. Boot sequence Upon starting, an IBM-compatible personal computer's x86 CPU executes, in real mode, the instruction located at the reset vector (physical memory address FFFF0h on 16-bit x86 processors and FFFFFFF0h on 32-bit and 64-bit x86 processors), usually pointing to the firmware (UEFI or BIOS) entry point inside the ROM. This memory location typically contains a jump instruction that transfers execution to the location of the firmware (UEFI or BIOS) start-up program. This program runs a power-on self-test (POST) to check and initialize required devices such as main memory (DRAM), the PCI bus and the PCI devices (including running embedded Option ROMs). One of the most involved steps is setting up DRAM over SPD, further complicated by the fact that at this point memory is very limited. After initializing required hardware, the firmware (UEFI or BIOS) goes through a pre-configured list of non-volatile storage devices ("boot device sequence") until it finds one that is bootable. 
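The scan over the boot device sequence amounts to a simple loop. The sketch below is a schematic illustration only; the device names and the bootability test are stand-ins (on a real BIOS the test would be something like the MBR signature check described next).

```python
def first_bootable(boot_order, is_bootable):
    """Walk the configured boot-device sequence and return the first
    device that passes the firmware's bootability test, or None."""
    for device in boot_order:
        if is_bootable(device):
            return device
    return None

# e.g. DVD first, then the local disk, as in the example above;
# here the (empty) DVD drive fails the test and the disk is chosen:
order = ["dvd0", "hdd0"]
print(first_bootable(order, lambda d: d == "hdd0"))   # hdd0
```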
A bootable MBR device is defined as one that can be read from, and where the last two bytes of the first sector contain the little-endian word AA55h, found as the byte sequence 55h, AAh on disk (also known as the MBR boot signature), or where it is otherwise established that the code inside the sector is executable on x86 PCs. Once the BIOS has found a bootable device, it loads the boot sector to linear address 7C00h (usually segment:offset 0000h:7C00h, but some BIOSes erroneously use 07C0h:0000h) and transfers execution to the boot code. In the case of a hard disk, this is referred to as the Master Boot Record (MBR). The conventional MBR code checks the MBR's partition table for a partition set as bootable (the one with the active flag set). If an active partition is found, the MBR code loads the boot sector code from that partition, known as the Volume Boot Record (VBR), and executes it. The MBR boot code is often operating-system specific. The boot sector code is the first-stage boot loader. It is located on fixed disks and removable drives, and must fit into the first 446 bytes of the Master Boot Record in order to leave room for the default 64-byte partition table with four partition entries and the two-byte boot signature, which the BIOS requires for a proper boot loader — or even less, when additional features like more than four partition entries (up to 16 with 16 bytes each), a disk signature (6 bytes), a disk timestamp (6 bytes), an Advanced Active Partition (18 bytes) or special multi-boot loaders have to be supported as well in some environments. In floppy and superfloppy Volume Boot Records, up to 59 bytes are occupied for the Extended BIOS Parameter Block on FAT12 and FAT16 volumes since DOS 4.0, whereas the FAT32 EBPB introduced with DOS 7.1 requires 87 bytes, leaving only 423 bytes for the boot loader when assuming a sector size of 512 bytes. Microsoft boot sectors therefore traditionally imposed certain restrictions on the boot process; for example, the boot file had to be located at a fixed position in the root directory of the file system and stored as consecutive sectors, conditions taken care of by the SYS command and slightly relaxed in later versions of DOS. The boot loader was then able to load the first three sectors of the file into memory, which happened to contain another embedded boot loader able to load the remainder of the file into memory. When Microsoft added LBA and FAT32 support, they even switched to a boot loader reaching over two physical sectors and using 386 instructions for size reasons. At the same time, other vendors managed to squeeze much more functionality into a single boot sector without relaxing the original constraints on only minimal available memory (32 KB) and processor support (8088/8086). For example, DR-DOS boot sectors are able to locate the boot file in the FAT12, FAT16 and FAT32 file system, and load it into memory as a whole via CHS or LBA, even if the file is not stored in a fixed location and in consecutive sectors. The VBR is often OS-specific; however, its main function is to load and execute the operating system boot loader file (such as bootmgr or ntldr), which is the second-stage boot loader, from an active partition. Then the boot loader loads the OS kernel from the storage device. If there is no active partition, or the active partition's boot sector is invalid, the MBR may load a secondary boot loader, which will select a partition (often via user input) and load its boot sector, which usually loads the corresponding operating system kernel. 
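The on-disk layout just described (446 bytes of code, a 64-byte partition table at offset 446, and the two-byte signature at offset 510) is simple enough to parse directly. The following Python sketch is an illustration of the layout, not production tooling: it verifies the boot signature and looks for a partition entry with the active flag set.

```python
import struct

def inspect_mbr(sector: bytes):
    """Check a 512-byte MBR for the boot signature and an active partition."""
    assert len(sector) == 512
    if sector[510:512] != b"\x55\xaa":           # little-endian word AA55h
        return None                               # not a bootable MBR device
    for i in range(4):                            # four 16-byte entries at 446
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        status, ptype = entry[0], entry[4]        # active flag, partition type
        lba_start, sectors = struct.unpack_from("<II", entry, 8)
        if status & 0x80:                         # active ("bootable") flag
            return {"index": i, "type": ptype,
                    "lba_start": lba_start, "sectors": sectors}
    return None                                   # valid MBR, nothing active

# Usage on a real disk (given sufficient privileges):
#   inspect_mbr(open("/dev/sda", "rb").read(512))
```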
In some cases, the MBR may also attempt to load secondary boot loaders before trying to boot the active partition. If all else fails, it should issue an INT 18h BIOS interrupt call (followed by an INT 19h, just in case INT 18h would return) in order to give back control to the BIOS, which would then attempt to boot off other devices or attempt a remote boot via network. Many modern systems (Intel Macs and newer PCs) use UEFI. Unlike BIOS, UEFI (not Legacy boot via CSM) does not rely on boot sectors: the UEFI system loads the boot loader (an EFI application file on a USB disk or on the EFI System Partition) directly, and the OS kernel is loaded by the boot loader. Other kinds of boot sequences Some modern CPUs and microcontrollers (for example, TI OMAP) or sometimes even DSPs may have boot ROM with boot code integrated directly into their silicon, so such a processor could perform quite a sophisticated boot sequence on its own and load boot programs from various sources like NAND flash, SD or MMC cards and so on. It is difficult to hardwire all the required logic for handling such devices, so an integrated boot ROM is used instead in such scenarios. Boot ROM usage enables more flexible boot sequences than hardwired logic could provide. For example, the boot ROM could try to perform boot from multiple boot sources. Also, a boot ROM is often able to load a boot loader or diagnostic program via serial interfaces like UART, SPI, USB and so on. This feature is often used for system recovery purposes when, for some reason, the usual boot software in non-volatile memory has been erased, and it could also be used for initial non-volatile memory programming when clean non-volatile memory is installed and hence no software is available in the system yet. Some embedded system designs may also include an intermediary boot sequence step in the form of additional code that gets loaded into system RAM by the integrated boot ROM. Additional code loaded that way usually serves as a way of overcoming platform limitations, such as small amounts of RAM, so a dedicated primary boot loader, such as Das U-Boot, can be loaded as the next step in the system's boot sequence. The additional code and boot sequence step are usually referred to as the secondary program loader (SPL). It is also possible to take control of a system by using a hardware debug interface such as JTAG. Such an interface may be used to write the boot loader program into bootable non-volatile memory (e.g. flash) by instructing the processor core to perform the necessary actions to program non-volatile memory. Alternatively, the debug interface may be used to upload some diagnostic or boot code into RAM, and then to start the processor core and instruct it to execute the uploaded code. This allows, for example, the recovery of embedded systems where no software remains on any supported boot device, and where the processor does not have any integrated boot ROM. JTAG is a standard and popular interface; many CPUs, microcontrollers and other devices are manufactured with JTAG interfaces (as of 2009). Some microcontrollers provide special hardware interfaces which cannot be used to take arbitrary control of a system or directly run code, but instead allow the insertion of boot code into bootable non-volatile memory (like flash memory) via simple protocols. Then, at the manufacturing phase, such interfaces are used to inject boot code (and possibly other code) into non-volatile memory. 
After system reset, the microcontroller begins to execute code programmed into its non-volatile memory, just as conventional processors use ROMs for booting. Most notably, this technique is used by Atmel AVR microcontrollers, among others. In many cases such interfaces are implemented by hardwired logic. In other cases such interfaces could be created by software running in integrated on-chip boot ROM from GPIO pins. Most digital signal processors have a serial mode boot and a parallel mode boot, such as the host port interface (HPI boot). In the case of DSPs, there is often a second microprocessor or microcontroller present in the system design, and this is responsible for overall system behavior, interrupt handling, dealing with external events, user interface, etc., while the DSP is dedicated to signal processing tasks only. In such systems the DSP could be booted by another processor, which is sometimes referred to as the host processor (giving its name to a Host Port). Such a processor is also sometimes referred to as the master, since it usually boots first from its own memories and then controls overall system behavior, including booting of the DSP, and then further controls the DSP's behavior. The DSP often lacks its own boot memories and relies on the host processor to supply the required code instead. The most notable systems with such a design are cell phones, modems, audio and video players and so on, where a DSP and a CPU/microcontroller co-exist. Many FPGA chips load their configuration from an external serial EEPROM ("configuration ROM") on power-up. See also Boot disk Bootkit Comparison of boot loaders Linux startup process Macintosh startup Microreboot Multi boot Network booting RedBoot Self-booting disk Windows NT startup process Windows Vista startup process Notes References
Operating System (OS)
691
Computer virus A computer virus is a type of computer program that, when executed, replicates itself by modifying other computer programs and inserting its own code. If this replication succeeds, the affected areas are then said to be "infected" with a computer virus, a metaphor derived from biological viruses. Computer viruses generally require a host program. The virus writes its own code into the host program. When the program runs, the written virus program is executed first, causing infection and damage. A computer worm does not need a host program, as it is an independent program or code chunk. Therefore, it is not restricted by the host program, but can run independently and actively carry out attacks. Virus writers use social engineering deceptions and exploit detailed knowledge of security vulnerabilities to initially infect systems and to spread the virus. The vast majority of viruses target systems running Microsoft Windows, employing a variety of mechanisms to infect new hosts, and often using complex anti-detection/stealth strategies to evade antivirus software. Motives for creating viruses can include seeking profit (e.g., with ransomware), a desire to send a political message, personal amusement, a wish to demonstrate that a vulnerability exists in software, sabotage and denial of service, or simply a wish to explore cybersecurity issues, artificial life and evolutionary algorithms. Computer viruses cause billions of dollars' worth of economic damage each year. In response, an industry of antivirus software has cropped up, selling or freely distributing virus protection to users of various operating systems. History The first academic work on the theory of self-replicating computer programs was done in 1949 by John von Neumann, who gave lectures at the University of Illinois about the "Theory and Organization of Complicated Automata". The work of von Neumann was later published as the "Theory of self-reproducing automata". In his essay, von Neumann described how a computer program could be designed to reproduce itself. Von Neumann's design for a self-reproducing computer program is considered the world's first computer virus, and he is considered to be the theoretical "father" of computer virology. In 1972, Veith Risak, directly building on von Neumann's work on self-replication, published his article "Selbstreproduzierende Automaten mit minimaler Informationsübertragung" (Self-reproducing automata with minimal information exchange). The article describes a fully functional virus written in the assembler programming language for a SIEMENS 4004/35 computer system. In 1980, Jürgen Kraus wrote his diplom thesis "Selbstreproduktion bei Programmen" (Self-reproduction of programs) at the University of Dortmund. In his work Kraus postulated that computer programs can behave in a way similar to biological viruses. The Creeper virus was first detected on ARPANET, the forerunner of the Internet, in the early 1970s. Creeper was an experimental self-replicating program written by Bob Thomas at BBN Technologies in 1971. Creeper used the ARPANET to infect DEC PDP-10 computers running the TENEX operating system. Creeper gained access via the ARPANET and copied itself to the remote system, where the message "I'm the creeper, catch me if you can!" was displayed. The Reaper program was created to delete Creeper. In 1982, a program called "Elk Cloner" was the first personal computer virus to appear "in the wild"—that is, outside the single computer or computer lab where it was created. 
Written in 1981 by Richard Skrenta, a ninth grader at Mount Lebanon High School near Pittsburgh, it attached itself to the Apple DOS 3.3 operating system and spread via floppy disk. On its 50th use the Elk Cloner virus would be activated, infecting the personal computer and displaying a short poem beginning "Elk Cloner: The program with a personality." In 1984, Fred Cohen from the University of Southern California wrote his paper "Computer Viruses – Theory and Experiments". It was the first paper to explicitly call a self-reproducing program a "virus", a term introduced by Cohen's mentor Leonard Adleman. In 1987, Fred Cohen published a demonstration that there is no algorithm that can perfectly detect all possible viruses. Fred Cohen's theoretical compression virus was an example of a virus which was not malicious software (malware), but was putatively benevolent (well-intentioned). However, antivirus professionals do not accept the concept of "benevolent viruses", as any desired function can be implemented without involving a virus (automatic compression, for instance, is available under Windows at the choice of the user). Any virus will by definition make unauthorised changes to a computer, which is undesirable even if no damage is done or intended. The first page of Dr Solomon's Virus Encyclopaedia explains the undesirability of viruses, even those that do nothing but reproduce. An article that describes "useful virus functionalities" was published by J. B. Gunn under the title "Use of virus functions to provide a virtual APL interpreter under user control" in 1984. The first IBM PC virus in the "wild" was a boot sector virus dubbed (c)Brain, created in 1986 by Amjad Farooq Alvi and Basit Farooq Alvi in Lahore, Pakistan, reportedly to deter unauthorized copying of the software they had written. The first virus to specifically target Microsoft Windows, WinVir, was discovered in April 1992, two years after the release of Windows 3.0. The virus did not contain any Windows API calls, instead relying on DOS interrupts. A few years later, in February 1996, Australian hackers from the virus-writing crew VLAD created the Bizatch virus (also known as the "Boza" virus), which was the first known virus to target Windows 95. In late 1997 the encrypted, memory-resident stealth virus Win32.Cabanas was released—the first known virus that targeted Windows NT (it was also able to infect Windows 3.0 and Windows 9x hosts). Even home computers were affected by viruses. The first one to appear on the Commodore Amiga was a boot sector virus called SCA virus, which was detected in November 1987. Design Parts A viable computer virus must contain a search routine, which locates new files or new disks that are worthwhile targets for infection. Secondly, every computer virus must contain a routine to copy itself into the program which the search routine locates. The three main virus parts are: Infection mechanism: Also called the infection vector, this is how the virus spreads or propagates. A virus typically has a search routine, which locates new files or new disks for infection. Trigger: Also known as a logic bomb, this is the part of the virus that determines the event or condition for the malicious "payload" to be activated or delivered, such as a particular date, a particular time, the presence of another program, the capacity of the disk exceeding some limit, or a double-click that opens a particular file. 
Payload: The "payload" is the actual body or data which carries out the malicious purpose of the virus. Payload activity might be noticeable (e.g., because it causes the system to slow down or "freeze"), as most of the time the "payload" itself is the harmful activity, or sometimes non-destructive but distributive, which is called a virus hoax. Phases Virus phases is the life cycle of the computer virus, described by using an analogy to biology. This life cycle can be divided into four phases: Dormant phase: The virus program is idle during this stage. The virus program has managed to access the target user's computer or software, but during this stage, the virus does not take any action. The virus will eventually be activated by the "trigger", which states which event will execute the virus. Not all viruses have this stage. Propagation phase: The virus starts propagating, that is, multiplying and replicating itself. The virus places a copy of itself into other programs or into certain system areas on the disk. The copy may not be identical to the propagating version; viruses often "morph" or change to evade detection by IT professionals and anti-virus software. Each infected program will now contain a clone of the virus, which will itself enter a propagation phase. Triggering phase: A dormant virus moves into this phase when it is activated, and will now perform the function for which it was intended. The triggering phase can be caused by a variety of system events, including a count of the number of times that this copy of the virus has made copies of itself. The trigger may occur when an employee is terminated from their employment or after a set period of time has elapsed, in order to reduce suspicion. Execution phase: This is the actual work of the virus, where the "payload" will be released. It can be destructive, such as deleting files on disk, crashing the system, or corrupting files, or relatively harmless, such as popping up humorous or political messages on screen. Targets and replication Computer viruses infect a variety of different subsystems on their host computers and software. One manner of classifying viruses is to analyze whether they reside in binary executables (such as .EXE or .COM files), data files (such as Microsoft Word documents or PDF files), or in the boot sector of the host's hard drive (or some combination of all of these). A memory-resident virus (or simply "resident virus") installs itself as part of the operating system when executed, after which it remains in RAM from the time the computer is booted up to when it is shut down. Resident viruses overwrite interrupt handling code or other functions, and when the operating system attempts to access the target file or disk sector, the virus code intercepts the request and redirects the control flow to the replication module, infecting the target. In contrast, a non-memory-resident virus (or "non-resident virus"), when executed, scans the disk for targets, infects them, and then exits (i.e. it does not remain in memory after it is done executing). Many common applications, such as Microsoft Outlook and Microsoft Word, allow macro programs to be embedded in documents or emails, so that the programs may be run automatically when the document is opened. A macro virus (or "document virus") is a virus that is written in a macro language and embedded into these documents so that when users open the file, the virus code is executed, and can infect the user's computer. 
This is one of the reasons that it is dangerous to open unexpected or suspicious attachments in e-mails. While not opening attachments in e-mails from unknown persons or organizations can help to reduce the likelihood of contracting a virus, in some cases the virus is designed so that the e-mail appears to be from a reputable organization (e.g., a major bank or credit card company). Boot sector viruses specifically target the boot sector and/or the Master Boot Record (MBR) of the host's hard disk drive, solid-state drive, or removable storage media (flash drives, floppy disks, etc.). Boot sector viruses are most commonly transmitted via physical media: an infected floppy disk or USB flash drive connected to the computer will transfer data when its volume boot record is read, and then modify or replace the existing boot code. The next time a user tries to start the desktop, the virus will immediately load and run as part of the master boot record. Email viruses are viruses that intentionally, rather than accidentally, use the email system to spread. While virus-infected files may be accidentally sent as email attachments, email viruses are aware of email system functions. They generally target a specific type of email system (Microsoft Outlook is the most commonly used), harvest email addresses from various sources, and may append copies of themselves to all email sent, or may generate email messages containing copies of themselves as attachments. Detection To avoid detection by users, some viruses employ different kinds of deception. Some old viruses, especially on the DOS platform, make sure that the "last modified" date of a host file stays the same when the file is infected by the virus. This approach does not fool antivirus software, however, especially those programs which maintain and date cyclic redundancy checks on file changes. Some viruses can infect files without increasing their sizes or damaging the files. They accomplish this by overwriting unused areas of executable files. These are called cavity viruses. For example, the CIH virus, or Chernobyl Virus, infects Portable Executable files. Because those files have many empty gaps, the virus, which was 1 KB in length, did not add to the size of the file. Some viruses try to avoid detection by killing the tasks associated with antivirus software before it can detect them (for example, Conficker). In the 2010s, as computers and operating systems grew larger and more complex, old hiding techniques needed to be updated or replaced. Defending a computer against viruses may demand that a file system migrate towards detailed and explicit permission for every kind of file access. Read request intercepts While some kinds of antivirus software employ various techniques to counter stealth mechanisms, once the infection occurs any recourse to "clean" the system is unreliable. In Microsoft Windows operating systems, the NTFS file system is proprietary. This leaves antivirus software little alternative but to send a "read" request to Windows files that handle such requests. Some viruses trick antivirus software by intercepting its requests to the operating system. A virus can hide by intercepting the request to read the infected file, handling the request itself, and returning an uninfected version of the file to the antivirus software. The interception can occur by code injection of the actual operating system files that would handle the read request. 
Thus, antivirus software attempting to detect the virus will either not be permitted to read the infected file, or the "read" request will be served with the uninfected version of the same file. The only reliable method to avoid "stealth" viruses is to boot from a medium that is known to be "clean". Security software can then be used to check the dormant operating system files. Most security software relies on virus signatures, or it employs heuristics. Security software may also use a database of file "hashes" for Windows OS files, so the security software can identify altered files and request Windows installation media to replace them with authentic versions. In older versions of Windows, file cryptographic hash functions of Windows OS files stored in Windows—to allow file integrity/authenticity to be checked—could be overwritten so that the System File Checker would report that altered system files are authentic, so using file hashes to scan for altered files would not always guarantee finding an infection. Self-modification Most modern antivirus programs try to find virus patterns inside ordinary programs by scanning them for so-called virus signatures. Different antivirus programs will employ different search methods when identifying viruses. If a virus scanner finds such a pattern in a file, it will perform other checks to make sure that it has found the virus, and not merely a coincidental sequence in an innocent file, before it notifies the user that the file is infected. The user can then delete, or (in some cases) "clean" or "heal", the infected file. Some viruses employ techniques that make detection by means of signatures difficult but probably not impossible. These viruses modify their code on each infection. That is, each infected file contains a different variant of the virus. One method of evading signature detection is to use simple encryption to encipher (encode) the body of the virus, leaving only the encryption module and a static cryptographic key in cleartext which does not change from one infection to the next. In this case, the virus consists of a small decrypting module and an encrypted copy of the virus code. If the virus is encrypted with a different key for each infected file, the only part of the virus that remains constant is the decrypting module, which would (for example) be appended to the end. In this case, a virus scanner cannot directly detect the virus using signatures, but it can still detect the decrypting module, which still makes indirect detection of the virus possible. Since these would be symmetric keys, stored on the infected host, it is entirely possible to decrypt the final virus, but this is probably not required, since self-modifying code is such a rarity that finding some may be reason enough for virus scanners to at least "flag" the file as suspicious. An old but compact technique is the use of arithmetic operations like addition or subtraction and of logical conditions such as XORing, where each byte in a virus is XORed with a constant, so that the exclusive-or operation has only to be repeated for decryption. It is suspicious for code to modify itself, so the code to do the encryption/decryption may be part of the signature in many virus definitions. A simpler older approach did not use a key, where the encryption consisted only of operations with no parameters, like incrementing and decrementing, bitwise rotation, arithmetic negation, and logical NOT. 
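The XOR scheme described above is symmetric: applying the same operation with the same constant both enciphers and deciphers, which is why the decryptor stub can be so small and constant. The Python sketch below demonstrates only the arithmetic, on a harmless byte string, with no self-replication involved.

```python
def xor_transform(data: bytes, key: int) -> bytes:
    """XOR every byte with a one-byte constant.

    Because (b ^ k) ^ k == b, the identical routine serves as both
    the 'encryption' step and the decryptor a scanner can look for.
    """
    return bytes(b ^ key for b in data)

body = b"harmless example payload"
enciphered = xor_transform(body, 0x5A)
assert xor_transform(enciphered, 0x5A) == body   # round-trips exactly
```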
Some viruses, called polymorphic viruses, employ a means of encryption inside an executable in which the virus is encrypted under certain events, such as the virus scanner being disabled for updates or the computer being rebooted. This is called cryptovirology. Polymorphic code was the first technique that posed a serious threat to virus scanners. Just like regular encrypted viruses, a polymorphic virus infects files with an encrypted copy of itself, which is decoded by a decryption module. In the case of polymorphic viruses, however, this decryption module is also modified on each infection. A well-written polymorphic virus therefore has no parts which remain identical between infections, making it very difficult to detect directly using "signatures". Antivirus software can detect it by decrypting the viruses using an emulator, or by statistical pattern analysis of the encrypted virus body. To enable polymorphic code, the virus has to have a polymorphic engine (also called "mutating engine" or "mutation engine") somewhere in its encrypted body. See polymorphic code for technical detail on how such engines operate. Some viruses employ polymorphic code in a way that constrains the mutation rate of the virus significantly. For example, a virus can be programmed to mutate only slightly over time, or it can be programmed to refrain from mutating when it infects a file on a computer that already contains copies of the virus. The advantage of using such slow polymorphic code is that it makes it more difficult for antivirus professionals and investigators to obtain representative samples of the virus, because "bait" files that are infected in one run will typically contain identical or similar samples of the virus. This makes it more likely that the detection by the virus scanner will be unreliable, and that some instances of the virus may be able to avoid detection. To avoid being detected by emulation, some viruses rewrite themselves completely each time they are to infect new executables. Viruses that use this technique are said to employ metamorphic code. To enable metamorphism, a "metamorphic engine" is needed. A metamorphic virus is usually very large and complex. For example, W32/Simile consisted of over 14,000 lines of assembly language code, 90% of which is part of the metamorphic engine.
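The progression from plain encryption to polymorphism described above can be made concrete with a toy model (all names here are hypothetical; a real engine mutates machine code, not byte strings). In this sketch, every "infection" gets both a fresh key and a differently arranged decoder, so no fixed byte signature survives across copies, yet each copy still recovers the same payload:

```python
import random

BODY = b"toy payload"

def new_infection() -> tuple[bytes, bytes]:
    """Produce (mutated_decoder, encrypted_body) for one 'infection'."""
    key = random.randrange(1, 256)
    encrypted = bytes(b ^ key for b in BODY)
    # A real polymorphic engine regenerates equivalent decoder *code* each
    # time (junk instructions, shuffled registers); here we merely vary a
    # textual stand-in so that no two decoders are byte-identical.
    junk = bytes(random.randrange(256) for _ in range(random.randrange(4, 12)))
    decoder = junk + bytes([key])
    return decoder, encrypted

a = new_infection()
b = new_infection()
print(a != b)  # True with overwhelming probability: no shared signature

# Any copy still decrypts to the same payload using its embedded key.
decoder, encrypted = a
key = decoder[-1]
assert bytes(x ^ key for x in encrypted) == BODY
```

This is also why the text above points to emulation: running the decoder in a sandbox recovers the invariant body that the mutated wrapper conceals.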
Effects Damage is due to system failure, corruption of data, wasted computer resources, increased maintenance costs, or theft of personal information. Even though no antivirus software can uncover all computer viruses (especially new ones), computer security researchers are actively searching for new ways to enable antivirus solutions to more effectively detect emerging viruses, before they become widely distributed. A power virus is a computer program that executes specific machine code to reach the maximum CPU power dissipation (thermal energy output for the central processing units). Computer cooling apparatus are designed to dissipate power up to the thermal design power, rather than maximum power, and a power virus could cause the system to overheat if it does not have logic to stop the processor. This may cause permanent physical damage. Power viruses can be malicious, but are often suites of test software used for integration testing and thermal testing of computer components during the design phase of a product, or for product benchmarking. Stability test applications are similar programs, which have the same effect as power viruses (high CPU usage) but stay under the user's control. They are used for testing CPUs, for example, when overclocking. A spinlock in a poorly written program may cause similar symptoms if it lasts sufficiently long. Different micro-architectures typically require different machine code to hit their maximum power. Examples of such machine code do not appear to be distributed in CPU reference materials.
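A minimal stability-test-style load generator can be sketched in a few lines of Python (illustrative only; real power viruses use carefully chosen machine code, and serious burn-in utilities exercise specific execution units). It simply keeps every core busy for a bounded time and remains under the user's control, which is the defining difference from a power virus:

```python
import multiprocessing
import time

def burn(seconds: float) -> None:
    """Spin in a tight arithmetic loop to keep one core at full utilization."""
    end = time.time() + seconds
    x = 1.0001
    while time.time() < end:
        x = x * x % 1e9  # meaningless arithmetic; the point is the load

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=burn, args=(10.0,))
             for _ in range(multiprocessing.cpu_count())]
    for p in procs:
        p.start()
    for p in procs:
        p.join()  # bounded duration, started and stopped by the user
```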
Infection vectors As software is often designed with security features to prevent unauthorized use of system resources, many viruses must exploit and manipulate security bugs, which are security defects in a system or application software, to spread themselves and infect other computers. Software development strategies that produce large numbers of "bugs" will generally also produce potential exploitable "holes" or "entrances" for the virus. To replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves to executable files that may be part of legitimate programs (see code injection). If a user attempts to launch an infected program, the virus's code may be executed along with it. In operating systems that use file extensions to determine program associations (such as Microsoft Windows), the extensions may be hidden from the user by default. This makes it possible to create a file that is of a different type than it appears to the user. For example, an executable may be created and named "picture.png.exe", in which the user sees only "picture.png" and therefore assumes that this file is a digital image and most likely is safe, yet when opened, it runs the executable on the client machine. Viruses may be installed on removable media, such as flash drives. The drives may be left in a parking lot of a government building or other target, with the hope that curious users will insert the drive into a computer. In a 2015 experiment, researchers at the University of Michigan found that 45–98 percent of users would plug in a flash drive of unknown origin. The vast majority of viruses target systems running Microsoft Windows. This is due to Microsoft's large market share of desktop computer users. The diversity of software systems on a network limits the destructive potential of viruses and malware. Open-source operating systems such as Linux allow users to choose from a variety of desktop environments, packaging tools, etc., which means that malicious code targeting any of these systems will only affect a subset of all users. Many Windows users are running the same set of applications, enabling viruses to rapidly spread among Microsoft Windows systems by targeting the same exploits on large numbers of hosts. While Linux and Unix in general have always natively prevented normal users from making changes to the operating system environment without permission, Windows users are generally not prevented from making these changes, meaning that viruses can easily gain control of the entire system on Windows hosts. This difference has continued partly due to the widespread use of administrator accounts in contemporary versions like Windows XP. In 1997, researchers created and released a virus for Linux—known as "Bliss". Bliss, however, requires that the user run it explicitly, and it can only infect programs that the user has access to modify. Unlike Windows users, most Unix users do not log in as an administrator, or "root user", except to install or configure software; as a result, even if a user ran the virus, it could not harm their operating system. The Bliss virus never became widespread, and remains chiefly a research curiosity. Its creator later posted the source code to Usenet, allowing researchers to see how it worked. Before computer networks became widespread, most viruses spread on removable media, particularly floppy disks. In the early days of the personal computer, many users regularly exchanged information and programs on floppies. Some viruses spread by infecting programs stored on these disks, while others installed themselves into the disk boot sector, ensuring that they would be run when the user booted the computer from the disk, usually inadvertently. Personal computers of the era would attempt to boot first from a floppy if one had been left in the drive. Until floppy disks fell out of use, this was the most successful infection strategy and boot sector viruses were the most common in the "wild" for many years. Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers and the resultant increase in bulletin board system (BBS) use, modem use, and software sharing. Bulletin board–driven software sharing contributed directly to the spread of Trojan horse programs, and viruses were written to infect popularly traded software. Shareware and bootleg software were equally common vectors for viruses on BBSs. Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by other computers. Macro viruses have become common since the mid-1990s. Most of these viruses are written in the scripting languages for Microsoft programs such as Microsoft Word and Microsoft Excel and spread throughout Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also available for Mac OS, most could also spread to Macintosh computers. Although most of these viruses did not have the ability to send infected email messages, those that did took advantage of the Microsoft Outlook Component Object Model (COM) interface. Some old versions of Microsoft Word allow macros to replicate themselves with additional blank lines. If two macro viruses simultaneously infect a document, the combination of the two, if also self-replicating, can appear as a "mating" of the two and would likely be detected as a virus unique from the "parents". A virus may also send a web address link as an instant message to all the contacts (e.g., friends' and colleagues' e-mail addresses) stored on an infected machine. If the recipient, thinking the link is from a friend (a trusted source), follows the link to the website, the virus hosted at the site may be able to infect this new computer and continue propagating. Viruses that spread using cross-site scripting were first reported in 2002, and were academically demonstrated in 2005. There have been multiple instances of cross-site scripting viruses in the "wild", exploiting websites such as MySpace (with the Samy worm) and Yahoo!. Countermeasures In 1989, the ADAPSO Software Industry Division published Dealing With Electronic Vandalism, in which it discussed the risk of data loss together with "the added risk of losing customer confidence."
Many users install antivirus software that can detect and eliminate known viruses when the computer attempts to download or run the executable file (which may be distributed as an email attachment, or on USB flash drives, for example). Some antivirus software blocks known malicious websites that attempt to install malware. Antivirus software does not change the underlying capability of hosts to transmit viruses. Users must update their software regularly to patch security vulnerabilities ("holes"). Antivirus software also needs to be regularly updated to recognize the latest threats. This is because malicious hackers and other individuals are always creating new viruses. The German AV-TEST Institute publishes evaluations of antivirus software for Windows and Android. Examples of Microsoft Windows antivirus and anti-malware software include the optional Microsoft Security Essentials (for Windows XP, Vista and Windows 7) for real-time protection, the Windows Malicious Software Removal Tool (now included with Windows (Security) Updates on "Patch Tuesday", the second Tuesday of each month), and Windows Defender (an optional download in the case of Windows XP). Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use). Some such free programs are almost as good as commercial competitors. Common security vulnerabilities are assigned CVE IDs and listed in the US National Vulnerability Database. Secunia PSI is an example of software, free for personal use, that will check a PC for vulnerable out-of-date software, and attempt to update it. Ransomware and phishing scam alerts appear as press releases on the Internet Crime Complaint Center noticeboard. Ransomware is a virus that posts a message on the user's screen saying that the screen or system will remain locked or unusable until a ransom payment is made. Phishing is a deception in which the malicious individual pretends to be a friend, computer security expert, or other benevolent individual, with the goal of convincing the targeted individual to reveal passwords or other personal information. Other commonly used preventive measures include timely operating system updates, software updates, careful Internet browsing (avoiding shady websites), and installation of only trusted software. Certain browsers flag sites that have been reported to Google and that have been confirmed as hosting malware by Google. There are two common methods that an antivirus software application uses to detect viruses, as described in the antivirus software article. The first, and by far the most common, method of virus detection is using a list of virus signature definitions. This works by examining the content of the computer's memory (its Random Access Memory (RAM) and boot sectors) and the files stored on fixed or removable drives (hard drives, floppy drives, or USB flash drives), and comparing those files against a database of known virus "signatures". Virus signatures are just strings of code that are used to identify individual viruses; for each virus, the antivirus designer tries to choose a unique signature string that will not be found in a legitimate program. Different antivirus programs use different "signatures" to identify viruses. The disadvantage of this detection method is that users are only protected from viruses that are detected by signatures in their most recent virus definition update, and not protected from new viruses (see "zero-day attack").
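A naive version of such signature matching can be sketched in a few lines of Python (the signatures here are made-up byte strings, not real ones; production scanners use optimized multi-pattern algorithms such as Aho-Corasick, plus unpacking and emulation):

```python
import os

SIGNATURES = {
    "ExampleVirus.A": bytes.fromhex("deadbeef00c0ffee"),
    "ExampleVirus.B": b"\x90\x90\xeb\xfe",
}

def scan_file(path: str) -> list[str]:
    """Return the names of all signatures found in the file's raw bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, sig in SIGNATURES.items() if sig in data]

# Example: flag a file that happens to contain one of the byte patterns.
with open("sample.bin", "wb") as f:
    f.write(b"harmless header" + bytes.fromhex("deadbeef00c0ffee"))
print(scan_file("sample.bin"))  # ['ExampleVirus.A']
os.remove("sample.bin")
```

The zero-day weakness is visible directly in this sketch: a file containing none of the listed byte strings is reported clean, no matter what it actually does.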
A second method to find viruses is to use a heuristic algorithm based on common virus behaviors. This method can detect new viruses for which antivirus security firms have yet to define a "signature", but it also gives rise to more false positives than using signatures. False positives can be disruptive, especially in a commercial environment, because they may lead to a company instructing staff not to use the company computer system until IT services have checked the system for viruses. This can slow down productivity for regular workers.
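One widely used heuristic signal (shown here as a simplified sketch, not any vendor's actual algorithm) is byte entropy: encrypted or packed executables have nearly uniform byte distributions, so unusually high entropy in a file is treated as grounds for suspicion, at the cost of also flagging legitimately compressed data:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: 0.0 (constant) up to 8.0 (uniform)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_packed(data: bytes, threshold: float = 7.2) -> bool:
    # High entropy suggests encryption or packing (a heuristic, not proof).
    return shannon_entropy(data) > threshold

print(shannon_entropy(b"A" * 1000))            # ~0.0, very regular
print(shannon_entropy(bytes(range(256)) * 4))  # 8.0, uniform
```

The false-positive trade-off mentioned above shows up immediately: ZIP archives and JPEG images also score near 8 bits per byte.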
Recovery strategies and methods One may reduce the damage done by viruses by making regular backups of data (and the operating systems) on different media that are either kept unconnected to the system (most of the time, as with a hard drive), read-only, or not accessible for other reasons, such as using different file systems. This way, if data is lost through a virus, one can start again using the backup (which will hopefully be recent). If a backup session on optical media like CD and DVD is closed, it becomes read-only and can no longer be affected by a virus (so long as a virus or infected file was not copied onto the CD/DVD). Likewise, an operating system on a bootable CD can be used to start the computer if the installed operating systems become unusable. Backups on removable media must be carefully inspected before restoration. The Gammima virus, for example, propagates via removable flash drives. Many websites run by antivirus software companies provide free online virus scanning, with limited "cleaning" facilities (after all, the purpose of the websites is to sell antivirus products and services). Some websites—like Google subsidiary VirusTotal.com—allow users to upload one or more suspicious files to be scanned and checked by one or more antivirus programs in one operation. Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use). Microsoft offers an optional free antivirus utility called Microsoft Security Essentials, a Windows Malicious Software Removal Tool that is updated as part of the regular Windows update regime, and an older optional anti-malware (malware removal) tool Windows Defender that has been upgraded to an antivirus product in Windows 8. Some viruses disable System Restore and other important Windows tools such as Task Manager and CMD. An example of a virus that does this is CiaDoor. Many such viruses can be removed by rebooting the computer, entering Windows "safe mode" with networking, and then using system tools or Microsoft Safety Scanner. System Restore on Windows Me, Windows XP, Windows Vista and Windows 7 can restore the registry and critical system files to a previous checkpoint. Often a virus will cause a system to "hang" or "freeze", and a subsequent hard reboot will render a system restore point from the same day corrupted. Restore points from previous days should work, provided the virus is not designed to corrupt the restore files and does not exist in previous restore points. Microsoft's System File Checker (improved in Windows 7 and later) can be used to check for, and repair, corrupted system files. Restoring an earlier "clean" (virus-free) copy of the entire partition from a cloned disk, a disk image, or a backup copy is one solution—restoring an earlier backup disk "image" is relatively simple to do, usually removes any malware, and may be faster than "disinfecting" the computer—as is reinstalling and reconfiguring the operating system and programs from scratch, as described below, then restoring user preferences. Reinstalling the operating system is another approach to virus removal. It may be possible to recover copies of essential user data by booting from a live CD, or connecting the hard drive to another computer and booting from the second computer's operating system, taking great care not to infect that computer by executing any infected programs on the original drive. The original hard drive can then be reformatted and the OS and all programs installed from original media. Once the system has been restored, precautions must be taken to avoid reinfection from any restored executable files. Popular culture The first known description of a self-reproducing program in fiction is in the 1970 short story The Scarred Man by Gregory Benford, which describes a computer program called VIRUS which, when installed on a computer with telephone modem dialing capability, randomly dials phone numbers until it hits a modem that is answered by another computer, and then attempts to program the answering computer with its own program, so that the second computer will also begin dialing random numbers, in search of yet another computer to program. The program rapidly spreads exponentially through susceptible computers and can only be countered by a second program called VACCINE. The idea was explored further in two 1972 novels, When HARLIE Was One by David Gerrold and The Terminal Man by Michael Crichton, and became a major theme of the 1975 novel The Shockwave Rider by John Brunner. The 1973 Michael Crichton sci-fi movie Westworld made an early mention of the concept of a computer virus, as a central plot theme that causes androids to run amok. Alan Oppenheimer's character summarizes the problem by stating that "...there's a clear pattern here which suggests an analogy to an infectious disease process, spreading from one...area to the next." The replies include: "Perhaps there are superficial similarities to disease" and, "I must confess I find it difficult to believe in a disease of machinery." Other malware The term "virus" is also misused by extension to refer to other types of malware. "Malware" encompasses computer viruses along with many other forms of malicious software, such as computer "worms", ransomware, spyware, adware, trojan horses, keyloggers, rootkits, bootkits, malicious Browser Helper Objects (BHOs), and other malicious software. The majority of active malware threats are trojan horse programs or computer worms rather than computer viruses. The term computer virus, formally defined by Fred Cohen in 1984, is in this sense a misnomer. Viruses often perform some type of harmful activity on infected host computers, such as acquisition of hard disk space or central processing unit (CPU) time, accessing and stealing private information (e.g., credit card numbers, debit card numbers, phone numbers, names, email addresses, passwords, bank information, house addresses, etc.), corrupting data, displaying political, humorous or threatening messages on the user's screen, spamming their e-mail contacts, logging their keystrokes, or even rendering the computer useless.
However, not all viruses carry a destructive "payload" or attempt to hide themselves—the defining characteristic of viruses is that they are self-replicating computer programs that modify other software without user consent by injecting themselves into those programs, similar to a biological virus which replicates within living cells. See also Botnet Comparison of computer viruses Computer Fraud and Abuse Act Computer insecurity Crimeware Core War Cryptovirology Keystroke logging Malware Spam (electronic) Technical support scam Trojan horse (computing) Virus hoax Windows 7 File Recovery Windows Action Center Zombie (computer science) References Further reading External links Microsoft Security Portal US Govt CERT (Computer Emergency Readiness Team) site 'Computer Viruses – Theory and Experiments' – The original paper by Fred Cohen, 1984 Hacking Away at the Counterculture by Andrew Ross (On hacking, 1990) Virus Internet security Deception Security breaches Types of malware
Operating System (OS)
692
IBM DISOSS IBM Distributed Office Support System, or DISOSS, is a centralized document distribution and filing application for IBM's mainframe computers running the MVS and VSE operating systems. DISOSS runs under both the CICS transaction processing system and the IMS/DS transaction processing system, and later versions use the SNADS architecture of peer-to-peer communication for distributed services. Heterogeneous office systems connect through DISOSS to the OfficeVision/MVS series. The IBM systems are "OV/MVS, OV/VM, OV/400, PS/CICS, PS/TSO, PS/PC, PROFS, and other Mail Systems Supporting SNADS and DIA. Only a single copy of DISOSS needs to be installed somewhere in the network to accomplish the connection." A number of other vendors such as Digital Equipment Corporation, Hewlett-Packard, and Data General provided links to DISOSS. Functions DISOSS provides a document library function with search and retrieval controlled by security based on user ID, along with document translation based on Document Interchange Architecture (DIA) and Document Content Architecture (DCA). The different systems that use DISOSS for document exchange and distribution vary in their implementation of DCA, and thus the end results of some combinations are only final-form text (FFT) documents rather than revisable-form text (RFT). It supports document exchange between various IBM and non-IBM office devices including the IBM Displaywriter System, the IBM 5520, the IBM 8100/DOSF, IBM Scanmaster, and personal computers and word processors. It offers format transformation and printing services, and provides a rich application programming interface (API) and interfaces with other office products such as IBM OfficeVision. History DISOSS was announced in 1980, and "was designated a strategic IBM product in 1982." It was a key part of IBM Systems Application Architecture (SAA), but suffered from a reputation as "difficult to understand" and "a resource hog." DISOSS continues to be actively marketed and supported as of 2012. Version 1 of DISOSS was introduced in June 1980; Colgate-Palmolive was one of the first sites to implement DISOSS version 1, and reported dissatisfaction with the poor quality of the documentation and with software bugs. IBM released version 2 in 1982, in which IBM claimed to have resolved the issues which version 1 users had experienced. DISOSS was implemented by the city government of Long Beach, California during 1983–1984. See also PROFS IBM OfficeVision References IBM Corporation: Document Interchange with DISOSS Version 3 (1983) External links DISOSS/370 V3 CONCEPTS MVS VSE (GC30-3434-00) DISOSS
Operating System (OS)
693
Software system safety In software engineering, software system safety optimizes system safety in the design, development, use, and maintenance of software systems and their integration with safety-critical hardware systems in an operational environment. Overview Software system safety is a subset of system safety and systems engineering and is synonymous with the software engineering aspects of functional safety. As part of the total safety and software development program, software cannot be allowed to function independently of the total effort. Both simple and highly integrated multiple systems are experiencing an extraordinary growth in the use of computers and software to monitor and/or control safety-critical subsystems or functions. A software specification error, design flaw, or the lack of generic safety-critical requirements can contribute to or cause a system failure or erroneous human decision. To achieve an acceptable level of safety for software used in critical applications, software system safety engineering must be given primary emphasis early in the requirements definition and system conceptual design process. Safety-critical software must then receive continuous management emphasis and engineering analysis throughout the development and operational lifecycles of the system. Software with safety-critical functionality must be thoroughly verified with objective analysis. Functional Hazard Analyses (FHA) are often conducted early on - in parallel with or as part of systems engineering functional analyses - to determine the safety-critical functions (SCF) of the systems for further analysis and verification. Software system safety is directly related to the more critical design aspects and safety attributes in software and system functionality, whereas software quality attributes are inherently different and require standard scrutiny and development rigor. Development Assurance Levels (DAL) and the associated Level of Rigor (LOR) constitute a graded approach to software quality and software design assurance, a prerequisite providing confidence that a suitable software process has been followed. LOR concepts and standards such as DO-178C are not a substitute for software safety. Software safety per IEEE STD-1228 and MIL-STD-882E focuses on ensuring explicit safety requirements are met and verified using functional approaches from a safety requirements analysis and test perspective. Software safety hazard analyses required for more complex systems, where software is controlling critical functions, generally fall into the following sequential categories and are conducted in phases as part of the system safety or safety engineering process: software safety requirements analysis; software safety design analyses (top-level, detailed design, and code level); software safety test analysis; and software safety change analysis. Once these "functional" software safety analyses are completed, the software engineering team will know where to place safety emphasis and what functional threads, functional paths, domains and boundaries to focus on when designing in software safety attributes, to ensure correct functionality, to detect malfunctions, failures, and faults, and to implement a host of mitigation strategies to control hazards. Software security and various software protection technologies are similar to software safety attributes in the design, mitigating various types of threats, vulnerabilities, and risks.
Deterministic software is sought in the design by verifying correct and predictable behavior at the system level. Goals Functional safety is achieved through engineering development to ensure correct execution and behavior of software functions as intended. Safety, consistent with mission requirements, is designed into the software in a timely, cost-effective manner. On complex systems involving many interactions, safety-critical functionality should be identified and thoroughly analyzed before deriving hazards and design safeguards for mitigations. Safety-critical function lists and preliminary hazard lists should be determined proactively and should influence the requirements that will be implemented in software. Contributing factors and root causes of faults and resultant hazards associated with the system and its software are identified, evaluated, and eliminated, or the risk is reduced to an acceptable level, throughout the lifecycle. Reliance on administrative procedures for hazard control is minimized. The number and complexity of safety-critical interfaces is minimized. The number and complexity of safety-critical computer software components is minimized. Sound human engineering principles are applied to the design of the software-user interface to minimize the probability of human error. Failure modes, including hardware, software, human and system, are addressed in the design of the software. Sound software engineering practices and documentation are used in the development of the software. Safety issues and safety attributes are addressed as part of the software testing effort at all levels. Software is designed for human-machine interface, ease of maintenance, and modification or enhancement. Software with safety-critical functionality must be thoroughly verified with objective analysis, and preferably test evidence, that all safety requirements have been met per established criteria. See also Software assurance IEC 61508 - Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems ISO 26262 - Road vehicles – Functional safety Functional Safety Software quality System accident References Software quality
Operating System (OS)
694
Xerox Alto The Xerox Alto is the first computer designed from its inception to support an operating system based on a graphical user interface (GUI), later using the desktop metaphor. The first machines were introduced on 1 March 1973, a decade before mass-market GUI machines became available. The Alto is contained in a relatively small cabinet and uses a custom central processing unit (CPU) built from multiple SSI and MSI integrated circuits. Each machine cost tens of thousands of dollars despite its status as a personal computer. Only small numbers were built initially, but by the late 1970s, about 1,000 were in use at various Xerox laboratories, and about another 500 in several universities. Total production was about 2,000 systems. The Alto became well known in Silicon Valley and its GUI was increasingly seen as the future of computing. In 1979, Steve Jobs arranged a visit to Xerox PARC, during which Apple Computer personnel would receive demonstrations of Xerox technology in exchange for Xerox being able to purchase stock options in Apple. After two visits to see the Alto, Apple engineers used the concepts to introduce the Apple Lisa and Macintosh systems. Xerox eventually commercialized a heavily modified version of the Alto concepts as the Xerox Star, first introduced in 1981. A complete office system including several workstations, storage and a laser printer cost as much as $100,000, and like the Alto, the Star had little direct impact on the market. History The first computer with a graphical operating system, the Alto built on earlier graphical interface designs. It was conceived in 1972 in a memo written by Butler Lampson, inspired by the oN-Line System (NLS) developed by Douglas Engelbart and Dustin Lindberg at SRI International (SRI). Of further influence was the PLATO education system developed at the Computer-based Education Research Laboratory at the University of Illinois. The Alto was designed mostly by Charles P. Thacker. Industrial Design and manufacturing was sub-contracted to Xerox, whose Special Programs Group team included Doug Stewart as Program Manager, Abbey Silverstone Operations, Bob Nishimura, Industrial Designer. An initial run of 30 units was produced by Xerox El Segundo (Special Programs Group), working with John Ellenby at PARC and Doug Stewart and Abbey Silverstone at El Segundo, who were responsible for re-designing the Alto's electronics. Due to the success of the pilot run, the team went on to produce approximately 2,000 units over the next ten years. Several Xerox Alto chassis are now on display at the Computer History Museum in Mountain View, California, one is on display at the Computer Museum of America in Roswell, Georgia, and several are in private hands. Running systems are on display at the System Source Computer Museum in Hunt Valley, Maryland. Charles P. Thacker was awarded the 2009 Turing Award of the Association for Computing Machinery on March 9, 2010, for his pioneering design and realization of the Alto. The 2004 Charles Stark Draper Prize was awarded to Thacker, Alan C. Kay, Butler Lampson, and Robert W. Taylor for their work on Alto. On October 21, 2014, Xerox Alto's source code and other resources were released from the Computer History Museum. Architecture The following description is based mostly on the August 1976 Alto Hardware Manual by Xerox PARC. Alto uses a microcoded design, but unlike many computers, the microcode engine is not hidden from the programmer in a layered design. 
Applications such as Pinball take advantage of this to accelerate performance. The Alto has a bit-slice arithmetic logic unit (ALU) based on the Texas Instruments 74181 chip, a ROM control store with a writable control store extension and has 128 (expandable to 512) kB of main memory organized in 16-bit words. Mass storage is provided by a hard disk drive that uses a removable 2.5 MB one-platter cartridge (Diablo Systems, a company Xerox later bought) similar to those used by the IBM 2310. The base machine and one disk drive are housed in a cabinet about the size of a small refrigerator; one more disk drive can be added via daisy-chaining. Alto both blurred and ignored the lines between functional elements. Rather than a distinct central processing unit with a well-defined electrical interface (e.g., system bus) to storage and peripherals, the Alto ALU interacts directly with hardware interfaces to memory and peripherals, driven by microinstructions that are output from the control store. The microcode machine supports up to 16 cooperative multitasking tasks, each with fixed priority. The emulator task executes the normal instruction set to which most applications are written; that instruction set is similar to, but not the same as, that of a Data General Nova. Other tasks serve the display, memory refresh, disk, network, and other I/O functions. As an example, the bitmap display controller is little more than a 16-bit shift register; microcode moves display refresh data from main memory to the shift register, which serializes it into a display of pixels corresponding to the ones and zeros of the memory data. Ethernet is likewise supported by minimal hardware, with a shift register that acts bidirectionally to serialize output words and deserialize input words. Its speed was designed to be 3 Mbit/s because the microcode engine could not go faster and continue to support the video display, disk activity and memory refresh. Unlike most minicomputers of the era, Alto does not support a serial terminal for user interface. Apart from an Ethernet connection, the Alto's only common output device is a bi-level (black and white) cathode ray tube (CRT) display with a tilt-and-swivel base, mounted in portrait orientation rather than the more common "landscape" orientation. Its input devices are a custom detachable keyboard, a three-button mouse, and an optional 5-key chorded keyboard (chord keyset). The last two items had been introduced by SRI's On-Line System; while the mouse was an instant success among Alto users, the chord keyset never became popular. In the early mice, the buttons were three narrow bars, arranged top to bottom rather than side to side; they were named after their colors in the documentation. The motion was sensed by two wheels perpendicular to each other. These were soon replaced with a ball-type mouse, which was invented by Ronald E. Rider and developed by Bill English. These were photo-mechanical mice, first using white light, and then infrared (IR), to count the rotations of wheels inside the mouse. The keyboard is interesting in that each key is represented as a separate bit in a set of memory locations. As a result, it is possible to read multiple key presses concurrently. This trait can be used to alter from where on the disk the Alto boots. The keyboard value is used as the sector address on the disk to boot from, and by holding specific keys down while pressing the boot button, different microcode and operating systems can be loaded. 
This gave rise to the expression "nose boot", where the keys needed to boot a test OS release required more fingers than one could come up with. Nose boots were made obsolete by the move2keys program, which shifted files on the disk so that a specified key sequence could be used. Several other I/O devices were developed for the Alto, including a TV camera, the Hy-Type daisywheel printer and a parallel port, although these were quite rare. The Alto could also control external disk drives to act as a file server. This was a common application for the machine. Software Early software for the Alto was written in the programming language BCPL, and later in Mesa, which was not widely used outside PARC but influenced several later languages, such as Modula. The Alto used an early version of ASCII which lacked the underscore character, instead having the left-arrow character used in ALGOL 60 and many derivatives for the assignment operator; this peculiarity may have been the source of the CamelCase style for compound identifiers. Altos were also microcode-programmable by users. The Alto helped popularize the use of the raster graphics model for all output, including text and graphics. It also introduced the concept of the bit block transfer operation (bit blit, BitBLT) as the fundamental programming interface to the display (see the sketch after this section). Despite its small memory size, many innovative programs were written for the Alto, including: the first WYSIWYG typesetting document preparation systems, Bravo and Gypsy; the Laurel email tool and its successor, Hardy; the Sil vector graphics editor, used mainly for logic circuits, printed circuit boards, and other technical diagrams; the Markup bitmap editor (an early paint program); the Draw graphical editor using lines and splines; the first WYSIWYG integrated circuit editor based on the work of Lynn Conway, Carver Mead, and the Mead and Conway revolution; the first versions of the Smalltalk environment; Interlisp; and one of the first network-based multi-person video games (Alto Trek by Gene Ball). There was no spreadsheet or database software. The first electronic spreadsheet program, VisiCalc, did not arise until 1979.
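To illustrate what a bit block transfer does, here is a minimal Python sketch of the general idea (not Alto microcode; the Alto's BitBLT also supported combination rules such as AND, OR, and XOR with the destination). The operation copies a rectangular region of one bitmap into another:

```python
def bitblt(src, dst, sx, sy, dx, dy, w, h):
    """Copy a w-by-h pixel rectangle from src to dst (bitmaps as lists of rows)."""
    for row in range(h):
        for col in range(w):
            dst[dy + row][dx + col] = src[sy + row][sx + col]

# A glyph stamped onto a blank "screen": the same primitive draws text,
# moves windows, and scrolls, which is why it could serve as the
# fundamental display interface.
glyph = [[1, 1, 1],
         [1, 0, 1],
         [1, 1, 1]]
screen = [[0] * 8 for _ in range(4)]
bitblt(glyph, screen, 0, 0, 2, 1, 3, 3)
for r in screen:
    print("".join("#" if p else "." for p in r))
```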
Diffusion and evolution Technically, the Alto was a small minicomputer, but it could be considered a personal computer in the sense that it was used by one person sitting at a desk, in contrast with the mainframe computers and other minicomputers of the era. It was arguably "the first personal computer", although this title is disputed by others. More significantly (and perhaps less controversially), it may be considered one of the first workstation systems in the style of single-user machines such as the Apollo, based on the Unix operating system, and systems by Symbolics, designed to natively run Lisp as a development environment. In 1976 to 1977 the Swiss computer pioneer Niklaus Wirth spent a sabbatical at PARC and was excited by the Alto. Unable to bring one of the Alto systems back to Europe, Wirth decided to build a new system from scratch, and with his group he designed the Lilith. The Lilith was ready to use around 1980, quite some time before the Apple Lisa and Macintosh were released. Around 1985 Wirth started a complete redesign of the Lilith under the name "Project Oberon". In 1978 Xerox donated 50 Altos to the Massachusetts Institute of Technology, Stanford University, Carnegie Mellon University, and the University of Rochester. The National Bureau of Standards's Institute for Computer Sciences in Gaithersburg, Maryland received one Alto in late 1978 along with Xerox Interim File System (IFS) file servers and Dover laser printers. These machines were the inspiration for the ETH Zürich Lilith and Three Rivers Company PERQ workstations, and the Stanford University Network (SUN) workstation, which was eventually marketed by a spin-off company, Sun Microsystems. The Apollo/Domain workstation was heavily influenced by the Alto. Following the acquisition of an Alto, the White House information systems department sought to lead federal computer suppliers in its direction. The Executive Office of the President of the United States (EOP) issued a request for proposal for a computer system to replace the aging Office of Management and Budget (OMB) budget system, using Alto-like workstations connected to an IBM-compatible mainframe. The request was eventually withdrawn because no mainframe producer could supply such a configuration. In December 1979, Apple Computer's co-founder Steve Jobs visited Xerox PARC, where he was shown the Smalltalk-80 object-oriented programming environment, networking, and most importantly the WYSIWYG, mouse-driven graphical user interface provided by the Alto. At the time, he didn't recognize the significance of the first two, but was excited by the last one, promptly integrating it into Apple's products; first into the Lisa and then into the Macintosh, attracting several key researchers to work in his company. In 1980–1981, Xerox Altos were used by engineers at PARC and at the Xerox System Development Department to design the Xerox Star workstations. Xerox and the Alto Xerox was slow to realize the value of the technology that had been developed at PARC. The Xerox computer division formed by the corporate acquisition of Scientific Data Systems (SDS, later XDS) in the late 1960s had no involvement with PARC. PARC built its own emulation of the Digital Equipment Corporation PDP-10 named the MAXC. The MAXC was PARC's gateway machine to the ARPANET. The firm was reluctant to get into the computer business again with commercially untested designs, although many of the philosophies would ship in later products. Byte magazine, writing in 1981, observed that the Alto was never intended for commercial sale, but that many of the personal computers of the future would be designed with knowledge gained from its development. After the Alto, PARC developed more powerful workstations (none intended as products) informally termed "the D-machines": Dandelion (least powerful, but the only one to be made a product in some form); Dolphin; Dorado (most powerful; an emitter-coupled logic (ECL) machine); and hybrids like the Dandel-Iris. Before the advent of personal computers such as the Apple II in 1977 and the IBM Personal Computer (IBM PC) in 1981, the computer market was dominated by costly mainframes and minicomputers equipped with dumb terminals that time-shared the processing time of the central computer. Through the 1970s, Xerox showed no interest in the work done at PARC. When Xerox finally entered the PC market with the Xerox 820, it pointedly rejected the Alto design and opted instead for a very conventional model, a CP/M-based machine with the then-standard 80 by 24 character-only monitor and no mouse. With the help of PARC researchers, Xerox eventually developed the Xerox Star, based on the Dandelion workstation, and later the cost-reduced Star, the 6085 office system, based on the Daybreak workstation.
These machines, based on the "Wildflower" architecture described in a paper by Butler Lampson, incorporated most of the Alto innovations, including the graphical user interface with icons, windows, folders, Ethernet-based local networking, and network-based laser printer services. Xerox only realized its mistake in the early 1980s, after Apple's Macintosh revolutionized the PC market via its bitmap display and mouse-centered interface, both of which were copied from the Alto. While the Xerox Star series was a relative commercial success, it came too late. The expensive Xerox workstations could not compete against the cheaper GUI-based workstations that arose in the wake of the first Macintosh, and Xerox eventually quit the workstation market for good. See also NLS (computer system) Mousepad Alan Kay Adele Goldberg (computer scientist) Apple Lisa References Notes Alto User's Handbook, Xerox PARC, September 1979 Further reading External links Xerox Alto documents at bitsavers.org At the DigiBarn museum Xerox Alto Source Code - CHM (computerhistory.org) Xerox Alto source code (computerhistory.org) "Hello world" in the BCPL language on the Xerox Alto simulator (righto.com) The Alto in 1974 video A lecture video of Butler Lampson describing Xerox Alto in depth. (length: 2h45m) A microcode-level Xerox Alto simulator ContrAlto Xerox Alto emulator brainsqueezer/salto_simulator: SALTO - Xerox Alto I/II Simulator (github.com) SALTO-Xerox Alto emulator (direct download) ContrAltoJS Xerox Alto Online Computer-related introductions in 1973 Alto Personal computers Computer workstations 16-bit computers
Operating System (OS)
695
MINIMOP MINIMOP was an operating system which ran on the International Computers Limited (ICL) 1900 series of computers. MINIMOP provided an on-line, time-sharing environment (Multiple Online Programming, or MOP in ICL terminology), and typically ran alongside George 2 running batch jobs. MINIMOP was named to reflect its role as an alternative to the MOP facilities of George 3, which required a more powerful machine. MINIMOP would run on all 1900 processors apart from the low-end 1901 and 1902, and required only 16K words of memory and two 4- or 8-million-character magnetic disks. Each user was provided with a fixed-size file to hold his data, which was subdivided into a number of variable-sized subfiles. The command language could be extended with simple macros. Implementation MINIMOP was implemented as a multithreaded (sub-programmed in ICL terminology) user-level program running on the standard executive (low-level operating system) of the ICL 1900. The "program under control" facilities of the executive were used to run user programs under MINIMOP. All user I/O operations were trapped by MINIMOP and emulated rather than accessing real peripherals. As memory was at a premium, user programs would be swapped out of memory whenever they needed to wait (for input or output) or when they reached the end of their time slice. MAXIMOP Queen Mary College, London, now Queen Mary, University of London, later developed MAXIMOP, an improved system largely compatible with MINIMOP. The ICL Universities Sales Region started distributing MAXIMOP, and it was used at over 100 sites. References ICL operating systems
Operating System (OS)
696
Hyper-V Microsoft Hyper-V, codenamed Viridian, and briefly known before its release as Windows Server Virtualization, is a native hypervisor; it can create virtual machines on x86-64 systems running Windows. Starting with Windows 8, Hyper-V superseded Windows Virtual PC as the hardware virtualization component of the client editions of Windows NT. A server computer running Hyper-V can be configured to expose individual virtual machines to one or more networks. Hyper-V was first released with Windows Server 2008, and has been available without additional charge since Windows Server 2012 and Windows 8. A standalone Windows Hyper-V Server is free, but with command-line interface only. History A beta version of Hyper-V was shipped with certain x86-64 editions of Windows Server 2008. The finalized version was released on June 26, 2008 and was delivered through Windows Update. Hyper-V has since been released with every version of Windows Server. Microsoft provides Hyper-V through two channels: Part of Windows: Hyper-V is an optional component of Windows Server 2008 and later. It is also available in x64 SKUs of Pro and Enterprise editions of Windows 8, Windows 8.1 and Windows 10. Hyper-V Server: It is a freeware edition of Windows Server with limited functionality and Hyper-V component. Hyper-V Server Hyper-V Server 2008 was released on October 1, 2008. It consists of Windows Server 2008 Server Core and Hyper-V role; other Windows Server 2008 roles are disabled, and there are limited Windows services. Hyper-V Server 2008 is limited to a command-line interface used to configure the host OS, physical hardware, and software. A menu driven CLI interface and some freely downloadable script files simplify configuration. In addition, Hyper-V Server supports remote access via Remote Desktop Connection. However, administration and configuration of the host OS and the guest virtual machines is generally done over the network, using either Microsoft Management Consoles on another Windows computer or System Center Virtual Machine Manager. This allows much easier "point and click" configuration, and monitoring of the Hyper-V Server. Hyper-V Server 2008 R2 (an edition of Windows Server 2008 R2) was made available in September 2009 and includes Windows PowerShell v2 for greater CLI control. Remote access to Hyper-V Server requires CLI configuration of network interfaces and Windows Firewall. Also using a Windows Vista PC to administer Hyper-V Server 2008 R2 is not fully supported. Architecture Hyper-V implements isolation of virtual machines in terms of a partition. A partition is a logical unit of isolation, supported by the hypervisor, in which each guest operating system executes. There must be at least one parent partition in a hypervisor instance, running a supported version of Windows Server (2008 and later). The virtualization software runs in the parent partition and has direct access to the hardware devices. The parent partition creates child partitions which host the guest OSs. A parent partition creates child partitions using the hypercall API, which is the application programming interface exposed by Hyper-V. A child partition does not have access to the physical processor, nor does it handle its real interrupts. Instead, it has a virtual view of the processor and runs in Guest Virtual Address, which, depending on the configuration of the hypervisor, might not necessarily be the entire virtual address space. 
Depending on VM configuration, Hyper-V may expose only a subset of the processors to each partition. The hypervisor handles the interrupts to the processor, and redirects them to the respective partition using a logical Synthetic Interrupt Controller (SynIC). Hyper-V can hardware-accelerate the address translation of guest virtual address spaces by using second-level address translation provided by the CPU, referred to as EPT on Intel and RVI (formerly NPT) on AMD. Child partitions do not have direct access to hardware resources, but instead have a virtual view of the resources, in terms of virtual devices. Any request to the virtual devices is redirected via the VMBus to the devices in the parent partition, which will manage the requests. The VMBus is a logical channel which enables inter-partition communication. The response is also redirected via the VMBus. If the devices in the parent partition are also virtual devices, it will be redirected further until it reaches the parent partition, where it will gain access to the physical devices. Parent partitions run a Virtualization Service Provider (VSP), which connects to the VMBus and handles device access requests from child partitions. Child partition virtual devices internally run a Virtualization Service Client (VSC), which redirects the requests to VSPs in the parent partition via the VMBus. This entire process is transparent to the guest OS. Virtual devices can also take advantage of a Windows Server Virtualization feature, named Enlightened I/O, for storage, networking and graphics subsystems, among others. Enlightened I/O is a specialized virtualization-aware implementation of high-level communication protocols, like SCSI, that allows bypassing any device emulation layer and takes advantage of VMBus directly. This makes the communication more efficient, but requires the guest OS to support Enlightened I/O. Currently only the following operating systems support Enlightened I/O, allowing them therefore to run faster as guest operating systems under Hyper-V than other operating systems that need to use slower emulated hardware: Windows Server 2008 and later; Windows Vista and later; Linux with a 3.4 or later kernel; and FreeBSD. System requirements The Hyper-V role is only available in the x86-64 variants of Standard, Enterprise and Datacenter editions of Windows Server 2008 and later, as well as the Pro, Enterprise and Education editions of Windows 8 and later. On Windows Server, it can be installed regardless of whether the installation is a full or core installation. In addition, Hyper-V can be made available as part of the Hyper-V Server operating system, which is a freeware edition of Windows Server. Either way, the host computer needs a CPU with the following technologies: the NX bit, x86-64, hardware-assisted virtualization (Intel VT-x or AMD-V), and Second Level Address Translation (in Windows Server 2012 and later); and at least 2 GB of memory, in addition to what is assigned to each guest machine. The amount of memory assigned to virtual machines depends on the operating system: Windows Server 2008 Standard supports up to 31 GB of memory for running VMs, plus 1 GB for the host OS. Windows Server 2008 R2 Standard supports up to 32 GB, but the Enterprise and Datacenter editions support up to 2 TB. Hyper-V Server 2008 R2 supports up to 1 TB. Windows Server 2012 supports up to 4 TB.
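On a Linux machine, the hardware-assisted virtualization prerequisite listed above can be checked with a small script like the following (a rough sketch; it only reads /proc/cpuinfo, where the vmx flag indicates Intel VT-x and svm indicates AMD-V, and it does not test the NX bit or SLAT support):

```python
def virtualization_support(cpuinfo_path: str = "/proc/cpuinfo") -> str:
    """Report whether the CPU advertises Intel VT-x (vmx) or AMD-V (svm)."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                if "vmx" in flags:
                    return "Intel VT-x present"
                if "svm" in flags:
                    return "AMD-V present"
                return "no hardware virtualization flag found"
    return "no flags line found"

if __name__ == "__main__":
    print(virtualization_support())
```

On Windows itself, the built-in systeminfo command reports a comparable "Hyper-V Requirements" summary at the end of its output.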
The number of CPUs assigned to each virtual machine also depends on the OS: Windows Server 2008 and 2008 R2 support 1, 2, or 4 CPUs per VM; the same applies to Hyper-V Server 2008 R2 Windows Server 2012 supports up to 64 CPUs per VM There is also a maximum for the number of concurrently active virtual machines. Windows Server 2008 and 2008 R2 support 384 per server; Hyper-V Server 2008 supports the same Windows Server 2012 supports 1024 per server; the same applies to Hyper-V Server 2012 Windows Server 2016 supports 8000 per cluster and per node Supported guests Windows Server 2008 R2 The following table lists supported guest operating systems on Windows Server 2008 R2 SP1. Fedora 8 or 9 are unsupported; however, they have been reported to run. Third-party support for FreeBSD 8.2 and later guests is provided by a partnership between NetApp and Citrix. This includes both emulated and paravirtualized modes of operation, as well as several HyperV integration services. Desktop virtualization (VDI) products from third-party companies (such as Quest Software vWorkspace, Citrix XenDesktop, Systancia AppliDis Fusion and Ericom PowerTerm WebConnect) provide the ability to host and centrally manage desktop virtual machines in the data center while giving end users a full PC desktop experience. Guest operating systems with Enlightened I/O and a hypervisor-aware kernel such as Windows Server 2008 and later server versions, Windows Vista SP1 and later clients and offerings from Citrix XenServer and Novell will be able to use the host resources better since VSC drivers in these guests communicate with the VSPs directly over VMBus. Non-"enlightened" operating systems will run with emulated I/O; however, integration components (which include the VSC drivers) are available for Windows Server 2003 SP2, Windows Vista SP1 and Linux to achieve better performance. Linux support On July 20, 2009, Microsoft submitted Hyper-V drivers for inclusion in the Linux kernel under the terms of the GPL. Microsoft was required to submit the code when it was discovered that they had incorporated a Hyper-V network driver with GPL-licensed components statically linked to closed-source binaries. Kernels beginning with 2.6.32 may include inbuilt Hyper-V paravirtualization support which improves the performance of virtual Linux guest systems in a Windows host environment. Hyper-V provides basic virtualization support for Linux guests out of the box. Paravirtualization support requires installing the Linux Integration Components or Satori InputVSC drivers. Xen-enabled Linux guest distributions may also be paravirtualized in Hyper-V. Microsoft officially supported only SUSE Linux Enterprise Server 10 SP1/SP2 (x86 and x64) in this manner, though any Xen-enabled Linux should be able to run. In February 2008, Red Hat and Microsoft signed a virtualization pact for hypervisor interoperability with their respective server operating systems, to enable Red Hat Enterprise Linux 5 to be officially supported on Hyper-V. Windows Server 2012 Hyper-V in Windows Server 2012 and Windows Server 2012 R2 changes the support list above as follows: Hyper-V in Windows Server 2012 adds support for Windows 8.1 (up to 32 CPUs) and Windows Server 2012 R2 (64 CPUs); Hyper-V in Windows Server 2012 R2 adds support for Windows 10 (32 CPUs) and Windows Server 2016 (64 CPUs). Minimum supported version of CentOS is 6.0. Minimum supported version of Red Hat Enterprise Linux is 5.7. 
Maximum number of supported CPUs for Windows Server and Linux operating systems is increased from four to 64.
Windows Server 2012 R2 Hyper-V on Windows Server 2012 R2 added the Generation 2 VM. Backward compatibility Hyper-V, like Microsoft Virtual Server and Windows Virtual PC, saves each guest OS to a single virtual hard disk file. It supports the older .vhd format, as well as the newer .vhdx. Older .vhd files from Virtual Server 2005, Virtual PC 2004 and Virtual PC 2007 can be copied and used in Hyper-V, but any old virtual machine integration software (equivalents of Hyper-V Integration Services) must be removed from the virtual machine. After the migrated guest OS is configured and started using Hyper-V, the guest OS will detect changes to the (virtual) hardware. Installing "Hyper-V Integration Services" installs five services to improve performance, and at the same time adds the new guest video and network card drivers. Limitations Audio Hyper-V does not virtualize audio hardware. Before Windows 8.1 and Windows Server 2012 R2, it was possible to work around this issue by connecting to the virtual machine with Remote Desktop Connection over a network connection and using its audio redirection feature. Windows 8.1 and Windows Server 2012 R2 add the enhanced session mode, which provides redirection without a network connection. Optical drive pass-through Optical drives virtualized in the guest VM are read-only. Officially, Hyper-V does not support passing the host/root operating system's optical drives through to guest VMs. As a result, burning to discs and audio CD or video CD/DVD-Video playback are not supported; however, a workaround exists using the iSCSI protocol: an iSCSI target backed by the host machine's optical drive can be set up on the host and accessed from the guest using the standard Microsoft iSCSI initiator. Microsoft produces its own iSCSI Target software, or alternative third-party products can be used. VT-x/AMD-V handling Hyper-V uses Intel VT-x or AMD-V hardware virtualization on x86 processors. Since Hyper-V is a native hypervisor, as long as it is installed, third-party software cannot use VT-x or AMD-V. For instance, the Intel HAXM Android device emulator (used by Android Studio or Microsoft Visual Studio) cannot run while Hyper-V is installed. Client operating systems x64 SKUs of Windows 8, 8.1, and 10 Pro, Enterprise, and Education come with a special version of Hyper-V called Client Hyper-V. Features added per version Windows Server 2012 Windows Server 2012 introduced many new features in Hyper-V:
Hyper-V Extensible Virtual Switch
Network virtualization
Multi-tenancy
Storage Resource Pools
.vhdx disk format supporting virtual hard disks as large as 64 TB with power failure resiliency
Virtual Fibre Channel
Offloaded data transfer
Hyper-V Replica
Cross-premises connectivity
Cloud backup
Windows Server 2012 R2 With Windows Server 2012 R2, Microsoft introduced another set of new features:
Shared virtual hard disk
Storage quality of service
Generation 2 Virtual Machine
Enhanced session mode
Automatic virtual machine activation
Windows Server 2016 Hyper-V in Windows Server 2016 and Windows 10 1607 adds:
Nested virtualization (Intel processors only; both the host and guest instances of Hyper-V must be Windows Server 2016 or Windows 10 or later)
Discrete Device Assignment (DDA), allowing direct pass-through of compatible PCI Express devices to guest virtual machines
Windows containers (to achieve isolation at the app level rather than the OS level)
Shielded VMs using remote attestation servers
Monitoring of host CPU resource utilization by guests and protection (limiting CPU usage by guests)
Windows Server 2019 Hyper-V in Windows Server 2019 and Windows 10 1809 adds:
Shielded Virtual Machines improvements, including Linux compatibility
Virtual Machine Encrypted Networks
vSwitch Receive Segment Coalescing
Dynamic Virtual Machine Multi-Queue (d.VMMQ)
Persistent Memory support
Significant feature and performance improvements to Storage Spaces Direct and Failover Clustering
See also Comparison of platform virtualization software Features new to Windows 8 Virtual disk image Microsoft Open Specification Promise Microsoft Remote Web Workplace Virtual private server Windows Subsystem for Linux Web interface for Hyper-V Windows Admin Center Hv Manager References Further reading External links Virtualization Fabric Design Considerations Guide Hyper-V on Microsoft TechNet Benchmarking Hyper-V on Windows Server 2008 R2 x64 Hyper-V Architecture Windows Admin Center Hv Manager Virtualization software Windows components Windows Server
Operating System (OS)
697
WinFS WinFS (short for Windows Future Storage) was the code name for a canceled data storage and management system project based on relational databases, developed by Microsoft and first demonstrated in 2003 as an advanced storage subsystem for the Microsoft Windows operating system, designed for persistence and management of structured, semi-structured and unstructured data. WinFS includes a relational database for storage of information, and allows any type of information to be stored in it, provided there is a well-defined schema for the type. Individual data items can then be related together by relationships, which are either inferred by the system based on certain attributes or explicitly stated by the user. As the data has a well-defined schema, any application can reuse the data; and by using the relationships, related data can be effectively organized as well as retrieved. Because the system knows the structure and intent of the information, it can be used to make complex queries that enable advanced searching through the data and aggregating various data items by exploiting the relationships between them. While WinFS and its shared type schema make it possible for an application to recognize the different data types, the application still has to be coded to render the different data types. Consequently, it would not allow development of a single application that can view or edit all data types; rather, what WinFS enables applications to do is understand the structure of all data and extract the information that they can use further. When WinFS was introduced at the 2003 Professional Developers Conference, Microsoft also released a video presentation, named IWish, showing mockup interfaces of applications that take advantage of a unified type system. The concepts shown in the video ranged from applications using the relationships of items to dynamically offer filtering options, to applications grouping multiple related data types and rendering them in a unified presentation. WinFS was billed as one of the pillars of the "Longhorn" wave of technologies, and was to ship as part of the next version of Windows. It was subsequently decided that WinFS would ship after the release of Windows Vista, but those plans were shelved in June 2006, with some of its component technologies being integrated into ADO.NET and Microsoft SQL Server. Motivation Many filesystems found on common operating systems, including the NTFS filesystem used in modern versions of Microsoft Windows, store files and other objects only as a stream of bytes, and have little or no information about the data stored in the files. Such file systems also provide only a single way of organizing the files, namely via directories and file names. Because a file system has no knowledge about the data it stores, applications tend to use their own, often proprietary, file formats. This hampers sharing of data between multiple applications. It becomes difficult to create an application which processes information from multiple file types, because the programmers have to understand the structure and semantics of all the files. Using common file formats is a workaround to this problem, but not a universal solution; there is no guarantee that all applications will use the format. Data with a standardized schema, such as XML documents and relational data, fare better, as they have a standardized structure and run-time requirements.
Also, a traditional file system can retrieve and search data based only on the filename, because the only knowledge it has about the data is the name of the files that store the data. A better solution is to tag files with attributes that describe them. Attributes are metadata about the files, such as the type of file (document, picture, music), the creator, and so on. This allows files to be searched for by their attributes, in ways not possible using a folder hierarchy, such as finding "pictures which have person X". The attributes can be recognized either by the file system natively or via some extension. Desktop search applications take this concept a step further. They extract data, including attributes, from files and index it. To extract the data, they use a filter for each file format. This allows for searching based on both the file's attributes and the data in it. However, this still does not help in managing related data, as disparate items do not have any relationships defined. For example, it is impossible to search for "the phone numbers of all persons who live in Acapulco and each have more than 100 appearances in my photo collection and with whom I have had e-mail within the last month". Such a search could not be done unless it is based on a data model which has both the semantics and the relationships of the data defined. WinFS aims to provide such a data model and the runtime infrastructure that can be used to store the data, as well as the relationships between data items according to the data model, doing so at a satisfactory level of performance. Overview WinFS natively recognizes different types of data, such as picture, e-mail, document, audio, video, calendar, contact, rather than just leaving them as raw unanalyzed bytestreams (as most file systems do). Data stored and managed by the system are instances of the data type recognized by the WinFS runtime. The data are structured by means of properties. For example, an instance of a résumé type will surface the data by exposing properties, such as Name, Educational Qualification, Experience. Each property may be a simple type (strings, integers, dates) or a complex type (contacts). Different data types expose different properties. Besides that, WinFS also allows different data instances to be related together; for example, a document and a contact can be related by an Authored By relationship. Relationships are also exposed as properties; for example, if a document is related to a contact by a Created By relationship, then the document will have a Created By property. When it is accessed, the relationship is traversed and the related data returned. By following the relations, all related data can be reached. WinFS promotes sharing of data between applications by making the data types accessible to all applications, along with their schemas. When an application wants to use a WinFS type, it can use the schema to find the data structure and can use the information. So, an application has access to all data on the system even though the developer did not have to write parsers to recognize the different data formats. It can also use relationships and related data to create dynamic filters to present the information the application deals with. The WinFS API further abstracts the task of accessing data. All WinFS types are exposed as .NET objects with the properties of the object directly mapping to the properties of the data type.
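To make that last point concrete, here is a hedged sketch of what property-based access looks like. It is written in the style of the WinFS Beta 1 samples that appear later in this article (ItemContext, FindOne, IncomingRelationships); the Contact type, its nested properties, and the query string are illustrative assumptions rather than the shipped schema.

using (ItemContext ic = ItemContext.Open()) //bind to the default WinFS store
{
    //Retrieve an item by describing it, not by giving a file path.
    Contact c = (Contact)ic.FindOne(typeof(Contact), "Name.LastName = 'Doe'");
    //Properties of the .NET object map directly to properties of the data type...
    string firstName = c.Name.FirstName;
    //...and relationships are surfaced as properties as well.
    foreach (Relationship r in c.IncomingRelationships)
    {
        //each r links this contact to some related item
    }
}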
Also, by letting different applications that deal with the same data share the same WinFS data instance rather than storing the same data in different files, the hassles of synchronizing the different stores when the data change are removed. Thus WinFS can reduce redundancies. Access to all the data in the system allows complex searches for data across all the data items managed by WinFS. In the example used above ("the phone numbers of all persons who live in Acapulco and each have more than 100 appearances in my photo collection and with whom I have had e-mail within last month"), WinFS can traverse the subject relationship of all the photos to find the contact items. Similarly, it can filter all emails from the last month and access the communicated-with relation to reach the contacts. The common contacts can then be figured out from the two sets of results and their phone numbers retrieved by accessing the suitable property of the contact items. In addition to fully schematized data (like XML and relational data), WinFS supports semi-structured data (such as images, which have an unstructured bitstream plus structured metadata) as well as unstructured data (such as files). It stores the unstructured components as files while storing the structured metadata in the structured store. Internally, WinFS uses a relational database to manage data. It does not limit the data to belonging to any particular data model. The WinFS runtime maps the schema to a relational schema, defining the tables it will store the types in and the primary keys and foreign keys required to represent the relationships. WinFS includes mappings for object and XML schemas by default. Mappings for other schemas must be specified. Object schemas are specified in XML; WinFS generates code to surface the schemas as .NET classes. ADO.NET can be used to directly specify the relational schema, though a mapping to the object schema must be provided to surface it as classes. Relationship traversals are performed as joins on these tables. WinFS also automatically creates indexes on these tables, to enable fast access to the information. Indexing speeds up joins significantly, so traversing relationships to retrieve related data is very fast. Indexes are also used during information search; searching and querying use the indexes to quickly complete the operations, much like desktop search systems. Development The development of WinFS is an extension of a feature that was initially planned in the early 1990s. Dubbed Object File System, it was supposed to be included as part of Cairo. OFS was supposed to have powerful data aggregation features, but the Cairo project was shelved, and with it OFS. However, later during the development of COM, a storage system, called Storage+, based on then-upcoming SQL Server 8.0, was planned, which was slated to offer similar aggregation features. This, too, never materialized, and a similar technology, Relational File System (RFS), was conceived to be launched with SQL Server 2000. However, SQL Server 2000 ended up being a minor upgrade to SQL Server 7.0 and RFS was not implemented. The concept was not scrapped, and served as the base for WinFS. WinFS was initially planned for inclusion in Windows Vista, and build 4051 of Windows Vista, then called by its codename "Longhorn", given to developers at the Microsoft Professional Developers Conference in 2003, included WinFS, but it suffered from significant performance issues.
In August 2004, Microsoft announced that WinFS would not ship with Windows Vista; it would instead be available as a downloadable update after Vista's release. On August 29, 2005, Microsoft quietly made Beta 1 of WinFS available to MSDN subscribers. It worked on Windows XP, and required the .NET Framework to run. The WinFS API was included in the System.Storage namespace. The beta was refreshed on December 1, 2005 to be compatible with version 2.0 of the .NET Framework. WinFS Beta 2 was planned for some time later in 2006, and was supposed to include integration with Windows Desktop Search, so that search results would include results from both regular files and WinFS stores, as well as allow access to WinFS data using ADO.NET. On June 23, 2006, the WinFS team at Microsoft announced that WinFS would no longer be delivered as a separate product, and some components would be brought under the umbrella of other technologies. Many of the principal features Microsoft intended to provide with WinFS (a pane for metadata property editing, breadcrumb-based property navigation, filtering or stacking items over properties, incremental search, and saved searches) were incorporated into Windows Vista. Query composition, a feature of WinFS that allowed users to perform additional searches that reuse the results of a previous query, was also incorporated into Windows Vista. Other component technologies were absorbed elsewhere: the object-relational mapping components went into the ADO.NET Entity Framework; support for unstructured data, an adminless mode of operation, support for file system objects via the FILESTREAM data type, and hierarchical data went into SQL Server 2008, then codenamed Katmai; integration with Win32 APIs and the Windows Shell, and support for traversal of hierarchies by traversing relationships, went into later releases of Microsoft SQL Server; and the synchronization components went into the Microsoft Sync Framework. In 2013, Bill Gates cited WinFS as his greatest disappointment at Microsoft, saying that the idea of WinFS was ahead of its time and would re-emerge. Data storage Architecture WinFS uses a relational engine, derived from SQL Server 2005, to provide the data-relations mechanism. WinFS stores are simply SQL Server database (.MDF) files with the FILESTREAM attribute set. These files are stored in the access-restricted folder named "System Volume Information" (placed in the volume root), in folders under the folder "WinFS" with names of GUIDs of these stores. At the bottom of the WinFS stack lies WinFS Core, which interacts with the filesystem and provides file-access and -addressing capabilities. The relational engine leverages the WinFS core services to present a structured store and other services such as locking, which the WinFS runtime uses to implement the functionality. The WinFS runtime exposes services such as Synchronization and Rules that can be used to synchronize WinFS stores or perform certain actions on the occurrence of certain events. WinFS runs as a service comprising three processes:
WinFS.exe, which hosts the relational datastore
WinFSSearch.exe, which hosts the indexing and querying engine
WinFPM.exe (WinFS File Promotion Manager), which interfaces with the underlying file system
It allows programmatic access to its features via a set of .NET Framework APIs. These enable applications to define custom-made data types, define relationships among data, store and retrieve information, and allow advanced searches.
The applications can then aggregate the data and present the aggregated data to the user. Data store WinFS stores data in relational stores, which are exposed as virtual locations called stores. A WinFS store is a common repository where any application can store data along with its metadata, relationships and schema. The WinFS runtime can apply certain relationships itself; for example, if the value of the subject property of a picture and the name property of a contact are the same, then WinFS can relate the contact with the picture. Relations can also be specified by other applications or the user. WinFS provides unified storage, but stops short of defining the format that is to be stored in the data stores. Instead it supports writing data in application-specific formats. But applications must provide a schema that defines how the file format should be interpreted. For example, a schema could be added to allow WinFS to understand how to read, and thus be able to search and analyze, (say) a PDF file. By using the schema, any application can read data from any other application, and this also allows different applications to write in each other's format by sharing the schema. Multiple WinFS stores can be created on a single machine. This allows different classes of data to be kept segregated; for example, official documents and personal documents can be kept in different stores. WinFS, by default, provides only one store, named "DefaultStore". WinFS stores are exposed as shell objects, akin to Virtual folders, which dynamically generate a list of all items present in the store and present them in a folder view. The shell object also allows searching information in the datastore. A data unit that has to be stored in a WinFS store is called a WinFS Item. A WinFS item, along with the core data item, also contains information on how the data item is related to other data. This Relationship is stored in terms of logical links. Links specify which other data items the current item is related with. Put another way, links specify the relationship of the data with other data items. Links are physically stored using a link identifier, which specifies the name and intent of the relationship, such as type of or consists of. The link identifier is stored as an attribute of the data item. All the objects that have the same link ID are considered to be related. An XML schema, defining the structure of the data items that will be stored in WinFS, must be supplied to the WinFS runtime beforehand. In Beta 1 of WinFS, the schema assembly had to be added to the GAC before it could be used. Data model WinFS models data using the data items, along with their relationships, extensions and rules governing its usage. WinFS needs to understand the type and structure of the data items, so that the information stored in the data item can be made available to any application that requests it. This is done by the use of schemas. For every type of data item that is to be stored in WinFS, a corresponding schema needs to be provided to define the type, structure and associations of the data. These schemas are defined using XML. Predefined WinFS schemas include schemas for documents, e-mail, appointments, tasks, media, audio and video, and also include system schemas covering configuration, programs, and other system-related data.
Custom schemas can be defined on a per-application basis, in situations where an application wants to store its data in WinFS but not share the structure of that data with other applications, or they can be made available across the system. Type system The most important difference between a file system and WinFS is that WinFS knows the type of each data item that it stores, and the type specifies the properties of the data item. The WinFS type system is closely associated with the .NET Framework's concept of classes and inheritance. A new type can be created by extending and nesting any predefined types. WinFS provides four predefined base types: Items, Relationships, ScalarTypes and NestedTypes. An Item is the fundamental data object which can be stored, and a Relationship is the relation or link between two data items. Since all WinFS items must have a type, the type of item stored defines its properties. The properties of an Item may be ScalarTypes, which define the smallest unit of information a property can have, or NestedTypes, which are collections of more than one ScalarType and/or NestedType. All WinFS types are made available as .NET CLR classes. Any object represented as a data unit, such as a contact, image, video or document, can be stored in a WinFS store as a specialization of the Item type. By default, WinFS provides Item types for Files, Contact, Documents, Pictures, Audio, Video, Calendar, and Messages. The File Item can store any generic data, which is stored in file systems as files. But unless an advanced schema is provided for the file, by defining it to be a specialized Item, WinFS will not be able to access its data. Such a file Item can only support being related to other Items. A developer can extend any of these types, or the base type Item, to provide a type for their custom data. The data contained in an Item is defined in terms of properties, or fields that hold the actual data. For example, an Item Contact may have a field Name that is a ScalarType, and one field Address, a NestedType, which is further composed of two ScalarTypes. To define this type, the base class Item is extended and the necessary fields are added to the class. A NestedType field can be defined as another class that contains the two ScalarType fields (a hedged sketch of such classes follows below). Once the type is defined, a schema has to be defined, which denotes the primitive type of each field; for example, the Name field is a String, and the Address field is a custom-defined Address class, both the fields of which are Strings. Other primitive types that WinFS supports are Integer, Byte, Decimal, Float, Double, Boolean and DateTime, among others. The schema will also define which fields are mandatory and which are optional. The Contact Item defined in this way will be used to store information regarding the Contact, by populating the properties fields and storing it. Only those fields marked as mandatory need to be filled in during the initial save. Other fields may be populated later by the user, or not populated at all. If more property fields, such as last conversed date, need to be added, this type can be extended to accommodate them. Item types for other data can be defined similarly. WinFS creates tables for all defined Items. All the fields defined for the Item form the columns of the table and all instances of the Item are stored as rows in the table for the respective Items. Whenever some field in the table refers to data in some other table, it is considered a relationship.
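As a hedged sketch of the Contact walkthrough above: in Beta 1 the schema was written in XML and WinFS generated the corresponding .NET classes, so the hand-written shape below is only indicative of what such a generated type would look like. The base classes and field names are taken from the prose, not from the actual SDK.

//A NestedType composed of two ScalarTypes, as described above
public class Address : NestedType
{
    public string Street;   //ScalarType: String
    public string City;     //ScalarType: String
}

//The Contact item extends the base Item type
public class Contact : Item
{
    public string Name;             //ScalarType: String, marked mandatory in the schema
    public Address Address;         //NestedType, optional
    public DateTime LastConversed;  //an extension field added later, as the text describes
}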
The schema of the relationship specifies which tables are involved and what the kind and name of the relationship is. The WinFS runtime manages the relationship schemas. All Items are exposed as .NET CLR objects, with a uniform interface providing access to the data stored in the fields. Thus any application can retrieve an object of any Item type and use the data in the object, without being aware of the physical structure in which the data was stored. WinFS types are exposed as .NET classes, which can be instantiated as .NET objects. Data are stored in these type instances by setting their properties. Once done, they are persisted into the WinFS store. A WinFS store is accessed using an ItemContext class (see the Data retrieval section for details). ItemContext allows transactional access to the WinFS store; i.e., all the operations performed from the time an ItemContext object is bound to a store until it is closed either all succeed or are all rolled back. As changes are made to the data, they are not written to the disc; rather, they are written to an in-memory log. Only when the connection is closed are the changes written to the disc in a batch. This helps to optimize disc I/O. The following code snippet, written in C#, creates a contact and stores it in a WinFS store.

//Connect to the default WinFS store
using (ItemContext ic = ItemContext.Open())
{
    //Create the contact and set the data in appropriate properties
    ContactEAddress contact = new ContactEAddress()
    {
        Name = new PersonName() {                 // Name is a ComplexType
            Displayname = "Doe, John",
            FirstName = "John",
            LastName = "Doe"
        },
        TelephoneNumber = new TelephoneNumber() { // Telephone number is a ComplexType
            Country = CountryCode.Antarctica,
            Areacode = 4567,
            Number = 9876543210
        },
        Age = 111                                 // Age is a SimpleType
    };

    //Add the object to the user's personal folder.
    //This relates the item with the Folder pseudo-type, for backward
    //compatibility, as this lets the item be accessed in a folder
    //hierarchy for apps which are not WinFS native.
    Folder containingFolder = UserDataFolder.FindMyPersonalFolder();
    containingFolder.OutFolderMemberRelationship.AddItem(ic, contact);

    //Find a document and relate with the document. Searching begins by creating an
    //ItemSearcher object. Each WinFS type object contains a GetSearcher() method
    //that generates an ItemSearcher object which searches documents of that type.
    using (ItemSearcher searcher = Document.GetSearcher(ic))
    {
        Document d = searcher.Find(@"Title = 'Some Particular Document'");
        d.OutAuthoringRelationship.AddItem(ic, contact);
    }
    //Since only one document is to be found, the ItemContext.FindOne() method
    //could be used as well.

    //Find a picture and relate with it
    using (ItemSearcher searcher = Picture.GetSearcher(ic))
    {
        Picture p = searcher.Find(@"Occasion = 'Graduation' and Sequence = '3'");
        p.OutSubjectRelationship.AddItem(ic, contact);
    }

    //Persist to the store and close the reference to the store
    ic.Update();
}

Relationships A datum can be related to one other item, giving rise to a one-to-one relationship, or to more than one item, resulting in a one-to-many relationship. The related items, in turn, may be related to other data items as well, resulting in a network of relationships, which is called a many-to-many relationship. Creating a relationship between two Items creates another field in the data of the Items concerned which refers to the row in the other Item's table where the related object is stored.
In WinFS, a Relationship is an instance of the base type Relationship, which is extended to signify a specialization of a relation. A Relationship is a mapping between two items, a Source and a Target. The source has an Outgoing Relationship, whereas the target gets an Incoming Relationship. WinFS provides three types of primitive relationships: Holding Relationships, Reference Relationships and Embedding Relationships. Any custom relationship between two data types is an instance of one of these relationship types. Holding Relationships specify ownership and lifetime (which defines how long the relationship is valid) of the Target Item. For example, the Relationship between a folder and a file, or between an Employee and his Salary record, is a Holding Relationship: the target is to be removed when the source is removed. A Target Item can be part of more than one Holding Relationship. In such a case, it is to be removed when all the Source Items are removed. Reference Relationships provide linkage between two Items, but do not have any lifetime associated; i.e., each Item will continue to be stored even without the other. Embedding Relationships give order to the two Items that are linked by the Relationship, such as the Relationship between a Parent Item and a Child Item. Relationships between two Items can either be set programmatically by the application creating the data, or the user can use the WinFS Item Browser to manually relate the Items. A WinFS item browser can also graphically display the items and how they are related, to enable the user to know how their data are organized. Rules WinFS includes Rules, which are executed when a certain condition is met. WinFS rules work on data and data relationships. For example, a rule can be created that states that whenever an Item is created which contains a field "Name", and the value of that field is some particular name, a relationship should be created that relates the Item with some other Item. WinFS rules can also access any external application. For example, a rule can be built which launches a Notify application whenever a mail is received from a particular contact. WinFS rules can also be used to add new property fields to existing data Items. WinFS rules are also exposed as .NET CLR objects. As such, any rule can be used for any purpose. A rule can even be extended by inheriting from it to form a new rule that consists of the condition and action of the parent rule plus something more. RAV WinFS supports creating Rich Application Views (RAV) by aggregating different data in a virtual table format. Unlike database views, where each individual element can only be a scalar value, RAVs can have complex Items or even collections of Items. The actual data can be across multiple data types or instances and can even be retrieved by traversing relationships. RAVs are intrinsically paged (dividing the entire set of data into smaller pages containing disconnected subsets of the data) by the WinFS runtime. The page size is defined during creation of the view, and the WinFS API exposes methods to iterate over the pages. RAVs also support modification of the view according to different grouping parameters. Views can also be queried against. Access control Even though all data are shared, not everything is equally accessible. WinFS uses the Windows authentication system to provide two data protection mechanisms. First, there is share-level security that controls access to a WinFS share.
Second, there is item-level security that supports NT-compatible security descriptors. The process accessing the item must have sufficient privileges to access it. Also, in Vista there is the concept of an "integrity level" for an application: higher-integrity data cannot be accessed by a lower-integrity process. Data retrieval The primary mode of data retrieval from a WinFS store is querying the WinFS store according to some criteria, which returns an enumerable set of items matching the criteria. The criteria for the query are specified using the OPath query language. The returned data are made available as instances of the type schemas, conforming to the .NET object model. The data in them can be accessed by accessing the properties of individual objects. Relations are also exposed as properties. Each WinFS Item has two properties, named IncomingRelationships and OutgoingRelationships, which provide access to the set of relationship instances the item participates in. The other item which participates in a given relationship instance can be reached through the proper relationship instance. The fact that the data can be accessed using its description, rather than location, can be used to provide end-user organizational capabilities without being limited to the hierarchical organization used in file systems. In a file system, each file or folder is contained in only one folder. But WinFS Items can participate in any number of holding relationships, with any other items. As such, end users are not limited to only file/folder organization. Rather, a contact can become a container for documents; a picture, a container for contacts; and so on. For legacy compatibility, WinFS includes a pseudo-type called Folder, which is present only to participate in holding relationships and emulate file/folder organization. Since any WinFS Item can be related to more than one Folder item, from an end-user perspective, an item can reside in multiple folders without duplicating the actual data. Applications can also analyze the relationship graphs to present various filters. For example, an email application can analyze the related contacts and the relationships of the contacts with restaurant bills and dynamically generate filters like "Emails sent to people I had lunch with". Searches The WinFS API provides a class called ItemContext, which is bound to a WinFS store. The ItemContext object can be used to scope the search to the entire store or a subset of it. It also provides transactional access to the store. An object of this class can then spawn an ItemSearcher object, which takes the type (an object representing the type) of the item to be retrieved or the relationship, and the OPath query string representing the criteria for the search. A set of all matches is returned, which can then be bound to a UI widget for displaying en masse or enumerating individually. The properties of items can also be modified and then stored back to the data store to update the data. The ItemContext object is closed (which marks the end of the association of the object with the store) when the queries have been made or changes merged into the store. Related items can also be accessed through the items. The IncomingRelationships and OutgoingRelationships properties give access to the full set of relationship instances, typed to the name of the relationship. These relationship objects expose the other item via a property.
So, for example, if a picture is related to a contact, the contact can be accessed by traversing the relationship as:

ContactsCollection contacts = picture.OutgoingRelationships.Cast(typeof(Contact)).Value;
//This retrieves the collection of all outgoing relationships from a picture object
//and filters down the contacts reachable from them and retrieves its value.

//Or the relationship can be statically specified as
ContactsCollection contacts = picture.OutgoingRelationships.OutContactRelationship.Contact;

An OPath query string allows the search parameters to be expressed using Item properties, embedded Items, as well as Relationships. It can specify a single search condition, such as "title = 'Something'", or a compound condition such as "title = 'Title 1' || title = 'Title 2' && author = 'Someone'". These boolean and relational operations can be specified using C#-like operators such as &&, ||, = and !=, as well as their English-like equivalents like EQUAL and NOT EQUAL. SQL-like operators such as LIKE, GROUP BY and ORDER BY are also supported, as are wildcard conditions. So, "title LIKE 'any*'" is a valid query string. These operators can be used to execute complex searches such as

using (ItemContext ic = ItemContext.Open())
{
    //Searching begins by creating a ItemSearcher object. The searcher is created from a
    //relationship instance because the contacts being searched for are in relation. The
    //first parameter defines the scope of the search. An ItemContext as the scope means
    //the entire store is to be searched. Scope can be limited to a set of Items which may
    //be in a holding relationship with the contacts. In that case, the set is passed as
    //the scope of the search.
    ItemSearcher searcher = OutContactRelationship.GetTargetSearcher(ic, typeof(Contact));
    ContactCollection contacts = searcher.FindAll("OutContactRelationship.Contact.Name LIKE 'A*'");
}

The above code snippet creates an ItemSearcher object that searches on the OutContactRelationship instance that relates pictures and contacts, in effect searching all pictures related with a contact. It then runs the query "Name LIKE 'A*'" on all contacts reachable through OutContactRelationship, returning the list of "contacts whose names start with A and whose pictures I have". Similarly, more relationships could be taken into account to further narrow down the results. Further, a natural-language query processor, which parses a query in natural language and creates a well-formed OPath query string to search via the proper relationships, can allow users to make searches such as "find the name of the wine I had with person X last month", provided financial management applications are using WinFS to store bills. Different relations specify different sets of data. So when a search is made that encompasses multiple relations, the different sets of data are retrieved individually and the common subset is computed; the resulting set contains only those data items that correspond to all the relations. Notifications WinFS includes better support for handling data that changes frequently. Using WinFS Notifications, applications choose to be notified of changes to selected data Items. WinFS will raise an ItemChangedEvent, using the .NET Event model, when a subscribed-to Item changes, and the event will be published to the applications. Information Agent WinFS includes an Information Agent feature for the management, retrieval, and storage of end-user notification rules and preferences for changes to items in the data store.
Using Information Agent, it is possible to automatically define relations to new items based on events such as appointments; for example, appointments can be related to photos based on the dates the photos were taken, enabling queries such as "find all photos taken on this birthday" without needing to know the actual date of the event. Other examples include automatically moving new items to specific folders based on a rule determined by appointment times and the dates the photos were taken ("when I import a photo taken during a business event, move it to the Business Events folder"), or more complex possibilities. Information Agent can also forward notifications to other devices ("if I receive a high-priority email from my boss, send a notification to my phone") and is similar to the Rules and Alerts functionality of Microsoft Outlook. Data sharing WinFS allows easy sharing of data between applications, and among multiple WinFS stores, which may reside on different computers, by copying to and from them. A WinFS item can also be copied to a non-WinFS file system, but unless that data item is put back into the WinFS store, it will not support the advanced services provided by WinFS. The WinFS API also provides some support for sharing with non-WinFS applications. WinFS exposes a shell object to access WinFS stores. This object maps WinFS items to a virtual folder hierarchy, and can be accessed by any application. Virtual folders can automatically share new content referenced by the query with users (a virtual folder for "all vacation photos", for example, automatically shares new items returned by this query). WinFS data can also be manually shared using network shares, by sharing the legacy shell object. Non-WinFS file formats can be stored in WinFS stores using the File Item provided by WinFS. Importers can be written to convert specific file formats to WinFS Item types. In addition, WinFS provides services to automatically synchronize items in two or more WinFS stores, subject to some predefined condition, such as "share only photos" or "share photos that have an associated contact X". The stores may be on different computers. Synchronization is done in a peer-to-peer fashion; there is no central authority. A synchronization can be manual, automatic, or scheduled. During synchronization, WinFS finds the new and modified Items, and updates accordingly. If two or more changes conflict, WinFS can either resort to automatic resolution based on predefined rules, or defer the synchronization for manual resolution. WinFS also updates the schemas, if required. Application support Shell namespace WinFS Beta 1 includes a shell namespace extension, which surfaces WinFS stores as top-level objects in the My Computer view. Files can be copied into and out of the stores, and applications can save directly to them. Even folders such as My Documents can be redirected to the stores. WinFS uses Importer plug-ins to analyze files as they are being imported to the store and create proper WinFS schemas and objects, and when taking the objects out, to re-pack them into files. If importers for certain files are not installed, they are stored as generic File types. Microsoft Rave Microsoft Rave is an application that shipped with WinFS Beta 1. It allows synchronization of two or more WinFS stores, and supports synchronization in full-mesh mode as well as the central-hub topology.
While synchronizing, Microsoft Rave will determine the changes made to each store since the last sync, and update accordingly. When applying the changes, it also detects whether there is any conflict, i.e., the same data has been changed on both stores since the last synchronization. It will either log the conflicting data for later resolution or have it resolved immediately. Microsoft Rave uses peer-to-peer technology to communicate and transfer data. StoreSpy With WinFS Beta 1, Microsoft included an unsupported application called StoreSpy, which allowed one to browse WinFS stores by presenting a hierarchical view of WinFS Items. It automatically generated virtual folders based on access permissions, date and other metadata, and presented them in a hierarchical tree view, akin to the way traditional folders are presented. The application generated tabs for different Item types. StoreSpy allowed viewing Items, Relationships, MultiSets, Nested Elements, Extensions and other types in the store along with their full metadata. It also presented a search interface to perform manual searches and save them as virtual folders. The application also presented a graphical view of WinFS Rules. However, it did not allow editing of Items or their properties, though this was slated for inclusion in a future release; the WinFS project was cut back before it could materialize. Type Browser WinFS also includes another application, named WinFS Type Browser, which can be used to browse the WinFS types, as well as visualize the hierarchical relationships between WinFS types. A WinFS type, whether a built-in type or a custom schema, can be visualized along with all the properties and methods it supports. It also shows the types it derives from, as well as other types that extend its type schema. However, while it was included with WinFS, it was released as an unsupported tool. OPather WinFS Beta 1 also includes an unsupported application, named OPather. It presents a graphical interface for writing OPath queries. It can be used by selecting the target object type and specifying the parameters of the query. It also includes an Intellisense-like parameter completion feature. It can then be used to perform visualization tasks such as binding the results of a query to a DataGrid control, creating views of the data in WinFS itself, or just extracting the query string. Project "Orange" Microsoft launched a project to build a data visualization application for WinFS. It was codenamed "Project Orange" and was supposedly built using Windows Presentation Foundation. It was supposed to provide exploration of Items stored in WinFS stores, and data relationships were supposed to be a prominent part of the navigation model. It was also supposed to allow people to organize WinFS stores graphically, productizing many of the concepts shown in the IWish concept video (WMV file). However, since the WinFS project went dark, the status of this project is unknown. See also Desktop organizer GNOME Storage – a storage management system for the GNOME desktop NEPOMUK-KDE ReFS Relational database management system (RDBMS) References External links WinFS Blog WinFS at the Microsoft Developer Network (Internet Archive cache) Channel 9 Videos WinFS Newsgroup WinFS Beta 1 Preview WinFS area on NetFXGuide.com (Internet Archive cache) Microsoft application programming interfaces Windows disk file systems Windows administration Semantic file systems Articles with example C Sharp code
Operating System (OS)
698
Windows Phone 8.1 Windows Phone 8.1 was the third generation of Microsoft's Windows Phone mobile operating system, succeeding Windows Phone 8. Rolled out at Microsoft's Build Conference in San Francisco, California, on April 2, 2014, it was released in final form to Windows Phone developers on April 14, 2014, and reached general availability on August 4, 2014. All Windows Phones running Windows Phone 8 can be upgraded to Windows Phone 8.1, with release dependent on carrier rollout dates. Windows Phone 8.1 is also the last version to use the Windows Phone brand name, as it was succeeded by Windows 10 Mobile. Some Windows Phone 8.1 devices are capable of being upgraded to Windows 10 Mobile, although Microsoft delayed the upgrade and reduced the supported device list from its initial promise. Support for Windows Phone 8.1 ended on July 11, 2017. History Windows Phone 8.1 was first rumored to be Windows Phone Blue, a series of updates to Microsoft's mobile operating system that would coincide with the release of Windows 8.1. Although Microsoft had originally planned to release WP8.1 in late 2013, shortly after the release of its PC counterpart, general distribution of the new operating system was pushed back until early 2014. Instead of waiting over a year to add new features to Windows Phone 8, Microsoft opted to release three incremental updates to its existing mobile OS. These updates were delivered with corresponding firmware updates for specific devices. The updates included GDR2 (Lumia Amber), which introduced features such as "Data Sense", and GDR3 (Lumia Black), which brought support for quad-core processors, 1080p high-definition screens of up to six inches, the addition of a "Driving Mode", and extra rows of live tiles for larger "phablet" devices. The updated operating system's final name was leaked to the public when Microsoft released the Windows Phone 8.1 SDK to developers on February 10, 2014, but it wasn't until Microsoft's Build conference keynote on April 2, 2014, that Windows Phone 8.1 was officially announced, alongside the Windows 8.1 Update. The final shipping code was released to registered users of the "Preview for Developers" program on April 14, 2014, and to the general public in subsequent months, the actual release date being determined by the devices' wireless carriers and accompanied by firmware updates, including Lumia Cyan. Preview for Developers The "Preview for Developers" program was initiated in October 2013 with the release of Windows Phone 8 Update 3. The program was intended for developers and enthusiasts to gain immediate access to OS updates as they become available from Microsoft, bypassing wireless carriers and OEMs who test changes before including device-specific firmware updates. Users of the "Preview for Developers" program do not void their warranty in most cases and can install any future firmware that is included with their carrier's official rollout of Windows Phone 8.1. The Windows Phone software updates delivered through "Preview for Developers" are complete, finished versions of the OS, as opposed to the Windows 10 Mobile builds in the Windows Insider program, which are preview versions of the software that are intended for users to try out new features before the final release and may contain bugs. Features Windows Phone 8.1 introduces a host of notable new features, most of which were unveiled in a preview released to developers on February 10, 2014.
Cortana Cortana is a personal virtual assistant that was added in Windows Phone 8.1, and is similar to Google Now and Apple's Siri. The Cortana name derives from the Halo video game series, which is a Microsoft franchise exclusive to Xbox and Windows. Cortana's features include being able to set reminders, recognize natural voice input without the user having to enter a predefined series of commands, and answer questions using information from Bing (such as current weather and traffic conditions, sports scores, and biographies). Cortana also uses a special feature called a "Notebook", where it automatically gathers information about the interests of the user based on usage and allows the user to input additional personal information, such as quiet hours and close friends who are allowed to get through to the user during these quiet hours. Users can also delete information from the "Notebook" if they deem it undesirable for Cortana to know. Windows 8.1's universal Bing SmartSearch features are incorporated into Cortana, which replaces the previous Bing Search app activated when a user presses the "Search" button on their device. This feature, which is currently in beta, was released in the United States in the first half of 2014 and in China, the United Kingdom, India, Canada and Australia in August 2014. Microsoft has committed to updating Cortana twice a month and adding features. The new features may include more "easter egg" replies, improvements in the UI and better voice modulations. Web Windows Phone 8.1 uses a mobile version of Internet Explorer 11 as the default web browser. IE11 carries over many of its desktop counterpart's improvements, including support for WebGL, normal mapping, InPrivate mode, Reading mode, and the ability to swipe left or right to navigate to a previous webpage and back. The updated browser also includes a new HTML5 video web player with support for inline playback and closed captions, Windows 8-style website live tiles, and the ability to save passwords. Furthermore, users can now open an unlimited number of tabs, instead of the previous maximum of six. If a user is logged in with their Microsoft account on both their Windows 8.1 device and Windows Phone, their tabs in IE11 now sync automatically. Apps and Windows Phone Store App framework Apps for Windows Phone 8.1 can now be created using the same application model as Windows Store apps for Windows 8.1, based on the Windows Runtime, and the file extension for WP apps is now ".appx" (as used for Windows Store apps), instead of Windows Phone's traditional ".xap" file format. Applications built for WP8.1 can invoke semantic zoom, as well as access single sign-on with a Microsoft account. The Windows Phone Store now also updates apps automatically. The store can be manually checked for updates available for applications on a device. It also adds the option to update applications over Wi-Fi only. App developers are able to develop apps using C#/Visual Basic .NET (.NET), C++ (CX) or HTML5/JavaScript, as for Windows 8. Developers are also able to build "universal apps" for both Windows Phone 8.1 and Windows 8 that share almost all code, except for that specific to the platform, such as user interface and phone APIs (a sketch of this sharing model follows below). Any universal apps that have been installed on Windows 8.1 automatically appear in the user's "My Apps" section on Windows Phone 8.1.
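A hedged sketch of the universal app model: a single C# source file compiled into both the Windows 8.1 and Windows Phone 8.1 heads of a Visual Studio 2013 universal project, with platform-specific code isolated behind the WINDOWS_APP and WINDOWS_PHONE_APP conditional-compilation symbols that the universal templates define. The class and method names here are illustrative, not part of any shipped API.

using Windows.UI.Xaml;

//Shared code: compiled into both app packages unchanged.
public static class NavigationHelper
{
    public static void HookBackNavigation()
    {
#if WINDOWS_PHONE_APP
        //Phone-specific: handle the hardware/navigation Back button.
        Windows.Phone.UI.Input.HardwareButtons.BackPressed += (sender, e) =>
        {
            e.Handled = true; //navigate within the app instead of exiting
        };
#else
        //Windows 8.1 has no Back-button event; an on-screen control is used instead.
#endif
    }
}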
Apps built for Windows Phone 8 and Windows Phone 7 automatically run on Windows Phone 8.1, but apps built for Windows Phone 8.1 will not run on any previous version of Windows Phone. Windows Phone Store The Windows Phone Store was redesigned in Windows Phone 8.1 to become more information-dense. App collections, which were previously visible on a different page, are now fully featured in the front column of the Store. There is also no longer a distinction between Games and other apps; both now show up in the app list, although categories for apps and games (such as "most popular games" or "most popular apps") are still separated. App ratings have been designed to match those of Windows 8.1, with horizontal bars added to indicate how many 5-star, 4-star, or 3-star reviews an app has received. App screenshots no longer have their own page, but can instead be viewed at the bottom of the app description page. Furthermore, the Windows Phone Store now includes a "My Apps" section under the three-dot menu which allows users to re-install any app they have purchased previously. Windows Phone 8.1 support ended on July 11, 2017. The Windows Phone Store on Windows Phone 8.1 was shut down on December 16, 2019. New and revamped apps Battery Saver adds the ability to track battery usage and determine profiles that will lower power consumption. In addition, the "Background Tasks" page, which allows a user to stop or allow an individual app from running in the background, has been moved from the Settings menu to Battery Saver. In addition to just being able to stop a background task from running, users can now set profiles that prevent certain apps from running when the battery level is below a designated percentage. Storage Sense lets users move files and apps between their phone's internal storage and a microSD card, and incorporates features previously available in the "Settings" section that gave users the ability to delete temporary files to free up storage and uninstall applications. Wi-Fi Sense automatically signs Windows Phones in to trusted available Wi-Fi hotspots. It also helps users share their own Wi-Fi credentials with friends and contacts, but without the security compromise; i.e., it only shares the Wi-Fi connection, without telling the friends the password. The Calendar app now has a week view with current weather, similar to the features in the Microsoft Outlook calendar available to desktop users. The calendar further adds support for syncing Google Calendars with the phone. Maps has been overhauled with support for aerial view, 3D mapping and a dynamic compass. Local Scout, which has been removed from Windows Phone 8.1 in the United States due to the implementation of Cortana, has been moved to Maps. The map also shows nearby Wi-Fi hotspots, if any are available, in the user's location. Calling and Skype The Dialer app adds a "Speed Dial" page, and calls from a single caller in Call History are now grouped. Clicking on the group reveals individual call details such as the time and date each call was made. A button has been added next to each caller which allows unidentified callers to be added to a user's contact list or brings up information on an existing contact. Users can now automatically upgrade existing phone calls to Skype video calls from within the phone call UI, which has also been revamped with larger buttons. In addition to a large photo of the contact, text with the user's name and phone number now appears at the top of the screen instead of directly above the dialer.
Skype calls can also be directly initiated from Cortana. Multimedia Xbox Music and Xbox Video provide streaming services for movies, music, and TV shows, and are separate apps, as opposed to being joined together in previous versions. Notably, Xbox Video now has built-in support for video streaming. In addition to separating its music and video streaming services, 8.1 also adds support for separate volume controls, audio and video transcoding, hardware acceleration, stereoscopic 3D, and the ability for apps to capture and record video independently of the operating system's built-in video recorder. Furthermore, built-in support for streaming through DLNA to monitors and television screens, referred to by Microsoft as PlayTo, is also included, as well as the ability to mirror the phone's display to a separate screen. Media editing tools have also been refined: apps for slow-motion video capture, video effects, and audio effects have been added. Microsoft currently provides updates twice a month to both of these apps. In August 2014, A2DP and AVRCP support was also added. The stock camera app has been updated with a more minimalist design similar to that of the camera app on Windows 8.1. Additionally, users can now save high-resolution photos directly to OneDrive, instead of only having the option to upload the 5MP version of the image to the cloud. Multitasking Building on improvements made in the third update to its predecessor, Windows Phone 8.1 adds support for closing apps by swiping down on them in the multitasking view (invoked by a long press on the "back" button), which is similar to how multitasking operates on Windows 8 and iOS. Pressing the back button now suspends an app in the multitasking view instead of closing it. Live tiles A third column of live tiles, which was previously available only to Windows Phones with 1080p and select phones with 720p screens, is now an option for all Windows Phone 8.1 devices regardless of screen size. Microsoft has also added the ability for users to skin live tiles with a background image (a sketch of the shared tile-update API appears at the end of this section). With the inclusion of Update 1, WP8.1 now lets users drag app tiles on top of each other to create folders of apps on the Start Screen. Each individual app within the folder can still appear as a Live Tile, and opening the folder simply expands it on the Start Screen so the user can rearrange and open apps. Social The "Me" hub in Windows Phone 8.1 has been transformed from a single hub for updating and maintaining all social media accounts into a viewer that allows users to view news feeds from social networks. When users click on a Facebook post, for example, they are instantly redirected to the Facebook app, instead of being allowed to like or comment on that post in the "Me" hub itself, a feature available in previous versions of Windows Phone. The Me hub's notification center for social networks has also been removed, as this feature is now integrated into Windows Phone 8.1's Action Center. Supported social networks in the "Me" hub include Facebook, Foursquare, LinkedIn, and Twitter, which has also been fully integrated into the Contacts Hub. "Threads", which allowed users to seamlessly switch between different chat services, have also been removed from the Messaging app, which is now solely for text messages. Other changes to the messaging app include the ability to select multiple text messages for forwarding or deletion.
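Returning to the live-tile support described above, the sketch below shows how a Windows Runtime app updates its primary tile using the WinRT notification APIs that Windows Phone 8.1 shares with Windows 8.1. The specific template name is an assumption, since the catalog of supported templates differs slightly between the two platforms.

using Windows.Data.Xml.Dom;
using Windows.UI.Notifications;

static class TileDemo
{
    static void UpdateTile(string message)
    {
        //Fetch the XML skeleton of a simple text template and fill in its text node.
        XmlDocument xml = TileUpdateManager.GetTemplateContent(TileTemplateType.TileSquare150x150Text04);
        xml.GetElementsByTagName("text")[0].InnerText = message;

        //Send the notification to the app's primary tile.
        TileUpdateManager.CreateTileUpdaterForApplication().Update(new TileNotification(xml));
    }
}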
Lockscreen

Windows Phone 8.1 allows OEMs and individual apps to provide custom lock screen themes, skinning the font and orientation of the time, date, and notification text.

Notifications and settings

A new notification center known as "Action Center" has been added, which also allows simple settings such as volume to be changed. Four customisable boxes at the top of the screen let the user, for example, change wireless networks, turn Bluetooth and Airplane Mode on or off, and access "Driving Mode"; beneath these boxes appear recent notifications such as text messages and social updates. Apps can also send users location-specific notifications through a new geofencing API (a code sketch appears after the "File system" section below).

Keyboard

Microsoft has added a Word Flow keyboard in Windows Phone 8.1 that, similarly to the Swype keyboard available on Android devices, allows users to type by swiping through letters. As the user swipes, the keyboard automatically inserts a space before the next word. The keyboard was touted for its speed and accuracy, and brought fame to Microsoft's research division when fifteen-year-old Lakeside School student Gaurav Sharma, using a Nokia Lumia 520 running Windows Phone 8.1 with the Word Flow keyboard, broke the Guinness World Record for the world's fastest typing on a mobile phone, previously held by a Samsung Galaxy S4 user, by eight seconds. The record was short-lived: it was beaten a month later by Marcel Fernandes, who finished a quarter of a second faster using the Fleksy keyboard, a competing keyboard available on iOS and Android. However, as Fleksy relies on predictive text algorithms rather than swiping gestures, Word Flow arguably remains the world's fastest "swipe" keyboard on a mobile phone.

File system

Windows Phone 8.1 allows apps to view all files stored on a device and to move, copy, paste, and share the contents of the device's internal file system, as sketched below. As a result of this change, multiple file explorer apps have been released on the Windows Phone Store since the operating system's debut, and Microsoft released its own file explorer app on May 30, 2014. In addition to these changes, SkyDrive has been completely rebranded as OneDrive across the operating system, following Microsoft's settlement of a dispute over the "Sky" trademark with BSkyB. Users are also presented with multiple options when a Windows Phone 8.1 device is connected to a computer via USB.
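To illustrate the file access described above: third-party file managers on Windows Phone 8.1 work through the Windows Runtime storage classes rather than raw file paths. The following C++/CX sketch is not taken from any particular app; it assumes an app that has declared the Pictures Library capability in its manifest, and the function name and choice of destination folder are illustrative.

#include <ppltasks.h>   // concurrency::create_task

using namespace concurrency;
using namespace Windows::Foundation::Collections;
using namespace Windows::Storage;

// Illustrative helper (not a real API): copy the first photo found in the
// Pictures library into the app's own local folder.
void CopyFirstPicture()
{
    create_task(KnownFolders::PicturesLibrary->GetFilesAsync())
        .then([](IVectorView<StorageFile^>^ files)
    {
        if (files->Size == 0)
            return; // nothing to copy

        StorageFile^ file = files->GetAt(0);

        // GenerateUniqueName avoids overwriting an existing copy.
        create_task(file->CopyAsync(ApplicationData::Current->LocalFolder,
                                    file->Name,
                                    NameCollisionOption::GenerateUniqueName));
    });
}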
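The geofencing API mentioned under "Notifications and settings" lives in the Windows Runtime namespace Windows.Devices.Geolocation.Geofencing. This is a minimal sketch, assuming placeholder coordinates, radius, and fence id; a complete app would also register a background task with a LocationTrigger so that crossing the fence can surface a notification.

using namespace Windows::Devices::Geolocation;
using namespace Windows::Devices::Geolocation::Geofencing;

// Illustrative helper (not a real API): register a 100-metre circular
// geofence around a placeholder position.
void RegisterSampleGeofence()
{
    BasicGeoposition position;
    position.Latitude  = 47.6062;    // placeholder coordinates
    position.Longitude = -122.3321;
    position.Altitude  = 0.0;

    Geocircle^ circle = ref new Geocircle(position, 100.0); // radius in metres
    Geofence^ fence   = ref new Geofence(L"sample-fence", circle);

    // The system then reports Entered/Exited state changes for this fence,
    // which a background task can turn into a toast notification.
    GeofenceMonitor::Current->Geofences->Append(fence);
}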
Enterprise improvements

Windows Phone 8.1 adds support for VPN and Bluetooth 4.0 LE. With the release of Update 1, sending and receiving data through a VPN over a Wi-Fi connection is supported, as is the Bluetooth PAN (Personal Area Network) 1.0 standard. Apps Corner is a kiosk mode that allows usage to be restricted to a select group of applications and lets the phone boot straight into a specific app.

Hardware

Devices

Windows Phone 8.1 devices were manufactured by Microsoft Mobile (formerly Nokia) and its hardware partners, including HTC, Gionee, JSR, Karbonn, Micromax, Samsung, Alcatel, Lava (under both the Lava and Xolo brands), Cherry Mobile, and Blu. Additionally, Gionee, JSR, LG, Lenovo, Longcheer, and ZTE registered as Windows Phone hardware partners but did not release Windows Phone products. Sony (under the Xperia or Vaio brand) also stated its intention to produce Windows Phone devices, but this never materialized. During BUILD 2014, Microsoft announced two additional hardware partners, Micromax and Prestigio.

Hardware requirements

Starting with Windows Phone 8.1, several hardware buttons that were previously required on Windows Phone are no longer mandatory for device manufacturers, a move made to allow OEMs to develop devices that can run both Windows Phone and Android; the HTC One (M8) for Windows is an example of such a device. Windows Phone now supports on-screen buttons that OEMs can use to replace the capacitive "back", "Windows", and "search" buttons that had been required for devices running the OS since 2010. The new on-screen buttons can be hidden by swiping them to the side of the screen. Windows Phone device manufacturers are also no longer required to include a physical camera button on the side of the phone.

Version history

Reception

Tom Warren of The Verge said that the Windows Phone OS is clearly being left behind by its competitors. Although the Windows Phone Store has many apps, there are still fewer than in the Android and iOS stores, and the Windows Phone equivalents of popular apps are often severely lacking. However, he commended Windows Phone for its ability to run smoothly across a wide variety of hardware and for its thoughtful addition of features.

See also

References

External links

Official website (Archive)
https://support.microsoft.com/en-us/windows/windows-phone-8-1-end-of-support-faq-7f1ef0aa-0aaf-0747-3724-5c44456778a3

Windows Phone 8.1 Smartphones Cloud clients Mobile operating systems ARM operating systems C (programming language) software C++ software